10GbE Intel X540-T1 underperforming


#1 strikermed
I edited my original post with the most up-to-date information below:

Hello,

 

I am direct-connecting a Windows Server 2012 R2 Essentials machine to a Windows 10 PC, and I'm having trouble with transfer speeds. I was capped at about 200MB/s (roughly 1.6 gigabits). After enabling jumbo frames and all the other options that Linus and the Cinevate blog cover, I'm only getting 300MB/s (roughly 2.4 gigabits). That's poor performance, and I've spent two days trying to find the issue. If anyone has any input, I would truly appreciate it.
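For reference on the units, since MB/s-versus-gigabit conversions trip people up: file copies report megabytes per second, link speeds are in gigabits per second, so multiply by 8. A quick Python sanity check:

```python
# MB/s (file copy) -> Gbit/s (link utilization): 8 bits per byte.
def mb_s_to_gbit_s(mb_s: float) -> float:
    return mb_s * 8 / 1000

print(mb_s_to_gbit_s(200))   # 1.6  -> the initial cap
print(mb_s_to_gbit_s(300))   # 2.4  -> after tweaks
print(mb_s_to_gbit_s(1250))  # 10.0 -> the 10GbE line rate
```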

 

I'm using two Intel X540-T1 NICs. To give you an idea of my network configuration: the server NIC is 192.168.2.1 and the Windows 10 PC is 192.168.2.2. My 1GbE network is on the default gateway, 192.168.1.1. I'm fairly sure this part is configured correctly, as I can transfer files back and forth between RAM drives and I can get a reading with ntttcp. ntttcp gave me a throughput of 1128MB/s, and with jumbo frames on it read 1048MB/s. So the raw speed is there; I'm just wondering where the bottleneck is, or whether I need to do some more configuration.
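If anyone wants to reproduce the raw-TCP check without ntttcp, here is a minimal Python sketch using the same addressing as above (run the receiver first with no arguments, then pass the receiver's IP to the sender). Note that a single Python TCP stream can itself bottleneck well below 10Gb/s, so treat the number as a floor, not a benchmark:

```python
# Rough raw-TCP throughput check (a bare-bones ntttcp/iperf stand-in).
# Assumes Python 3.8+ on both machines.
import socket, sys, time

PORT = 5201        # arbitrary test port
CHUNK = 1 << 20    # 1 MiB per send/recv
SECONDS = 10       # how long the sender transmits

def serve() -> None:
    """Run first on the receiving box, e.g. the server (192.168.2.1)."""
    with socket.create_server(("0.0.0.0", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            total, start = 0, time.perf_counter()
            while data := conn.recv(CHUNK):
                total += len(data)
            secs = time.perf_counter() - start
            print(f"received {total / secs / 1e6:.0f} MB/s from {addr[0]}")

def send(host: str) -> None:
    """Then run on the sending box: python tptest.py 192.168.2.1"""
    payload = bytes(CHUNK)
    with socket.create_connection((host, PORT)) as conn:
        deadline = time.perf_counter() + SECONDS
        while time.perf_counter() < deadline:
            conn.sendall(payload)

if __name__ == "__main__":
    serve() if len(sys.argv) == 1 else send(sys.argv[1])
```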

 

Thanks in advance.

Windows Server 2012 R2 Essentials:
AMD Phenom X4 955
Asus M5A99FX Pro R2.0
32GB RAM
Intel X540-T1 in the main PCI-Express 16x slot (blue; removed the video card, using remote access instead)
LSI MegaRAID 9266-8i in the second PCI-Express 16x slot (blue)
20GB of RAM configured as a RAM drive to test transfers
Note: I see 80% CPU utilization during 300MB/s transfers

Windows 10 PC:
Intel Core i7-3930K (six cores)
Asus Sabertooth X79 motherboard
32GB RAM
GTX 580 in the first PCI-Express 16x slot
Intel X540-T1 in the second PCI-Express 16x slot
20GB of RAM configured as a RAM drive to test transfers
NOTE: I see 4% CPU utilization during 300MB/s transfers

If anyone can spot my bottleneck or has any info on configuration, I'm all ears, as I have tried and tried with this setup. I'm hoping to use my server as mass storage to work off of for video content. I would really like to saturate the RAID I have in mind for the future, but first I need to prove that I can get at least 750MB/s or even 1GB/s speeds.


Edited by strikermed, 15 December 2015 - 09:38 AM.



#2 SpywareDr

How are the two computers connected?



#3 strikermed

They are connected via a Cat6 cable. I'm only getting roughly 2.4-gigabit transfers: 300MB/s max.



#4 strikermed

I hit 500MB/s, but only for half of a transfer. I copied 3 large video files via RAM disks from the PC to the server: more than half the copy ran at 300MB/s, then it shot up to 500MB/s on the final file. When I copy from the server to my PC I get 300MB/s throughout with the same 3 files.



#5 SpywareDr

Feasibility Study on High Speed Transmission over UTP Cables > IEEE Meeting July 03

Page 6

  • There is less than 10Gbps capacity on nominal 100m Cat5e and Cat6 channels
  • Capacity calculations with measured data indicate that 10 Gigabit data transmission over 100m Cat 5e or Cat 6 UTP cable plant as specified in ISO 11801 is not feasible
  • Alien Cross Talk is a fundamental limit.
    - There has been no contribution that shows Alien Cross Talk can be canceled
  • Elimination of cable bundling practices will be required for UTP cables to mitigate Alien Cross Talk
  • Unrestricted installation can only be accomplished with a new higher quality cable

Page 17
Summary

  • 10Gbase-T on 100m Cat5e and Cat6 is not feasible
  • 2.5Gbps is feasible on 100m Cat5e per ISO 11801 spec
  • 5Gbps is feasible on 100m Cat6 with specified ANEXT

 

http://www.experts-e...t-Ethernet.html

https://www.google.c...q=cat6a vs cat6

 

 



#6 strikermed
Now that I have a Cat6a cable plugged in, I haven't seen much of any change.

I've made a few tweaks, and now it's an unreliable 50/50 chance that I get 500MB/s to my Windows 10 client.

Here is an update and my overall status:

Hello, I'm having trouble getting sustainable file-transfer performance from RAM disk to RAM disk.

My setup is two Intel X540-T1s direct-connected via a Cat6a cable 3 feet in length. My systems are as follows:
Windows 10:
Intel Core i7-3930K (six cores)
Asus Sabertooth X79
32GB DDR3 memory
Samsung 850 Pro 500GB
GTX 580
Intel X540-T1 (in the second 16x slot)

Server 2012 R2 Essentials:
AMD Phenom X4 955 (quad core)
Asus M5A99FX Pro
32GB DDR3 memory
SanDisk Pro 240GB
Intel X540-T1 (in the first 16x slot)

I've done both iperf and ntttcp testing to verify that I'm capable of getting over 1100MB/s.

My issue is actual file transfers. I enabled jumbo frames at their max, matched between the two machines; I've maxed all the performance options; and I've experimented with queues on each card, finding that the max of 16 queues actually gave the best performance. I even changed the queues' core-count setting on the server, which I found made no difference in performance.
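For anyone retracing these steps: these settings live in the NIC's advanced driver properties. A quick way to dump them all, using the stock NetAdapter PowerShell cmdlets called from Python ("Ethernet 2" is a placeholder adapter name; run from an elevated prompt):

```python
# Dump the 10GbE adapter's advanced driver properties (Jumbo Packet,
# RSS queues, buffers, ...). "Ethernet 2" is a placeholder adapter name.
import subprocess

out = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     'Get-NetAdapterAdvancedProperty -Name "Ethernet 2" | '
     "Format-Table DisplayName, DisplayValue -AutoSize"],
    capture_output=True, text=True, check=True)
print(out.stdout)
```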

I started out getting 300MB/s transfers back and forth from the server, which was a letdown for a 10Gb dream. Somewhere in adjusting queues I got 500MB/s from the Windows 10 client to the server, with a max of 300MB/s from the server to the Windows 10 client. Looking at CPU utilization, when I wrote to the server at 500MB/s, server utilization was in the 90% range, with the "system" process hogging almost 80% of the CPU; on my Windows 10 client I saw less than 10% utilization. When I transferred at around 300MB/s to the Windows client, I saw half the CPU utilization on the server and the same under-10% figure on the Windows 10 client.

Now here's the exciting part. I went into my Windows client and changed the auto-detect option under link speed to 10Gb, and I saw 500MB/s both to the server and to the Windows 10 client. I went to do the same on the server, but the option is greyed out, not letting me adjust anything under link speed. Also, I'm not seeing consistent results: whenever I hit 500MB/s transfers, CPU usage on the server is in the 90% range and, again, less than 20% on the Windows 10 side. The thing is, I don't always get those 500MB/s speeds.
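One thing worth verifying whenever the link-speed setting is in doubt is what the two NICs actually negotiated. A small sketch, again via the standard cmdlets (adapter names will differ):

```python
# Show what each NIC actually negotiated -- look for "10 Gbps" on the X540.
import subprocess

out = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-NetAdapter | Format-Table Name, Status, LinkSpeed -AutoSize"],
    capture_output=True, text=True, check=True)
print(out.stdout)
```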

I can do back-to-back copies with about 15 seconds in between: one from the client to the server, where I'll get 500MB/s, then again from the server to the client at 500MB/s. I'll wait 15-30 seconds and do the same exact procedure. Most times I'll get 500MB/s, with the occasional 300MB/s, going to the server, but I'll consistently get 280-300MB/s going to the client. It's 50/50 whether I get 500MB/s or not.

Not only that, I'm still not hitting the potential speed! In fact, I'm getting half.

Here are my questions:

Is anyone else experiencing a similar issue?

Do you have similar hardware?

Will more cores (say, an i7 3930) be better suited for the server, which is only going to act as a NAS?

Does anyone recognize any bottlenecks?
(Forgot to mention that I never ran low on RAM even with the RAM drives)

Are there any driver tweaks out there that anyone can suggest?

I'm using the most current drivers for the NICs; is there a better one out there that performs better?

Any other advice would be greatly appreciated. I'm new to networking and am using this in a home system for learning purposes, hence the simplistic setup, yet complicated issues.

Thanks ahead of time!

#7 SpywareDr

http://arstechnica.c...c.php?t=1115281



#8 strikermed

That link wasn't much help, but I'll tell everyone who's interested how I found more performance with my 10GbE connection... I haven't quite hit 1GB/s, but I'm getting closer.

 

I'll start with a tidbit about Windows Server 2012 R2, Essentials or not: there are 3 settings, I believe in the permissions area, that I changed (following a set of instructions) to remove encryption and packet signing, which were eating up a LOT of CPU and making my server the bottleneck.
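The instructions themselves aren't linked here, but SMB signing and encryption on Server 2012 R2 are normally toggled through Set-SmbServerConfiguration. A hedged sketch of what that change likely looked like (run elevated on the server; keep in mind this trades security for CPU, which is only sensible on a trusted direct link):

```python
# Disable the SMB signing/encryption requirements that were eating server CPU.
# Security trade-off: only reasonable on a trusted, direct-connected link.
# Cmdlets are standard on Server 2012 R2 / Windows 8+; run elevated.
import subprocess

def ps(cmd: str) -> None:
    subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)

# On the server:
ps("Set-SmbServerConfiguration -RequireSecuritySignature $false "
   "-EnableSecuritySignature $false -EncryptData $false -Force")
# On the Windows 10 client:
ps("Set-SmbClientConfiguration -RequireSecuritySignature $false -Force")
```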

 

With encryption removed, I was seeing much, much better results.

 

From there I did driver tuning (mind you, all antivirus and firewalls are disabled; the Windows firewall on the server side is very restrictive, and it's easiest to disable it and configure it later). I started with jumbo frames and maxed them out.

 

I moved on to RSS processors (on the server side) and assigned them to the 4 logical cores I have available.

 

Next I changed the RSS queues on both the server and the PC to match their number of logical cores. On the server I set 4, because I have a quad-core AMD CPU without hyperthreading. On the PC I set 8: I have 6 cores with hyperthreading, leaving me with 12 logical cores, and unfortunately 8 is the closest I could get without going over.
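Assuming the driver exposes queue counts in powers of two, which matches the "8 is the closest I could get" remark above, the rule is: pick the largest power of two that doesn't exceed the logical core count. In Python:

```python
# Largest power-of-two queue count that fits the logical core count.
import os

def rss_queues(logical_cores: int) -> int:
    q = 1
    while q * 2 <= logical_cores:
        q *= 2
    return q

print(rss_queues(4), rss_queues(12))  # 4 (server), 8 (client, 12 threads)
print(rss_queues(os.cpu_count()))     # whatever box this runs on
```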

 

Next I worked on the performance options, the receive and transmit buffers. I maxed both of these out on both machines.
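Here is a consolidated sketch of the driver settings from the last three steps, applied through the NetAdapter cmdlets. The DisplayName strings and maximum values are driver-specific assumptions (check Get-NetAdapterAdvancedProperty output for your X540's exact names and ranges), and "Ethernet 2" is again a placeholder:

```python
# Apply the tuning steps above: jumbo frames, maxed buffers, RSS queues.
# DisplayName strings and values are driver-specific assumptions -- verify
# against Get-NetAdapterAdvancedProperty before running. Run elevated.
import subprocess

NIC = "Ethernet 2"  # placeholder adapter name

def ps(cmd: str) -> None:
    subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)

for display_name, value in [
    ("Jumbo Packet", "9014"),       # max jumbo frame size, set on both machines
    ("Receive Buffers", "4096"),    # assumed driver maximum
    ("Transmit Buffers", "16384"),  # assumed driver maximum
]:
    ps(f'Set-NetAdapterAdvancedProperty -Name "{NIC}" '
       f'-DisplayName "{display_name}" -DisplayValue "{value}"')

# RSS queues matched to logical cores: 4 on the server, 8 on the client.
ps(f'Set-NetAdapterRss -Name "{NIC}" -NumberOfReceiveQueues 8')
```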

 

After this I got RAM-disk-to-RAM-disk transfers of 500-750MB/s.

 

I'm doing some additional testing. The main test I'm running is to see whether the CPU core count in my server is the limiting factor: since it can only serve 4 queues, I think it keeps the better processor in my PC from performing better as well. I'm running those tests and I'll report back with what I find.

 

For reference, I run an Intel 3930K in my PC, and I was able to borrow the same processor and a same-chipset motherboard from work. This should give me a good comparison and eliminate either processor as a bottleneck for the other. If I had more cores I could most likely eliminate the RSS queues being set to 8 as a limiter as well, but unfortunately I can't do that. :(

 

Wish me luck!

 



#9 strikermed
It turns out the number of logical cores does affect speed with 10GbE network cards. To get 1GB/s you need more than 4: with 4 cores I got a max of 750MB/s and an average of 600MB/s, while with hex-core Intel CPUs I got transfers in excess of 1GB/s between RAM drives.





