Filed Under (Geekspeak, Work) by Justin on 2012-01-10

As part of moving our production server environment to a colo facility and the coinciding upgrade from ESX 4.1 (fat) to ESXi 5, I get to basically rebuild my entire vSphere environment from the ground up. It’s a great opportunity, as I’ve definitely learned a lot over the past 3 years or so of using VMware on a regular basis, and I’ve been itching to change some things that I’ll hopefully go into in later posts during this process.

My task today is nailing down my network configuration. I’ve got 8 NICs total at my disposal in each of my Dell R710 servers – the four embedded Broadcom 5709 ports (2 separate dual-port controllers by design) and an additional four on an add-in Intel I340-T4. I want to make the iSCSI traffic as fast as possible and the rest of the networking as redundant as possible. I’ve never bonded ports in my vSphere config before, but I’m thinking that’s where I want to go, at least on the production network side.
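For the iSCSI piece, what I’m leaning toward is the software iSCSI initiator with port binding, so each storage path gets a dedicated NIC. Here’s a rough sketch from the ESXi 5 shell of the shape I have in mind – the vmnic numbers, the vmhba name, and the addressing are placeholders, not a final config:

# see how the host enumerates the 8 ports
esxcli network nic list

# enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# dedicated iSCSI vSwitch with two uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

# one port group per path, each pinned to a single active uplink
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic0
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-B
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic2

# a VMkernel interface on each path
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.50.12 --netmask=255.255.255.0 --type=static

# bind both VMkernel ports to the software iSCSI adapter (the vmhba number varies per host)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

With round-robin path selection layered on top, both NICs actively carry storage traffic instead of one sitting idle as a standby.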

I have some ideas already, but I’m curious – what would YOU do?


Comments

Travis Phipps on 10 January, 2012 at 11:16 am #

Here’s one tidbit: You don’t HAVE to change anything when you do an ESX 4.1 to ESXi 5 upgrade. I know that sounds crazy, but when you pop in the install CD for ESXi 5, it will ask if you want to preserve your ESX 4.1 settings. And it actually works.

Now, I’m not condoning this for your case, just wanted to put this info out there.


Justin on 10 January, 2012 at 11:20 am #

Yeah, I’m aware of the upgrade option, but I really want to take this opportunity to rebuild and make things better. I’m also going from “Installed” on my local 2x250GB RAID1 SATA to the embedded flash module for the hypervisor. Pretty sure I can’t do that with the upgrade (at least not directly).


ebuford on 10 January, 2012 at 11:54 am #

Use the Broadcom for management… the Intel is what you want to use for your iSCSI connections.


DW Hunter on 10 January, 2012 at 11:59 am #

Part of our best practices is to NEVER mix Broadcom and Intel NICs. I remember back in the ESX 3 and 3.5 days it was a “common thing” because servers didn’t have the port density options they do today. I also personally ran into significant issues with the various drivers and with vSwitch performance when going across models/brands of physical switches. Keep them separate.

So, if it were me, I would do it this way. I would take 2x Broadcom for your iSCSI (one per 2-port controller so you’re getting controller redundancy). I would take 2x Broadcom for your Data (again, one per 2-port controller for controller redundancy). I would take 2x Intel to segment off your vMotion traffic. On the “Data” vSwitch, I would trunk the ports on the switch side, using a native VLAN of ‘999’ (or some other unused and irrelevant number for untagged traffic). I would then utilize port groups for the different “types” of traffic you’ve got. Even if today you only have “server VLAN” traffic, I would still trunk/tag on the switch and use port groups on the vSwitches. This gives you the best mix (in my opinion) of speed and redundancy for your requirements. Then, later on, you’ve got 2x more physical NICs on the Intel that you can utilize for additional Data vSwitches (maybe for a DMZ, or VDI, or some other atypical future-growth situation).
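To make that concrete, here’s roughly what the Data and vMotion side could look like from the ESXi 5 command line – the vmnic numbers, port group names, and VLAN IDs are just placeholders for whatever your environment actually uses:

# Data vSwitch: one uplink from each Broadcom controller
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic3

# tagged port groups per traffic type (the physical switch ports are trunked,
# with native VLAN 999 so nothing rides untagged)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=Servers
esxcli network vswitch standard portgroup set --portgroup-name=Servers --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=Management
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=20

# vMotion vSwitch on two of the Intel ports
esxcli network vswitch standard add --vswitch-name=vSwitch3
esxcli network vswitch standard uplink add --vswitch-name=vSwitch3 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch3 --uplink-name=vmnic5
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch3 --portgroup-name=vMotion
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=192.168.60.11 --netmask=255.255.255.0 --type=static
# (then flag vmk3 for vMotion in the vSphere Client)

Note the native VLAN 999 bit lives entirely in the physical switch config, not in ESXi – the vSwitch just sees tagged frames for whatever VLANs you define port groups for. And the default “route based on originating virtual port ID” teaming works without any port channel on the switch; only go to IP hash if you actually configure a static EtherChannel on the switch side.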

My $.02 which may not even be worth that much 🙂

–DW


DW Hunter on 10 January, 2012 at 2:11 pm #

Hey Ed, have you had particular issues with iSCSI on Broadcom? I’ve heard others in the past say “iSCSI on Intel” only, but that hasn’t been our experience. Not trying to derail, just curious.

–DW


Venky on 16 January, 2012 at 1:12 am #

Recently I posted a blog entry about vDS best practices for a rack server deployment with eight 1 Gigabit adapters. Please take a look.
http://blogs.vmware.com/networking/2011/11/vds-best-practices-rack-server-deployment-with-eight-1-gigabit-adapters.html
-Venky

