Welcome to Technology Short Take #9, the last Technology Short Take for 2010. In this Short Take, I have a collection of links and articles about networking, servers, storage, and virtualization. Of note this time around: some great DCI links, multi-hop FCoE finally arrives (sort of), a few XenServer/XenDesktop/XenApp links, and NTFS defragmentation in the virtualized data center. Here you go—enjoy!
Networking
- Brad Hedlund has a great post discussing Nexus 7000 connectivity options for Cisco UCS. I’ll include it in this section since it focuses more on networking than on UCS itself. I haven’t had time to read the full PDF linked in Brad’s article, but the topics he discusses in the post—FabricPath networks, F1 vs. M1 linecards, and FCoE connectivity—are well worth reading. I’m confident the PDF is equally informative and useful.
- This UCS-specific post describes how northbound Ethernet frame flows work. Very useful information, especially if you are new to Cisco UCS.
- Data Center Interconnect (DCI) is a hot topic these days, considering that it’s a key component of long-distance vMotion (aka vMotion at distance). Ron Fuller (who I had the pleasure of meeting in person a few weeks ago; great guy), aka @ccie5851 on Twitter and one of the authors of NX-OS and Cisco Nexus Switching: Next-Generation Data Center Architectures (available from Amazon), wrote a series on the various DCI options, such as EoMPLS, VPLS, A-VPLS, and OTV. If you’re considering DCI—especially if you’re a non-networking person who needs to understand the impact of DCI on the networking team—this series of articles is worth reading. Part 1 is here and part 2 is here.
- And while we are discussing DCI, here’s a brief post by Ivan Pepelnjak about DCI encryption.
- This post was a bit deep for me (I’m still getting up to speed on the more advanced networking topics), but it seemed interesting nevertheless. It’s a how-to on redistributing routes between VRFs.
- Optical or twinax? That’s the question discussed by Erik Smith in this post.
- Greg Ferro also weighs in on cabling with this post on 40 Gigabit and 100 Gigabit Ethernet.
Servers
- As you probably already know, Cisco released version 1.4 of the UCS firmware. This version incorporates a number of significant new features: support for direct-connected storage, support for incorporating C-Series rack-mount servers into UCS Manager (via a Nexus 2000 series fabric extender connected to the UCS 61×0 fabric interconnects), and more. Jeremy Waldrop has a brief write-up that lists a few of his favorite new features.
- This next post might only be of interest to partners and resellers, but having been in that space before joining EMC, I fully understand the usefulness of having a list of references and case studies. In this case, it’s a list of case studies and references for Cisco UCS, courtesy of M. Sean McGee (who I hope to meet in person in St. Louis in just a couple of weeks).
Storage
- This could go in either the Networking or Storage section, but I was alerted via Twitter (can’t remember who pointed this out, sorry) that the NX-OS 5.0(2)N2(1) release includes support for VE Ports (see the “New and Changed Features” section of the release notes). VE Ports are important because they enable you to interconnect multiple Fibre Channel Forwarders (FCFs) over a lossless Ethernet fabric. In other words, they enable multi-hop FCoE. Obviously, you’d need support for VE Ports on all interconnected FCFs, which today means multi-hop FCoE only between Nexus 5000 series switches.
- I wrote a piece on configuring SAN port channels from a Nexus to an MDS a short while ago, and now here’s a post on configuring SAN port channels from a Nexus when in NPV mode. Good stuff.
- Here’s a post I found that discusses bad flash recovery and HDD replacement on an Iomega unit. It’s not exactly data center-class equipment, I know, but a great many readers and fellow geeks have this sort of equipment in their home labs, so this might be useful information.
- I think it was Nigel Poulton who mentioned this post by Hu Yoshida regarding Hitachi’s “global pool of processors” approach with the VSP. Based on what I read there, it sounds like the VSP and the VMAX share some architectural similarities in this regard. If I’m wrong about that, I’d love to hear why in the comments.
- Fellow vSpecialist Clint Kitson has a good discussion of LUN trespassing and VMware multipathing over at the Everything VMware at EMC community.
- Richard Anderson has a comparison of using VPLEX for SQL database resiliency versus SQL database mirroring.
Virtualization
- Using XenServer and need to support multicast? Look to this article for information on enabling multicast with XenServer.
- A couple of colleagues over at Intel (I worked with Brian on one of his earlier white papers) forwarded me the link to their latest Ethernet virtualization white paper, which discusses the use of 10 Gigabit Ethernet with VMware vSphere. You can find the link to the latest paper in this blog entry.
- Bhumik Patel has a good write-up on the “behind-the-scenes” technical details that went into the Cisco-Citrix design guides for XenDesktop/XenApp on Cisco UCS. Bhumik provides details on things like how many blades were used in the testing, how the blades were configured, and what sort of testing was performed.
- Thinking of carving your storage up into guest OS datastores for VMware? You might want to read this first for some additional considerations.
- I know that this has seen some traffic already, but I did want to point out Eric Sloof’s post on the Xenoss XenPack for ESXTOP. I haven’t had the opportunity to use it yet, but would certainly love to hear from anyone who has. Feel free to share your experiences in the comments.
- As is usually the case, Duncan Epping has had some great posts over the last few weeks. His post on shares set on resource pools highlights the need to adjust the shares value (and other resource constraints) based on the contents of the pool, something that many people forget to do (there’s a quick bit of shares math after this list that illustrates why). He also provides a breakdown of the various vCenter memory statistics, and discusses an issue with binding a Provider vDC directly to an ESX/ESXi host.
- PowerCLI 4.1.1 includes some improvements for VMware HA clusters, which are detailed in this VMware vSphere PowerCLI Blog entry.
- Frank Denneman has written three articles that caught my attention over the last few weeks. (All his stuff is good, by the way.) First is his two-part series on the impact of oversized virtual machines (part 1 and part 2). Some of the impacts Frank discusses include memory overhead, NUMA architectures, shares values, HA slot size (see the slot-size sketch after this list), and DRS initial placement. Apparently a part 3 is planned but hasn’t been published yet (see some of the comments in part 2). Also worth a read is Frank’s recent post on node interleaving.
- Here’s yet another tool in your toolkit to help with the transition to ESXi: a post by Gabe on setting logfile location, swap file, SNMP, and vmkcore partition in ESXi.
- Here’s another guide to creating a bootable ESXi USB stick (on Windows). Here’s my guide to doing it on Mac OS X.
- Jon Owings shares a pretty cool idea about dynamic cluster pooling. Perhaps we can get VMware to include it in the next major release of vSphere?
- Irritated that VMware disabled copy-and-paste between the VM console and the vSphere Client in vSphere 4.1? Fix it with these instructions (the VMX snippet after this list shows the settings involved).
- This white paper on configuration examples and troubleshooting for VMDirectPath was recently released by VMware. I haven’t had the chance to read it yet, but it’s on my “to read” list. I’ll just have a look at that in my copious free time…
- David Marshall has posted on VMblog.com a two-part series on how NTFS causes I/O bottlenecks on virtual machines (part 1 and part 2). It’s a great review of how NTFS works. Ultimately, the author of the posts (Robert Nolan) makes the case for NTFS defragmentation as a way to reduce the I/O load on virtualized infrastructures. While I do agree with Mr. Nolan’s findings, there are other considerations to weigh as well. What impact will defragmentation have on your storage array? For example, I believe NetApp recommends against defragmentation in conjunction with its storage arrays (I could be wrong; can anyone confirm?). So my advice is to do your homework, see how defragmentation will affect the rest of your environment, and proceed from there. (The toy model after this list shows how fragmentation multiplies I/O requests.)
- Microsoft thinks that App-V should be the most important tool in your virtualization tool belt. Do you agree or disagree?
- William Lam has instructions for identifying the origin of a vSphere login. This might not be something you need to do on a regular basis, but when you do, you’ll be thankful to have these instructions on hand.
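To make Duncan’s resource pool point concrete, here’s a minimal sketch in Python. All the numbers are hypothetical, and the model ignores reservations, limits, and per-VM shares; real DRS entitlement math is more involved. The point it illustrates: pool-level shares are divided among the VMs inside the pool, so a “High” pool crowded with VMs can entitle each VM to less than a “Low” pool holding only a couple of VMs.

```python
# Hypothetical numbers only; real DRS entitlement math is more involved.
def per_vm_mhz(pool_shares, vm_count, total_shares, cluster_mhz):
    """Rough per-VM CPU entitlement under contention, assuming the
    pool's entitlement is split evenly among its VMs."""
    pool_fraction = pool_shares / total_shares
    return cluster_mhz * pool_fraction / vm_count

total = 8000 + 2000  # a "High" pool (8000 shares) and a "Low" pool (2000 shares)

print(per_vm_mhz(8000, 20, total, 20000))  # High pool, 20 VMs: 800 MHz each
print(per_vm_mhz(2000, 2, total, 20000))   # Low pool, 2 VMs: 2000 MHz each
```

In this toy example, the “High” pool’s VMs end up with less than half the per-VM entitlement of the “Low” pool’s VMs, which is exactly the trap Duncan describes.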
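On Frank’s HA slot size point: admission control derives the slot size from the largest reservations in the cluster, so a single oversized VM can slash the cluster’s failover capacity. Here’s a back-of-the-envelope sketch with made-up numbers, looking at memory only and ignoring the per-VM memory overhead that the real calculation adds:

```python
def memory_slots_per_host(host_mem_gb, vm_mem_reservations_gb):
    """Memory slot size is driven by the largest reservation in the
    cluster (plus overhead, ignored here for simplicity)."""
    slot_gb = max(vm_mem_reservations_gb)
    return int(host_mem_gb // slot_gb)

reservations = [1, 1, 2, 2]  # modest memory reservations, in GB
print(memory_slots_per_host(64, reservations))  # slot = 2 GB -> 32 slots per host

reservations.append(16)  # add one oversized VM with a 16 GB reservation
print(memory_slots_per_host(64, reservations))  # slot = 16 GB -> 4 slots per host
```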
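As for the copy-and-paste fix, the linked instructions boil down to two per-VM configuration settings. This small sketch appends them to a powered-off VM’s .vmx file; the path is hypothetical, and the sketch naively appends rather than checking for existing entries. You can also add the same settings through the vSphere Client’s advanced configuration options.

```python
VMX_PATH = "/vmfs/volumes/datastore1/myvm/myvm.vmx"  # hypothetical path

# Setting these to FALSE re-enables copy-and-paste for the VM console.
settings = {
    "isolation.tools.copy.disable": "FALSE",
    "isolation.tools.paste.disable": "FALSE",
}

with open(VMX_PATH, "a") as vmx:
    for key, value in settings.items():
        vmx.write('{0} = "{1}"\n'.format(key, value))
```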
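And finally, here’s the toy model of the NTFS fragmentation effect mentioned above. The assumptions are mine, not Mr. Nolan’s: each fragment costs at least one I/O, while contiguous data can be read in large sequential transfers. Even this crude model shows how fragmentation multiplies the number of I/Os the array has to service.

```python
import math

def ios_required(file_kb, fragments, max_io_kb=1024):
    """Each fragment needs at least one I/O; large fragments still
    split into transfers of at most max_io_kb."""
    frag_kb = file_kb / fragments
    return fragments * max(1, math.ceil(frag_kb / max_io_kb))

print(ios_required(10240, 1))    # contiguous 10 MB file: 10 I/Os
print(ios_required(10240, 200))  # same file in 200 fragments: 200 I/Os
```

Multiply that twenty-fold difference across hundreds of VMs sharing an array, and the aggregate I/O load becomes significant.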
I guess it’s time to wrap up now, since I have likely overwhelmed you with a panoply of data center-related tidbits. As always, I encourage your feedback, so please feel free to speak up in the comments. Thanks for reading!
This article was originally posted on blog.scottlowe.org. Visit the site for more information on virtualization, servers, storage, and other enterprise technologies.