If you’re looking for the ultimate in computational density, consider the HP
C7000 Platinum Blade Server Chassis. The C7000 packs an enormous amount of power
into just 10U of rack space, with modular components that can be almost
instantly changed out.
Inside the blade chassis, HP has inserted a mezzanine backplane that moves 7.1Tbps in aggregate. In turn, HP's FlexFabric connects to backplane components that fan out data in connections of up to 40Gbps per channel, and chassis can be aggregated as control units of four, conveniently the number of 10U chassis that will fit in a 42U rack, provided you've pleased the power company and your data center has the correct load-bearing floors.
At $122,000, the unit we tested was a monster, sporting two blades plus the FlexFabric connectivity gear.
One point to keep in mind: once you go the blade chassis route, you're locked into HP as your vendor. We could find no one else who sells products for this chassis, so a purchase means a marriage with HP. Of course, the same applies to nearly all blade system designs. HP offers a three-year, across-the-board warranty once you say the vows.
Testing HP's C7000 blade chassis was an exercise both in server geometry and in hunting down obscure configuration data. The base software management options for configuring the beast that is the C7000 chassis demand enormous amounts of sophistication.
Rather than use the default options, we recommend buying the OneView 1.1 management app. In fact, beg for it if you're managing several HP blade server chassis.
We like the C7000 chassis, and especially the BL660 Gen8 blades with their considerable computational power, but it's the back side of the chassis that packs the most punch, thanks to HP's flexible Virtual Connect option.
Network operations center personnel will enjoy the rapid deployment of the hardware, but in our opinion a few things were missing when we compared the standard and optional administration software packages for the chassis. What's there works, and works well. What's missing is mildly frustrating.
What We Tested
We tested the C7000 "Platinum Chassis," which, in and of itself, is comparatively inexpensive. The chassis is powered via 208-240V, 30A+ connections and holds six power supplies; HP sent a Power Distribution Unit (PDU), and we recommend using one for fast power supply change-out should something go awry.
Two full-height HP BL660 Gen8 blades were fitted to it. The basic blades are not quite $28,000 each (memory not included). Two HP Virtual Connect FlexFabric 20/40Gb F8 modules were installed, each nearly $23,000. The chassis accepts eight full-height blades, 16 half-height blades, or a combination of the two.
The blades and the Virtual Connect components together, salted with a bit of memory (256GB), pushed the price into six figures. Admittedly, what we received can also serve as the administrative core for up to four chassis in total, so the cost amortizes over subsequent chassis and blades. The density can be huge. The chassis fan count, 10 in all, was also huge. This density needs to breathe.
The BL660 Gen8 blades came with Intel E5-4650 CPUs (four CPUs x eight cores, 32 cores total), 128GB of memory (the minimum; 512GB is possible), and two drives each. We found we could easily boot from SAN or network resources. HP's VC FlexFabric 20/40 F8 module was installed in the rear of the C7000 chassis, along with HP FlexFabric 20Gb 2-port 630FLB Series adapters.
As mentioned, the chassis and the components that fill the C7000 are joined through a mezzanine backplane, over which blades communicate with each other or with the network communications options installed in the rear of the chassis. The chassis monitors the front plane and backplane via external software that talks to internal chassis firmware, and it can be controlled directly through a front-panel color LCD control panel or a rear-mounted display.
Provisioning of the chassis components (blades, switches, and their configuration) is done remotely. There are no USB jacks on the front of the chassis, and the blades have no jacks of their own. There is one VGA port on the rear, and one USB jack that can mirror the front-panel display. We jacked into the back of the chassis with our crash cart and discovered that the crash-cart KVM version of the chassis firmware-driven software doesn't do much more than the front-panel display. But you'll need one or the other to set the chassis IP address.
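Once the chassis IP address is set, basic inventory and health data can be pulled over the network rather than from the panel. Here's a minimal sketch in Python, assuming the Onboard Administrator exposes its well-known XML status endpoint (the same one HP's discovery tools poll); the address below is hypothetical, and element names vary by firmware revision:

    # Minimal sketch: poll the C7000 Onboard Administrator's XML status endpoint.
    # Assumes the OA serves /xmldata?item=all (used by HP's discovery tooling);
    # the chassis address below is hypothetical.
    import requests
    import xml.etree.ElementTree as ET

    OA_ADDRESS = "10.0.0.50"  # hypothetical Onboard Administrator IP

    resp = requests.get(f"https://{OA_ADDRESS}/xmldata?item=all",
                        verify=False,  # the OA ships with a self-signed certificate
                        timeout=10)
    resp.raise_for_status()

    # Element names are illustrative; check the XML your firmware actually returns.
    root = ET.fromstring(resp.text)
    for blade in root.iter("BLADE"):
        name = blade.findtext("NAME", default="?")
        power = blade.findtext("POWER/POWERSTATE", default="?")
        print(f"Blade {name}: power={power}")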
Virtual Connect Manager vs OneView
The C7000 is useless without control software. There are two basic choices. HP's Virtual Connect Manager is included in the cost of the chassis. Also included with all Gen8 servers is HP's Integrated Lights-Out (iLO) management.
HP’s OneView might be a better option. It’s a broader management package for Gen8 servers, although it’s not inexpensive.
We recommend that those deploying the C7000 use OneView rather than Virtual Connect Manager, for several reasons: Virtual Connect Manager is archaic; it requires studious prerequisite reading just to install onto a highly configured Windows 2008 R2 server; it needs 6GB of memory and is therefore notebook-unfriendly; and it requires much architectural forethought to deploy as an effective control plane for provisioning and managing the options of the C7000 chassis and its components.
By contrast, OneView 1.1 is a virtual machine delivered as an appliance for VMware or Hyper-V; it installs rapidly and, after a fast tutorial, makes the chassis almost instantly manageable. It's not inexpensive. The trade-off is in installation hours spent, and in manageability. OneView can discover much of the infrastructure, and when supplied with the chassis passwords, it connects and logically associates components quickly and with surprisingly little fuss.
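OneView's other advantage for large deployments is that its operations are driven by a REST API, so chassis inventory can be scripted. Here's a sketch of the login-session flow, with a hypothetical appliance address and credentials (adjust the X-API-Version header to match your appliance release):

    # Minimal sketch: authenticate to a OneView 1.x appliance and list the
    # enclosures it manages. Appliance address and credentials are hypothetical.
    import requests

    APPLIANCE = "https://oneview.example.net"
    HEADERS = {"X-API-Version": "120", "Content-Type": "application/json"}

    # POST /rest/login-sessions returns a token used on subsequent requests.
    login = requests.post(f"{APPLIANCE}/rest/login-sessions",
                          json={"userName": "administrator", "password": "secret"},
                          headers=HEADERS, verify=False)
    login.raise_for_status()
    HEADERS["Auth"] = login.json()["sessionID"]

    # GET /rest/enclosures returns a paged collection of managed chassis.
    enclosures = requests.get(f"{APPLIANCE}/rest/enclosures",
                              headers=HEADERS, verify=False).json()
    for enc in enclosures.get("members", []):
        print(enc["name"], enc["state"])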
Both applications can connect to systems management applications such as Microsoft's System Center, but we did not test these integrations.
To summarize, the default package can be used, but HP recommended OneView, and we concur. That HP doesn’t include it with a substantial purchase of blades frustrates us. Any data center deploying many chassis can’t live without it.
Blades and FlexFabric
The twin BL660 Gen8 blades we were sent had 32 cores across four Xeon CPUs. We've seen this combination before; it's fast and solid. With eight of these installed across the chassis, 256 cores are possible. If a rack supports four full chassis, that's 1,024 cores per 42U rack, yielding huge density. Old-timers will remember when there was one 32-bit CPU per tower server.
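The arithmetic is worth spelling out, since it drives both the power and the cooling story. Using this review's figures:

    # Back-of-the-envelope density check, using this review's figures.
    cores_per_blade = 4 * 8        # four E5-4650 CPUs x eight cores each
    blades_per_chassis = 8         # full-height BL660 Gen8 blades
    chassis_per_rack = 42 // 10    # 10U chassis in a 42U rack -> 4

    cores_per_chassis = cores_per_blade * blades_per_chassis   # 256
    cores_per_rack = cores_per_chassis * chassis_per_rack      # 1,024
    print(f"{cores_per_chassis} cores/chassis, {cores_per_rack} cores/rack")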
FlexFabric's connectivity options are numerous, and the fabric is controlled via configurations set by Virtual Connect Manager or OneView. The HP Virtual Connect FlexFabric 20/40 F8 module we used replaces the internal switches of earlier HP blade chassis. It supports 16 downlinks (10Gbps or 20Gbps) to the chassis midplane bus, two 20Gbps cross-connect links, four 40Gbps uplinks, and eight FlexPort uplinks, plus a link to the Onboard Administrator module (the chassis firmware app). The FlexPorts can be either Fibre Channel (2/4/8Gbps) or Ethernet (1/10Gbps).
Each blade can have a number of logical network interface card (NIC) connections depending on the blade type: typically two logical NIC connections for a half-height blade and three for a full-height blade. These attach to the mezzanine plane, and it's here that IP traffic can be separated internally from iSCSI or FCoE disk traffic, or, in another design alternative, one tenant's traffic from another's, or perhaps a Content Delivery Network (CDN) from the Hadoopers.
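Virtual Connect Manager or OneView does this carving for you; there is no file to edit by hand. But a hypothetical model makes the idea concrete: each 20Gb physical port is split into logical FlexNICs, each pinned to a network with guaranteed and maximum bandwidth. The names and numbers below are illustrative, not HP's actual profile schema:

    # Hypothetical model of FlexNIC partitioning on one blade port. This is
    # NOT HP's configuration format; it illustrates the kind of traffic
    # separation a Virtual Connect server profile expresses.
    blade_port_profile = {
        "physical_port": "LOM1:1",   # one 20Gb FlexFabric adapter port
        "flexnics": [
            {"name": "prod-ip",   "network": "VLAN 100", "min_gbps": 2, "max_gbps": 10},
            {"name": "vmotion",   "network": "VLAN 200", "min_gbps": 2, "max_gbps": 8},
            {"name": "iscsi-san", "network": "VLAN 300", "min_gbps": 4, "max_gbps": 20},
        ],
    }

    # The guaranteed floors must fit within the physical port's capacity.
    assert sum(n["min_gbps"] for n in blade_port_profile["flexnics"]) <= 20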
Amusingly, the extreme data rates of the FlexFabric infrastructure also bring cable-length concerns: at 40Gbps, proximity becomes an issue when establishing boundaries between chassis, certainly with copper cables but even with fiber.
The density also means that the FlexFabric options you choose replace the traditional core routers and switches that once did this work among what would have been a huge stack of discrete servers, or aisles of racked servers and their network switching demarc boundaries.
FlexFabric's options and construction demand imaginative, interdisciplinary design, and this is another reason why OneView trumps Virtual Connect Manager for fabric control: OneView integrates these elements at the mezzanine/midplane level more understandably.
We appreciated the multiple views of OneView's fabric tracking, and how it relates each element of the infrastructure to the others. It's not a finished product: it has difficulty showing sophisticated logical and protocol relationships among objects, which would help systems engineers understand flows, and help network engineers more simply understand the hardware and aggregation relationships.
Whether we wanted it or not, we felt as though we should get a degree from HP after going through the exercise of understanding the relationships that we were constructing.
Testing
Our initial installation went smoothly. The front-panel LCD provides a lot of information about the chassis and about errors that the firmware picks up, such as unplugged cables and cooling problems. It's a fast way to get localized information on low-level problems, but as a user interface it's less sophisticated than even a dull smartphone. The KVM jack supplied us with a web page, but not much more control capability. Real control comes from Virtual Connect Manager, OneView, or Insight Manager.
We have nothing that can assault this chassis at full bore. It is, in aggregate, an enormous block of computational and I/O capability. Viewed discretely, however, the sum of its parts is powerful: blades whose Gen8 cousins we've tested, coupled to a huge L2/L3 switch backplane.
The BL660 Gen8 blades digested our VMware licenses with glee, at speeds comparable to the HP DL580 Gen8 we recently tested. The flow of data through the blades with VMware's vNICs was easily controlled, first via defaults in OneView, then in VMware vCenter with ESXi 5.5. Where we once had difficulty with VMware drivers finding HP hardware correctly, we had no issues this round.
Summary
The C7000 Platinum Chassis, coupled to the HP-supplied BL660 Gen8 blades, delivers huge computational density. The FlexFabric approach localizes all systems I/O to a midplane, then logically connects multiple blades and chassis together through the fabric.
This architecture replaces discrete or 1U servers, external switch and router cabinets, separate fabric to SAN data stores, and all of the logic needed to glue these pieces together.
Going the blade chassis route, one becomes entirely captive to HP for this infrastructure, but it's a flexible infrastructure that plays well with VMware and Hyper-V (and likely others). There are necessary options that aren't included in the price; the most glaring example is OneView, which costs $400 to $799 per server (as much as $50 per core).
A fully loaded 256-core chassis approaches $300,000 (roughly $1,170 per core), including all the I/O fabric communications needed to connect to a communications demarc, but not counting OneView or other licenses, hypervisors, or operating systems.
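Those numbers check out against the component prices quoted earlier, with the caveat that list prices move, and memory, drives, and licenses fatten the total:

    # Rough cost-per-core estimate from this review's list prices.
    blade_price = 28_000        # BL660 Gen8 blade, memory not included
    vc_module_price = 23_000    # Virtual Connect FlexFabric 20/40 F8 module
    blades, vc_modules = 8, 2

    subtotal = blades * blade_price + vc_modules * vc_module_price
    cores = blades * 32
    print(f"~${subtotal:,} before chassis and memory; ~${subtotal / cores:,.0f}/core")
    # -> ~$270,000 before chassis and memory; ~$1,055/core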
How We Tested
We installed the C7000 chassis into our rack at Expedient-Indianapolis (formerly nFrame), then connected its power. After the lights dimmed and the grid twitched, we connected the FlexFabric ports to our Extreme Networks Summit switch core internal routers. The password to the C7000 chassis is hidden inside the chassis itself; we didn't know this. Remember to retrieve this password, because nothing really works until you do.
We brought up a VMware ".ova" appliance version of HP's OneView 1.1 and, with help from HP, put the chassis online, configured it, and made it part of its own group; up to four chassis can be aggregated as a unit.
As an exercise, we ran VMware ESXi on one of the supplied blades and Microsoft Hyper-V on Windows Server 2012 R2 on the other, and checked whether each hypervisor's discovery process found the items we supplied via changes in the FlexFabric configuration, including IP resources as well as the internal Dell Compellent SAN fabric we use. We often had to bury ourselves in the technical docs, which are complete but offer few real-world examples, to connect items, but we met no obstacles in any of the configuration scenarios we tried.
We thank the personnel of Expedient for their tenacious support of remote-hands work needed to complete test cycles.