Dan's Take

HPE Superdome Flex Targets In-Memory Computing

The company claims it turns huge piles of data into actionable insights.

Hewlett Packard Enterprise (HPE) just launched a new Integrity Superdome model that the company says is designed to help enterprises "process and analyze massive amounts of data and turn it into real-time business insights." In the announcement, HPE cited its acquisition of SGI as one of the forces behind the design of this new system.

The company points out that enterprises adopting "in-memory databases, such as SAP HANA, Oracle Database In-Memory and Microsoft SQL Server" need a system designed for this type of workload.

HPE says that the Superdome Flex provides a shared pool of memory over an "ultra-fast fabric" that's capable of scaling from 768GB to 48TB in a single system. The system can use two Intel Xeon processor lines: the Platinum 8xxx family and the Gold 6xxx family. It can scale from 4 to 32 processor sockets (in 4-socket increments), allowing it to address growing workloads.
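For a sense of what those ranges imply per socket, here's a minimal sketch derived only from the figures HPE quotes above (768GB to 48TB of memory, 4 to 32 sockets in 4-socket increments); the per-socket arithmetic is mine, not a vendor specification:

```python
# Derived from the quoted specs: 768 GB to 48 TB of memory, 4 to 32 sockets
# in 4-socket steps. Per-socket figures below are simple arithmetic, not
# vendor-published numbers.
min_memory_gb, max_memory_tb = 768, 48
min_sockets, max_sockets, socket_step = 4, 32, 4

configs = list(range(min_sockets, max_sockets + 1, socket_step))
print(f"Socket counts offered: {configs}")                                  # [4, 8, ..., 32]
print(f"Memory per socket, base config: {min_memory_gb / min_sockets:.0f} GB")          # 192 GB
print(f"Memory per socket, max config:  {max_memory_tb * 1024 / max_sockets:.0f} GB")   # 1536 GB (1.5 TB)
```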

The company also claims that the system provides five nines of single-system reliability (99.999 percent), which works out to no more than 26.28 seconds of downtime per month. Other suppliers, such as Stratus, point to six nines of availability for some of their systems.
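For readers who want to see where that 26.28-second figure comes from, here's a quick back-of-the-envelope check, assuming a 365-day year split into 12 equal months (the convention that yields exactly 26.28):

```python
# Five nines of availability means the system may be down 0.001% of the time.
availability = 0.99999
seconds_per_year = 365 * 24 * 60 * 60            # 31,536,000 seconds

downtime_per_year = (1 - availability) * seconds_per_year
downtime_per_month = downtime_per_year / 12

print(f"{downtime_per_year:.2f} seconds of downtime per year")    # 315.36
print(f"{downtime_per_month:.2f} seconds of downtime per month")  # 26.28
```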

Dan's Take: The System Design Balancing Act
As I pointed out long ago (in 2015), system design is always a balancing act. Vendors carefully consider how targeted workloads consume system resources, such as processing, memory, networking and storage, then design a machine offering enough resources in each category so that when the target workload is imposed on the system, one resource doesn't run out while others still have capacity to do more work.

For example, a technical application might use a great deal of processing and memory, but may not use networking and storage at an equal level. A database application, on the other hand, might use less processing but more memory and storage. A service-oriented architecture application might use a great deal of processing and networking power, but less storage and memory than the other types of workloads. An environment based on virtual machines (VMs) might require a large amount of memory for the VMs, a large number of processors, and a large amount of high-performance storage.
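To make that balancing act concrete, here's a purely illustrative sketch: the workload names echo the examples above, but every number (and the machine configuration) is invented, just to show how one resource becomes the bottleneck while others sit idle:

```python
# Invented machine capacity and invented per-unit resource demands; the point
# is the shape of the analysis, not the specific figures.
machine = {"cpu": 64, "memory_gb": 1024, "network_gbps": 40, "storage_tb": 100}

workloads = {
    "technical app": {"cpu": 0.50, "memory_gb": 8.0,  "network_gbps": 0.05, "storage_tb": 0.01},
    "database":      {"cpu": 0.25, "memory_gb": 16.0, "network_gbps": 0.10, "storage_tb": 0.50},
    "SOA app":       {"cpu": 0.40, "memory_gb": 2.0,  "network_gbps": 0.40, "storage_tb": 0.02},
    "VM farm":       {"cpu": 0.50, "memory_gb": 12.0, "network_gbps": 0.20, "storage_tb": 0.20},
}

for name, demand in workloads.items():
    # How many units of this workload fit before each resource runs out?
    headroom = {res: machine[res] / demand[res] for res in machine}
    bottleneck = min(headroom, key=headroom.get)
    print(f"{name:14s} bottleneck: {bottleneck:13s} "
          f"(~{headroom[bottleneck]:.0f} units before it's exhausted)")
```

A well-balanced design for a given workload is one where those headroom numbers come out roughly equal, so no single resource is stranded.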

The next question is what resources an in-memory database would use. Clearly, this workload would require the system to support larger-than-normal amounts of system memory, very fast processor-to-memory buses, large secondary and tertiary processor caches, and fast storage and networking interfaces so that huge amounts of data can be quickly shuffled in and out of the system during processing.

While processing power is important in this type of application, the size and performance of the system cache, memory capacity and performance, storage capacity and performance, and, of course, network capacity and performance might rank higher on the system designer's list of priorities.

Dell, Lenovo and a few other system suppliers also offer configurations that could be tasked with supporting an in-memory database workload. Will HPE's new Integrity Superdome Flex be the best? While it certainly appears to be highly scalable, other systems might be a better choice for some enterprises, depending, of course, on which vendors' systems are already in their datacenters and on the volume purchasing agreements already in place.

About the Author

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company.
