So we start backtracking, along the way repeating our own double-speak until we get too accustomed to its reverberation. We break left and right, scale up and out, converge and disaggregate. I'm reminded of certain book publishers who have produced editions with "harnessing" in their titles – a strategy so successful, it's been repeated.

The case has been made that hyperconverged infrastructure (HCI) has become an outmoded category, in an era when containerized, distributed applications are becoming the principal model for new software development. By contrast, data center resource disaggregation (DRD) advocates physically clustering storage pools together and physically clustering compute capacities together, as separate resource pools. But rather than light a bonfire for HCI, maybe we should first put our observations about both HCI and DRD to a more scientific test – a survival-of-the-fittest examination. The reigning champion versus the upstart challenger.

Status Report is an attempt to analyze what we think we know qualitatively about technology topics, in hopes that the effort may reveal something conclusive about the ways they influence our lives and work. For each subject, we look at ten categories of influence, rating each one's progressive and regressive potentials separately on positive-10-point and negative-10-point scales. Then, on a 2D Cartesian chart, we give each category its own compass direction, or vector, plot the position of each influence point on that vector, and compute the geometric average location of all the points. The distance of that average point from dead center is our final influence score: a measure not only of how much influence a subject may have, but also of its direction.

For this edition, we'll compare the two technologies of HCI and DRD. Which has the greater influence? The bigger potential for change? The hottest temperature of hype?
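For readers who'd like to see the arithmetic, here is a minimal sketch of how such a score might be computed. The category names and ratings below are invented for illustration, and the even spacing of the vectors and the centroid averaging are our assumptions about the method, not a published formula:

```python
import math

# A minimal sketch of the scoring method described above.
# Assumptions: categories are spaced evenly around the compass; the
# progressive (0 to +10) and regressive (0 to -10) ratings both plot
# along the category's own vector; and the "geometric average" is the
# centroid of all plotted points. Names and ratings are illustrative.
categories = {
    "Stakeholder empowerment": (4, -6),
    "Competitive advantage":   (3, -5),
    "Customer value":          (2, -7),
}

points = []
for i, (progressive, regressive) in enumerate(categories.values()):
    angle = 2 * math.pi * i / len(categories)  # this category's compass direction
    for rating in (progressive, regressive):
        # A negative rating plots "backward" along the same vector.
        points.append((rating * math.cos(angle), rating * math.sin(angle)))

# The average location of all plotted points...
avg_x = sum(x for x, _ in points) / len(points)
avg_y = sum(y for _, y in points) / len(points)

# ...and its distance from dead center is the influence score. (The sign
# attached to the published scores, such as HCI's -1.22, reflects whether
# the net pull is regressive or progressive, which distance alone can't show.)
print(f"influence score: {math.hypot(avg_x, avg_y):.2f}")
```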
Executive summary
This series typically begins with conclusions, which shouldn’t be too weird for anyone who’s already grown accustomed to opposite trends coexisting within the same product category.
Point #1: Why fault HCI for changing course if the new course is a good one?
Hyperconvergence, as it was originally conceived, is dead. The product category that bears its name – or just its "H" – doubles as a life-support system for its old ideals and a transplant mechanism for its underlying methods. Old-style HCI would have vendors continue to build servers that dole out bite-sized portions of processing and storage onto little plates, and then virtually combine all the plates together. Containerization kicked a giant dent in the HCI market, leaving a visible mark: the "d" in "dHCI." Vendors find themselves redefining HCI annually, almost semi-annually now – it's happened three, four, five times, and one vendor has, most notably, given up. VMware – ostensibly a virtualization software company – has officially declared HCI and software-defined storage (SDS) synonymous.

Perhaps there's a viable strategy to all this. Rather than just applying other, lucrative technologies as HCI life-support systems, vendors such as VMware and Nutanix are migrating their customers to truly viable platforms with long-term staying power, by picking up the HCI carrot and moving it in the other direction. "The other direction" is disaggregation. DRD appears to give the customer more choice about which vendors operate in what capacities in their own data centers, smashing one big barrier that would otherwise reinforce vendor lock-in.
Point #2: Maybe if we replace horizontal lock-in with vertical lock-in, no one will notice
The semiconductor industry has chosen an evolutionary course we call "disaggregation," but only because we're looking at the trend in terms of its effect rather than its cause. Every processor maker is building more components that perform discrete functions (e.g., SmartNICs, DPUs, IPUs, NPUs, GPUs), or that can be semi-permanently programmed to do so (as with an FPGA). These producers are building processors for the needs of their customers, who are component and device builders. System builders assemble components and devices from these parts, rather than build pre-integrated servers that require software to supply their functionality.

Hyperconvergence implies there's still a measurable market interest in integrating the classes of components for which processor makers are building discrete parts, and that this interest can be maintained. There is evidence to back up this theory – you can continue to base a market around an idea. Yet it's very hard to build a new business in that market if the idea is all you have, and with a mix of old and new products, and software marketed as hardware, the remaining vendors in the HCI space are attempting exactly that. Working against HCI, however, is the fact that, for quite some time, no two vendors have implemented the same framework. HCI means something different for every vendor, which means you can't really compare apples to apples.

A disaggregated systems market has yet to manifest itself – and certainly not in the enterprise. And that's a problem, because the idea behind it is harder to articulate than you might think. The potential for such a market is measurable: The special processors that disaggregated components will require are being fabricated now. The compelling reason for disaggregated components to exist in a market is that they would enable more market participants than just Intel, AMD, and "Other." But the people who truly comprise any market are its consumers, and they have yet to be addressed, let alone convinced. Until the consumer enters this market, the channel will watch from the sidelines.
Point #3: Converged infrastructure is harder to scale than anyone anticipated
The real incentive for promoting the HCI model is certainly obvious: It establishes the dependencies that keep vendor relationships with customers flourishing. However, there was a flaw in HCI's scalability argument from the beginning, and it was plain as day – at least to folks like legendary network engineer Randy Bias. Originally, resources such as compute, memory, and storage capacity were scaled up together, by way of a converged component that VMware, Dell EMC, Cisco, and Intel called a "Vblock." When a workload required more memory, you added Vblocks; when a database needed more storage, you added more Vblocks. Bias, a Dell EMC veteran, saw the proverbial elephant in the ointment immediately: Inevitably, a business will either over-provision, or under-scale to avoid over-provisioning.

Disaggregation seems the obvious solution. This is where we find an even more proverbial dinosaur smothering the elephant: You can make processor clusters, you can make storage arrays, and you already have network hubs – but there is no way yet to sever memory (DRAM) from the bus that links it to processors. There is an active effort to bring this about, with a technology called Compute Express Link (CXL), championed by Intel and backed by memory interface maker Rambus, among others. But CXL is not ready for prime time, and until it is, even DRD will have to carry an invisible asterisk.
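To make the over-provisioning flaw concrete, here's a hypothetical sketch of scaling in fixed-ratio blocks. The block dimensions and workload demands are invented for illustration – they're not actual Vblock specifications:

```python
# Hypothetical illustration of the fixed-ratio scaling flaw: each block
# bundles compute, memory, and storage in fixed proportions, so satisfying
# the single scarcest resource over-provisions all the others.
BLOCK = {"cpu_cores": 32, "memory_gb": 256, "storage_tb": 20}

def blocks_needed(demand):
    """Smallest number of whole blocks that covers every resource."""
    return max(-(-demand[k] // BLOCK[k]) for k in BLOCK)  # ceiling division

# A memory-hungry workload: memory alone forces 8 blocks...
demand = {"cpu_cores": 40, "memory_gb": 2048, "storage_tb": 25}
n = blocks_needed(demand)

# ...so the business pays for 256 cores and 160 TB it didn't ask for.
for k in BLOCK:
    print(f"{k}: need {demand[k]}, provisioned {n * BLOCK[k]}")
```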
Sphere of influence
Now, let’s look at each relevant component of market influence. Again, we’re comparing the positive and negative aspects of both HCI and DRD, and weighing their net values against each other.
Stakeholder empowerment
What's the easiest message to build a product marketing plan around: bringing difficult things together, or keeping naturally separate things apart? Hyperconvergence has a natural attraction. Easily the most critical need among data center managers remains a radical simplification of the platform, and the concept of HCI immediately evokes this promise. But here is HCI's Achilles' heel: Any effectively managed data center requires its commodities to be planned for independently. It's one thing to have a cohesive management strategy, but another entirely to presume that every workload (software) has a fixed set of resource requirements for each deployment.

And yet here's the switch-up: The more complex arrangement of resources in a data center may be the more efficient one, but it's the more difficult of the two to sell. There isn't an easy pitch for DRD. Sure, it could revolutionize rack-scale architectures, but is revolutionizing rack-scale architectures something facilities managers actually want to do? It's harder to build a cohesive architecture and/or a marketing message around compartmentalization – a topic that conjures images of silos, separate budgets, and no fun.
Competitive advantage
DRD may not have a fully formed marketing theme just yet, but it does have one key element in its favor: Its benefits are demonstrable and measurable. They're not self-evident yet, and they're difficult to explain, but like quantum mechanics, DRD's benefits are observable. The supercomputing (HPC) field is already the most obvious case in point: There, processors are already clustered, storage is already networked, and management can still be centralized. The success of supercomputing stands as a testament to the power of disaggregated architectures.

HCI is, first and foremost, a way to achieve bundling at the scale of enterprise servers. It cuts a path for customers to traverse when making purchasing decisions, and if it's not so much a trench as it was in the 2000s, it's still something of a groove – and some customers are still set in that groove. The problem now, with HCI's formerly purported benefits demonstrable by other means (including Kubernetes), is getting new customers interested in following what's become merely a groove.
Business sustainability
Are HCI and DRD strong enough ideals upon which to stake the future of entire business divisions? Some vendors are clearly backing away from all-in investments in HCI, and one vendor – NetApp – has already backed out, and has done so vocally and honestly. A well-managed Kubernetes cluster, says NetApp, can accomplish the same goal as HCI. Indeed, HCI as we knew it – as recently as three years ago – is a dead product.

But DRD has its own challenges. Like HCI at its inception, DRD would have data centers replace their existing setups with hardware that requires an entirely different mindset for its management and upkeep. What's more, the classes of DRD hardware we're being promised (e.g., processor cluster boxes, and memory pools connected over a network fabric by way of Compute Express Link, or CXL) are still just that: promises for the future (like speed for 5G). Intel's recent embrace of the idea of dividing compute tasks into classes, including with new classes of processor such as the "IPU," is being treated as validation of disaggregation's inevitability. But the uncertainty is palpable: No one really knows yet where this road leads.
Evolutionary incentive
“Incentive” can come in two forms: an enticement or a threat. Both DRD and HCI are in an “evolve-or-die” pattern. But if DRD evolves into, say, a standard framework for data center component deployment and management, such a framework would fuel the growth of both incumbents and startups anxious to offer alternatives to big-budget hyperscaling. By contrast, HCI is incentivized to evolve new marketing themes, along with some new, perhaps ad hoc, bundling arrangements, whose objectives are mainly to sustain the HCI value proposition on life support for another year.
Market enablement
As we’ve stated in this series before, there’s a huge difference between enabling one’s business and enabling the market in which one does business. (See: Docker, Inc.) For a market segment I’ve personally called “dead” on more than one occasion, it does seem to be registering a pulse. Its growth is very marginal, but it’s not a flatline – at least, not according to market research firm IDC, which continues to track the growth of dead market segments.
Customer value
The core of the rot, if you will, for HCI lies here, with the question of whether the technology provides anything uniquely of value. The argument against HCI is compelling and twofold: 1) much, if not all, of what hyperconvergence originally set out to accomplish is now covered by Kubernetes; 2) buying into one vendor's line, or one product family, doesn't really provide functional value over and above leaving the vendor decision open.

The potential for customers perceiving measurable value in DRD is tremendous. If it all works, the whole public cloud workload migration argument could be upended, and continued hyperscale facilities development may be stopped cold. If it doesn't work, however – if the performance gains, for instance, are negligible – then there's a good chance customers could perceive DRD as a grand market re-verticalization. There would be a compute vendor, a storage vendor, a network vendor, an infrastructure vendor, an AI or ML vendor, a graphics vendor – each in its own space, and all of them comfortable in the knowledge that their respective segments are assured of built-in vendor lock-in.
Economic contribution
Hyperconvergence is not hyperscaling. The latter is a pattern around which data centers can be constructed, and then scaled. Hyperscaling associates capacity with space; it implements formulas that lead to economies of scale. If hyperconvergence and disaggregation fulfill their delegated roles in the data center, then they accomplish essentially the same thing: matching capacity dynamically to workload. This makes the native economic contributions of both disciplines somewhat positive. DRD may have an edge for this reason alone: It promotes a model where workload management and data center management work in concert (at least in this sense, a "convergence"), which is an evolutionary process rather than an effort to sustain a declining business model.

Working against DRD, however, is a time-proven principle of technology in the enterprise: Businesses do not collectively abandon platforms until they are beyond their points of collapse. (This is why ransomware actually works, if you think about it.) They will happily invest in the equivalents of duct tape and baling wire for as long as they can prop up the systems they have, if in so doing they can sustain their current work patterns. HCI has an established base. But because it may be more advantageous to vendors in the long term to move that base to disaggregated platforms, we're seeing trends like the "d" prefix in "dHCI."
Societal integration
What HCI could still contribute to society at large is a keener and more comprehensive means of managing data centers. That's not nearly the same potential as DRD's. Today, hyperscale data centers are built around patterns where rooms are subdivided into racks, racks into servers, and servers into blocks of electricity consumption. There's a broad, fuzzy idea of how much power a rack or an aisle should consume, based on 40,000-foot observations of consumption patterns – for instance, the industries to which their tenants belong (healthcare tends to consume more energy than financial services, for example). DRD has the potential to move resources into their own rooms, which would trigger a complete rethink of how capacity planning works. That, in turn, could have impacts on the entire industry, including where on the planet facilities are built, and whether smaller facilities can be constructed closer to the world's major interconnection points – which tend to reside near coastlines.
Cultural advancement
Those societal shifts may affect the health and well-being of peoples throughout the world. As with 5G MEC, DRD could see the rise of smaller cities and metropolitan areas becoming capable of hosting chunks of the broader computing cloud. The energy consumption patterns for power stations could change. Employment situations could change. These are serious societal impacts, with which HCI can't possibly compete.
Ecosystemic enablement
By definition, an ecosystem comprises an entire industry, where the presence and participation of every member works to everyone's benefit. You can't build a true ecosystem around a property whose value is determined by exclusivity. HCI disables ecosystems, but that has always been by design. If DRD evolves to incorporate some kind of open interfacing and collaborative framework, a kind of ecosystem could develop around it. But that's not what's happening today, especially with different manufacturers producing DPUs, IPUs, NPUs, and SmartNICs – all of which mainly serve the same function of managing connectivity.
Final score: HCI [-1.22], DRD [+1.06]
While HCI ends up with one of the more negative scores we've ever posted, it would be significantly lower were it not for one saving grace: In terms of revenue, HCI remains a viable market. Conceivably, DRD's mostly positive scores could have been much higher, if its vendors' key value proposition were proven in the market: that unbundling could achieve the same goals as convergence, so long as software appears to tie it all together. We're actually past the crossroads for the HCI market participants, and we've already seen the transition from convergence to disaggregation begin. This final pair of graphs reveals why: DRD's momentum points in the opposite direction from HCI's. The trick vendors have to pull off this year is convincing their already-signed HCI customers that the opposite direction will lead them toward the same goal.