Whether a hyper-converged approach is right for your organisation will depend greatly on the kinds of workloads that your business requires.

More than a decade ago, businesses were promised simplicity, flexibility, and unprecedented compute power by the emergence of cloud computing. Cloud took the enterprise IT landscape by storm, with a vision for radically transforming the way businesses operate, the agility with which they could move, and the scalability with which they could achieve all of the above.

Cloud computing didn't quite usher in the land of milk and honey, however important its role in remoulding the terrain. While it has unquestionably delivered improvements in the ways businesses can operate, as well as introducing more flexible day-to-day options, business realities can move at a different pace to technological hype cycles, and often dictate their own, contradictory terms of engagement.

The fact is that although many businesses today are racing to step up their plans for digital transformation, organisations large and small remain languishing in a knot of legacy IT systems that restrict agility and leave them struggling to reap the benefits of modernisation – a Catch-22, because while IT functions may wish to modernise more quickly, the cost of 'keeping the lights on' diverts resources away from potential areas of investment.

Quite different to the earliest promises of the mid-noughties cloud computing evangelisers, for many businesses it's advantageous – or an outright necessity – to store data and perform workloads locally. This could be for myriad reasons, and for organisations of all stripes – anything from governance through to the cost of running workloads in the cloud versus on-premises.

On-premises solutions, though, traditionally require a great deal of upkeep. This can leave organisations feeling that they must choose between a complex, full stack of hardware with the in-house expertise needed to maintain it, or expensive public cloud licensing options from the 'big three' vendors that also run the risk of quickly spiralling out of control. But there are other options at hand that could signal a third way for organisations feeling hamstrung by that dichotomy, or otherwise struggling to modernise.

The emergence of hyper-converged infrastructure

Hyper-converged systems are unified, off-the-shelf solutions which intelligently combine storage, networking, and compute with software – allowing organisations that require on-premises infrastructure to consolidate on space, reduce their energy footprint, and enjoy the fine-tuned but customer-friendly optimisation capabilities more usually associated with the public cloud.

Getting any single data centre component perfectly tuned for your organisation's needs is no simple task, let alone getting every component integrated and functioning at its most efficient capacity. So simplifying day-to-day operations along all those lines will be an appealing sell to many.

However, whether a hyper-converged approach is right for your organisation will depend greatly on the kinds of workloads that your business requires. For ultra-high-performance tasks that demand extremely low latency, such as combing through vast Big Data systems or supporting artificial intelligence and machine learning, hyper-converged is unlikely to be your best bet.
But hyper-converged systems will be a solid solution for many generalists, large or small – they can certainly run enterprise and database applications where extremely low latency is not required, as well as support virtualisation initiatives, application development and testing environments, and other tasks where you'll be spinning virtual machines up or down.

One of the most appealing draws of hyper-converged infrastructure is that it is designed to scale very easily. Visualise HCI as a kind of building block: as organisations evolve and mature, they can add capacity simply by placing more boxes in their stack. That affords great business agility, in that organisations gain visibility into precisely what their resource pool has to offer, and a clearer picture of resource allocation and of whether new hardware is necessary (the short sketch at the end of this article illustrates the arithmetic).

Additionally, management is far simpler than with the tangle of boxes, wires, and networking equipment found in the typical back-office IT function. Staff can easily observe HCI systems via an uncomplicated visual user interface, usually presenting information in a "single pane of glass" for easy monitoring, and take stock of performance and resource allocation every step of the way.

Even better, with HCI, provisioning resources can be automated to a high degree – giving IT teams the flexibility to focus on more interesting work building the future of the organisation rather than juggling support tickets, maintenance, and other laborious manual tasks; in short, spending their time more usefully elsewhere.

Return on investment (ROI) can be a difficult metric to measure with new technologies, especially when value is viewed in abstract terms – a security solution could be deemed successful right up until a data breach – but with HCI there are clear yardsticks for results. Reduced labour time and cheaper energy bills are two obvious examples.

How to invest in HCI

To understand whether HCI can help your organisation, it is worth auditing your existing IT setup: speak to staff to learn where their day-to-day time and effort is going, and note the challenges with legacy IT. Map out where finances are being allocated and try to glean how efficiently resources really are performing. Once this organisational picture is blueprinted, you can begin to consider where you'd like to be – and, ultimately, the finer details of which kinds of optimised workloads will help you get there.

Technologies like VMware's vSAN, in combination with the vSphere hypervisor and the vCenter unified management platform, help organisations do away with the complexity of running individual arrays and the other challenges associated with independent, purpose-built hardware. By investing in vSAN and hyper-converged infrastructure, businesses can focus on delivering what matters most to them, alleviating maintenance fatigue and other operational constraints.

Insight Direct is a Dell Titanium Black Partner, which means early access to new technologies and customer roadmaps, plus lightning-fast, personalised support.
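To make the "building block" capacity point above a little more concrete, here is a minimal, self-contained Python sketch. It is not tied to any vendor's API, and the node and VM sizes in it are purely illustrative assumptions; it simply estimates how many identically sized HCI nodes a given fleet of virtual machines would need, so you can see at a glance whether the existing stack can absorb a new workload or whether another block is required.

```python
# A minimal, vendor-neutral sketch of the "building block" capacity model:
# given the size of one HCI node and the footprint of the VMs you plan to run,
# estimate how many nodes the workload needs. All figures are illustrative.
from dataclasses import dataclass


@dataclass
class Node:
    """One hyper-converged appliance: compute and storage scale together."""
    cpu_cores: int
    ram_gb: int
    storage_tb: float


@dataclass
class VmProfile:
    """Resource footprint of a single virtual machine."""
    vcpus: int
    ram_gb: int
    storage_tb: float


def nodes_required(vm: VmProfile, vm_count: int, node: Node,
                   headroom: float = 0.25) -> int:
    """Return how many identical nodes are needed for vm_count VMs,
    keeping a fraction of each node free for failover and rebuilds."""
    usable = 1.0 - headroom
    vms_per_node = min(
        (node.cpu_cores * usable) // vm.vcpus,
        (node.ram_gb * usable) // vm.ram_gb,
        (node.storage_tb * usable) // vm.storage_tb,
    )
    if vms_per_node < 1:
        raise ValueError("A single VM does not fit on one node")
    return -(-vm_count // int(vms_per_node))  # ceiling division


if __name__ == "__main__":
    # Hypothetical node and VM sizes, purely for illustration.
    node = Node(cpu_cores=64, ram_gb=768, storage_tb=30.0)
    test_vm = VmProfile(vcpus=4, ram_gb=16, storage_tb=0.5)

    current_nodes = 3
    needed = nodes_required(test_vm, vm_count=150, node=node)
    print(f"Nodes needed: {needed}, nodes in stack: {current_nodes}")
    if needed > current_nodes:
        print(f"Add {needed - current_nodes} more block(s) to the stack")
    else:
        print("Existing pool can absorb the workload")
```

The headroom parameter reflects the common practice of holding back spare capacity for failover and rebuilds; adjust it to match your own resiliency policy. The same utilisation-versus-capacity picture is what an HCI management interface is meant to surface at a glance.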