Klevis Hysenlikaj

WHAT IS VIRTUALIZATION?

The importance and applications of virtualization extend far beyond virtual machines

No advance in information technology in the past six decades has offered a greater range of quantifiable benefits than virtualization. Many IT professionals think of virtualization in terms of virtual machines (VMs) and their associated hypervisors and operating-system implementations, but that only skims the surface. An increasingly broad set of virtualization technologies, capabilities, strategies, and possibilities is redefining major elements of IT in organizations everywhere.

Virtualization definition

Examining the definition in a broader context, we define virtualization as the art and science of making the function of an object or resource, simulated or emulated in software, identical to that of the corresponding physically realized object. In other words, we use an abstraction to make software look and behave like hardware, with corresponding benefits in flexibility, cost, scalability, reliability, and often overall capability and performance, across a broad range of applications. Virtualization, then, makes “real” that which is not, applying the flexibility and convenience of software-based capabilities and services as a transparent substitute for the same functions realized in hardware.
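To make that definition concrete, consider a minimal Python sketch – entirely our own illustration, with made-up names – in which the same application code consumes either a real, disk-backed resource or a purely software-based stand-in exposing an identical interface, and cannot tell the two apart:

```python
# A purely software-based object standing in for a disk-backed one.
# Everything here is illustrative; only the standard library is used.
import io
import tempfile

def checksum(f) -> int:
    """Application code: consumes any file-like object, real or virtual."""
    total = 0
    while chunk := f.read(4096):
        total = (total + sum(chunk)) & 0xFFFFFFFF
    return total

payload = b"the caller cannot tell where these bytes live\n"

# A real, disk-backed file object...
with tempfile.TemporaryFile() as real_file:
    real_file.write(payload)
    real_file.seek(0)
    print(checksum(real_file))

# ...and an in-memory stand-in exposing the identical interface.
print(checksum(io.BytesIO(payload)))
```

Both calls print the same checksum; as far as the caller is concerned, the in-memory buffer is the file.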

Virtual machines

Virtual machines trace their roots back to a small number of mainframes from the 1960s, most notably the IBM 360/67, and became an essential fixture of the mainframe world during the 1970s. And with the introduction of Intel’s 386 – and its virtual 8086 mode – in 1985, VMs took up residence in the microprocessors at the heart of personal computers. Contemporary VMs, implemented in microprocessors with the requisite hardware support and with the aid of both hypervisors and OS-level implementations, are essential to the productivity of computation everywhere, most importantly capturing machine cycles that would otherwise be lost in today’s highly capable 3-plus-GHz processors.

VMs also provide additional security, integrity, and convenience, with very little computational overhead. Moreover, we can extend the concept (and implementation) of VMs to include emulators, interpreters such as the Java Virtual Machine, and even full simulators. Running Windows under macOS? Simple. Commodore 64 code on your modern Windows PC? No problem.
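At bottom, every such emulator is a fetch-decode-execute loop realized in software. Here is a deliberately tiny Python sketch of the idea – a made-up stack-based instruction set, not any real machine’s – that works the way a JVM or a Commodore 64 emulator does, writ small:

```python
# A toy "virtual machine": a software interpreter for a made-up,
# stack-based instruction set (not any real machine's architecture).
def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":            # place a constant on the stack
            stack.append(args[0])
        elif op == "ADD":           # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":           # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":         # emulated "output device"
            print(stack[-1])
        else:
            raise ValueError(f"illegal instruction: {op}")
    return stack

# (3 + 4) * 10 on our imaginary machine; prints 70
run([("PUSH", 3), ("PUSH", 4), ("ADD",),
     ("PUSH", 10), ("MUL",), ("PRINT",)])
```

Swap the handful of opcodes for a real instruction set and a block of emulated RAM, and you have the skeleton of a full machine emulator.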

What’s most important here is that the software running within VMs has no knowledge of that fact – even a guest OS otherwise designed to run on bare metal thinks its “hardware” platform is exactly that. Herein lies the most important element of virtualization itself: an incarnation of the “black box” approach to the implementation of information systems, relying on the isolation enabled by APIs and protocols. Think of this in the same context as the famous Turing test of machine intelligence – applications, which are, after all, the reason we implement IT infrastructure of any form in the first place, are none the wiser about exactly where they’re running. And they don’t need to be, enhancing flexibility, lowering costs, and maximizing IT RoI in the bargain.

We can in fact trace the roots of virtualization to the era of timesharing, which also emerged in the 1960s. While mainframes certainly weren’t portable, the rapidly increasing quality and availability of dial-up and leased telephone lines, along with advancing modem technology, enabled a virtual presence of the mainframe in the form of a (typically dumb alphanumeric) terminal. Virtual machine, indeed: this model of computing led – via advances in both the technology and economics of microprocessors – directly to the personal computers of the 1980s, with local computation alongside the dial-up communications that eventually evolved into the LAN and, ultimately, into today’s transparent, continuous access to the Internet.

Virtual memory

Also evolving rapidly in the 1960s was the concept of virtual memory, arguably just as important as virtual machines. The mainframe era featured remarkably expensive magnetic-core memory, and mainframes with more than a single megabyte of memory were rare until well into the 1970s. As with VMs, virtual memory is enabled by relatively small additions to a machine’s hardware and instruction set, allowing portions of memory, usually called segments and/or pages, to be written out to secondary storage, with the addresses within those blocks translated dynamically as they are referenced and paged back in from disk. Voilà – a single real megabyte of core memory on an IBM 360/67, for example, could support the full 24-bit (16 MB) address space enabled by the machine’s architecture – and, properly implemented, each virtual machine could in addition have its own full complement of virtual memory. As a consequence of these innovations, still hard at work today, hardware otherwise designed to run a single program or operating system could be shared among users, even across multiple simultaneous operating systems, with memory requirements well beyond the real capacity provisioned. As with VMs, the benefits are numerous: user and application isolation, enhanced security and integrity, and, again, much improved RoI. Sounding familiar yet?
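A sketch helps here. The following toy Python translator – our own illustration, using the round numbers from the text (a 24-bit, 16 MB virtual address space, 4 KB pages, and a single real megabyte, i.e. 256 page frames) – shows the core mechanism: split each virtual address into a page number and an offset, look the page up in a table, and fault it into a free frame when it isn’t resident:

```python
# Toy virtual-memory translator. The numbers mirror the text: a 24-bit
# (16 MB) virtual address space and 1 MB of real memory, in 4 KB pages.
PAGE_SIZE   = 4096                            # 4 KB pages
REAL_FRAMES = (1 << 20) // PAGE_SIZE          # 1 MB real memory = 256 frames

page_table  = {}                              # virtual page no. -> real frame no.
free_frames = list(range(REAL_FRAMES))

def translate(vaddr: int) -> int:
    """Map a 24-bit virtual address to a real one, 'paging in' on a miss."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:                 # page fault
        if not free_frames:                   # naive FIFO eviction; a real OS
            victim = next(iter(page_table))   # would write dirty pages to disk
            free_frames.append(page_table.pop(victim))
        page_table[vpn] = free_frames.pop()   # (and read the new page back in)
    return page_table[vpn] * PAGE_SIZE + offset

# Two addresses 8 MB apart -- far beyond the single real megabyte -- resolve:
print(hex(translate(0x000123)), hex(translate(0x800123)))  # 0x123 0x1123
```

A real implementation adds dirty-page writeback, smarter replacement policies, and hardware translation caches (TLBs), but the address arithmetic is exactly this.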

Virtual desktop infrastructure (VDI)

After virtual machines and virtual memory, and the availability of these capabilities in low-cost microprocessors and PCs, the next advance was the virtualization of the desktop, and with it of applications, both single-user and collaborative. Again we return to the timesharing model introduced above, but in this case the desktop of a PC is emulated on a server, with the graphics and other user-interface elements remoted over a network connection to an appropriate software client – often an inexpensive thin-client device that is easy to manage and secure. Every major operating system today supports this capability in some form, with a broad array of add-on hardware and software products as well, including VDI, the X Window System, and the very popular (and free) VNC.
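The underlying model is simple enough to sketch in a few dozen lines of Python – again purely our own toy protocol, not VNC’s actual RFB wire format: the “desktop” runs and renders server-side, and only its output crosses the wire to a client that keeps no application state of its own:

```python
# Toy remote-desktop loop: the "desktop" renders on the server; the thin
# client merely paints what arrives. Our own minimal protocol, not RFB/VNC.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5901   # 5901 nods to VNC display :1; any port works

def desktop_server():
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for frame in range(3):
                # A real server would send compressed framebuffer deltas;
                # a line of text stands in for a screen update here.
                update = f"frame {frame}: server clock reads {time.strftime('%X')}\n"
                conn.sendall(update.encode())
                time.sleep(0.1)

threading.Thread(target=desktop_server, daemon=True).start()
time.sleep(0.2)                  # give the server a moment to start listening

# The thin client: no local application state, just display.
with socket.create_connection((HOST, PORT)) as client:
    while chunk := client.recv(1024):
        print(chunk.decode(), end="")
```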

Virtual storage

The next major advance, expanding rapidly today, is the virtualization of processors, storage, and applications into the cloud, provisioning whatever capabilities are required at present and then easily adding to and scaling up the arsenal with essentially no effort on the part of IT staff. Savings in physical space, capital expense, maintenance, downtime, and the labor-intensive troubleshooting of (hopefully) infrequent but serious performance issues and outages can essentially pay for service-based solutions resident in the cloud. Storage virtualization alone offers numerous opportunities here. Disk drives can be virtualized as RAM disks, as virtual drives mapped onto network-based storage, and even integrated into single-level storage hierarchies stretching back to IBM’s System/38 of almost 40 years ago. We believe that cloud-based implementations of not just backup but also primary storage will become more common as both wired and wireless networks provide a performance floor of 1 Gbps – a capability already common in Ethernet and 802.11ac Wi-Fi, and one of the defining features of upcoming 5G deployments.
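The simplest form of storage virtualization is easy to demonstrate. In this Python sketch (class and file names are ours, purely illustrative), a “disk” with 512-byte blocks is really a sparse file – the same basic trick behind VM disk images, RAM disks, and drives mapped onto network or cloud storage:

```python
# A "disk" that is really a sparse file -- the basic trick behind VM disk
# images and virtual drives mapped onto network or cloud storage.
import os

class FileBackedDisk:
    BLOCK = 512                                  # classic sector size

    def __init__(self, path: str, size_mb: int):
        self.path = path
        self._f = open(path, "w+b")
        self._f.truncate(size_mb * 1024 * 1024)  # sparse: occupies ~no space yet

    def write_block(self, lba: int, data: bytes) -> None:
        assert len(data) == self.BLOCK
        self._f.seek(lba * self.BLOCK)
        self._f.write(data)

    def read_block(self, lba: int) -> bytes:
        self._f.seek(lba * self.BLOCK)
        return self._f.read(self.BLOCK)

disk = FileBackedDisk("virtual.img", size_mb=64)  # a 64 MB "drive"
disk.write_block(100, b"\xAB" * 512)
assert disk.read_block(100) == b"\xAB" * 512
assert disk.read_block(0) == b"\x00" * 512        # untouched blocks read as zeros
# st_blocks is POSIX-only: shows how little real space the 64 MB disk uses.
print("bytes actually allocated:", os.stat(disk.path).st_blocks * 512)
```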

Virtual networks

And speaking of networks, even these are being virtualized to an increasing degree, with network as a service (NaaS) now a viable and even desirable option in many cases. This trend will accelerate with the continuing adoption of network functions virtualization (NFV), which is, at least initially, of greatest interest to carriers and operators, especially in the cellular space. Significantly, network virtualization creates a real opportunity for carriers to expand their range of services, augment capacity, and increase their value and even stickiness for organizational customers. It is also likely that, over the next few years, an increasing number of end-user organizations will apply NFV in their own networks, and even in hybrid carrier/organizational networks (again, note the stickiness factor). In the meantime, VLANs (802.1Q) and virtual private networks (VPNs) add their own broad set of benefits to the many applications of contemporary virtualization.
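For a concrete taste of how little machinery network virtualization can require, here is a short Python sketch of 802.1Q VLAN tagging (the addresses and payload are made up for illustration): a single 4-byte tag, inserted after the source MAC address, is all that partitions one physical LAN into as many as 4,094 logical ones:

```python
# Building an 802.1Q-tagged Ethernet frame: a single 4-byte tag after the
# source MAC partitions one physical LAN into logical ones. Values made up.
import struct

def vlan_tag(frame: bytes, vid: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag into an untagged Ethernet frame."""
    assert 0 <= vid < 4096, "the VLAN ID is a 12-bit field"
    tci = (pcp << 13) | vid                  # priority (3b) | DEI (1b) | VID (12b)
    tag = struct.pack("!HH", 0x8100, tci)    # TPID 0x8100 marks a tagged frame
    return frame[:12] + tag + frame[12:]     # dst MAC (6) + src MAC (6) | tag | rest

dst = bytes.fromhex("ffffffffffff")          # broadcast
src = bytes.fromhex("02005e001234")          # locally administered, invented
untagged = dst + src + struct.pack("!H", 0x0800) + b"...payload..."
tagged = vlan_tag(untagged, vid=42, pcp=5)
print(tagged[12:16].hex())                   # 8100a02a: TPID, then (5<<13)|42
```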

Virtualization cost savings

Even with the broad array of powerful and capable virtualization technologies available, it’s ultimately the economics of broad-scale functional virtualization that seals the deal. The competitive nature of the rapidly evolving cloud-services business model means that the traditional, labor-intensive operating expense incurred by customer organizations will likely decline over time, as service providers ride their own experience curves, develop new multi-client economies of scale, and offer lower prices to end-user organizations simply as a result of marketplace competition.

It’s also easy to increase reliability and resilience by employing multiple cloud-services suppliers on a fully redundant or hot-standby basis, eliminating provider-level single points of failure. We see the capital-expense elements of IT budgets increasingly evolving into operating expense, this time spent on service providers rather than on more equipment, facilities, and local staff. Thanks again to the power of today’s microprocessors, advances in system and solution architecture, and the dramatic improvement in the performance of both LANs and WANs (including wireless), almost every element of IT today can indeed be virtualized and even implemented as on-demand, scalable, cloud-based services.

While it has often been described as such, virtualization itself is not a paradigm shift. What virtualization does, in any of its forms, is to enable IT activities across a very broad range of requirements and opportunities – as we have discussed here – to be performed more flexibly, efficiently, conveniently, and productively. Given the strategy of virtualizing much of IT into cloud-based services, virtualization is best thought of today as an alternative operating model whose economic advantages obviate the need for many traditional implementations.

This increasing virtualization of organizational IT is all but guaranteed thanks to an essential – and, again, fundamentally economic – inversion of the operational model of IT, one tracing its roots back to the beginning of commercial computing. In the early days of computing, our interests were necessarily focused on expensive and often-oversubscribed hardware elements, like mainframes, whose intrinsic costs motivated the initial forays into virtualization noted above. As hardware became cheaper, more powerful, more cost-effective, and quasi-standardized, the focus shifted to applications running in essentially standardized and virtualized environments, from PCs to browsers.

The net result of this evolution is where we’ve arrived today. Whereas computers and computing used to be at the core of IT, the focus has shifted to information, and to making that information available anytime, anywhere. This “infocentricity” is the ethic and overall motivation that has driven the evolution of the mobile and wireless era itself – getting the information end users need into (quite literally) their hands, wherever and whenever it’s required.

So, what started as a way to make more efficient use of a slow, expensive mainframe has evolved into what is now well on its way to becoming the dominant strategy for defining the future of IT itself. No innovation in IT has had a greater impact than virtualization – and, with the shift to infrastructure virtualized in the cloud, we’re really just getting started.
