FCW : November and December 2015
IT resources (equipment, software, real estate and man hours) continue to sprawl in an attempt to keep pace with demand. For federal agencies, this never-ending data center creep has crowded innovative development out of budgets. Agencies struggle with rising operations and maintenance costs as the demand for storage and processing power expands.

Over the years, going back to the Reagan administration, the government has made repeated attempts to consolidate its data centers to reduce footprint, curb redundancy, trim excess capacity and untangle complexity. Despite sustained efforts over the last few years, the current Federal Data Center Consolidation Initiative has barely made a dent, as agencies have struggled to get an accurate count of the centers they have and to determine which should be closed. Agencies have recently shifted their emphasis to data center optimization, using virtualization and smaller-form-factor servers to improve efficiency. Such efforts have reduced costs but haven't freed up substantially more budget for innovation.

The problem is that we keep trying to address the sprawl with the same technologies that got us here in the first place. The anxiety each fiscal year is about how we are going to purchase more of the same stuff. We have more data, therefore we must need more storage. We need more processing power, therefore we must need more servers and networks. But why try to solve the problem with more of the same?

What if federal CIOs and IT shops radically changed the way they address their needs for storage, networking and compute capacity? Such a solution does in fact exist. Some 170 federal programs are already supported by it, and it is already radically reducing data center costs while improving mission support, security and manageability.

Come Together

This solution is hyperconverged infrastructure: the combination of servers, storage and storage networks into a single appliance.
The virtualization revolution dramatically optimized industry-standard servers, enabling similarly dramatic server and data center consolidation. But for virtualization to perform its magic effectively, it required massive amounts of redundant storage and networking capacity. That traditional three-tier architecture is as inefficient and unsustainable as pre-virtualization data centers were. Yet storage and compute requirements keep multiplying, driven by mobility, big data and the Internet of Things.

Enter hyperconverged infrastructure. Allow us to deconstruct. Traditional storage architecture relies on a three-tier (or deeper) hierarchical subsystem, accessed by servers via a network that is itself composed of an array of switching devices. By moving the storage intelligence into software and running that software directly on the servers, a hyperconverged infrastructure eliminates the once-inefficient, proprietary storage area network (SAN). Instead, standard top-of-rack switches connect the environment as a cluster of resources. This model appears as a standard three-tier environment to the hypervisor, but the underlying architecture is radically simpler, with far fewer areas to troubleshoot and monitor and a much smaller rack-space footprint. Rather than perpetuating legacy three-tier architectures for virtualized environments, it brings software-defined storage to them.

"Physically it's much simpler," says Jason Langone, director of OCONUS and Tactical Programs for Nutanix. "What hyperconvergence has done is moved the logic of the shared storage array— the deduplication, data compression, replication—everything you expect in enterprise storage, and put that in software that runs directly on the servers." Nutanix's hyperconverged infrastructure uses the company's own hypervisor for storage and advances virtualization by an order of magnitude.
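The architectural shift described above can be sketched in a few lines of code. The classes and numbers below are purely illustrative, not any vendor's actual API: each hyperconverged node contributes both compute and local disks, and software pools those disks into a single shared datastore, so adding a node scales compute and storage together with no separate SAN tier.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One hyperconverged appliance: compute plus local disks in a single box.
    (Hypothetical model for illustration only.)"""
    cpu_cores: int
    local_storage_tb: float

@dataclass
class Cluster:
    """Storage software running on the nodes pools their local disks into one
    shared datastore; the hypervisor sees a single store, with no SAN tier."""
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # Scaling out adds compute and storage together, one appliance at a time.
        self.nodes.append(node)

    @property
    def total_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def pooled_storage_tb(self) -> float:
        # The pooled datastore is the aggregate of every node's local disks.
        return sum(n.local_storage_tb for n in self.nodes)

cluster = Cluster()
for _ in range(4):
    cluster.add_node(Node(cpu_cores=32, local_storage_tb=20.0))

print(cluster.total_cores, cluster.pooled_storage_tb)  # 128 80.0
```

The point of the sketch is the scaling model: in a traditional three-tier design, compute and storage are bought and grown separately, while here a single `add_node` grows both at once.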
Executive Insights: Hyperconverged Infrastructure
Hyperconvergence Simplifies the Data Center
Virtualizing servers, applications and the network reduces equipment and management costs