FCW : June 30, 2016
Also, scaling was a problem because traditional systems were built to support a set range of processing power. As workloads increased, agencies outgrew the solution, and a new system had to be installed.

Virtualization loosened the tight links between system hardware and software. An abstraction layer was added, so changing one element (say, a server) no longer required altering the operating system and applications. That development made it much easier for agencies to add new services. In addition, scaling became simpler: Rather than replace one system with another, agencies can add new servers to the existing mix whenever more processing power is needed.

Premises-based systems were especially hamstrung by traditional system design. Agencies typically had a series of IT development and support teams that each tinkered with a system element: servers, storage or networks. Organizations have been moving to consolidate those elements. The first step was combining server, storage and network systems with converged data center systems.

But the first generation of converged systems had limitations. They took up a lot of floor space and often were as large as previous-generation hardware. Configuration was simpler than in the past but still required far more than a few mouse clicks. And costs could be high: Converged system costs often approached the million-dollar mark.

A new group of start-up hardware suppliers — such as Atlantis Computing, DataCore Software, Gridstore, Nutanix, Pivot3, Scale Computing, SimpliVity, StarWind Software and Tintri — emerged to push converged products to the next level. Their hyper-converged systems operate in smaller floor spaces, include higher levels of software automation and cost less (sometimes significantly less) than converged solutions.

“Prices have fallen, so entry-level hyper-converged systems are in the $200,000 to $300,000 range,” Butler said.
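The scale-out idea described above can be illustrated with a short sketch. This is a hypothetical model, not any vendor's actual API: a cluster presents a single pool of capacity, and adding a server grows that pool without touching the nodes, operating systems or applications already in place.

```python
# Illustrative sketch of scale-out (hypothetical names, not a real product API).

class Server:
    def __init__(self, name, cpu_cores):
        self.name = name
        self.cpu_cores = cpu_cores

class Cluster:
    """An abstraction layer: workloads see total pooled capacity,
    not individual machines."""
    def __init__(self):
        self.servers = []

    def add_server(self, server):
        # Scaling out: capacity grows by adding a node, with no change
        # to existing servers or the software running on them.
        self.servers.append(server)

    @property
    def total_cores(self):
        return sum(s.cpu_cores for s in self.servers)

cluster = Cluster()
cluster.add_server(Server("node-1", 32))
cluster.add_server(Server("node-2", 32))
print(cluster.total_cores)  # 64
```

The contrast with the traditional model is the point: under the old design, outgrowing 32 cores meant installing a new system; here it means one more `add_server` call.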
Interest in hyper-converged solutions is growing accordingly. Technology Business Research projects that worldwide sales of hyper-converged platforms will increase at a 50 percent compound annual growth rate from 2015 to 2020.

As hyper-converged solutions gain ground, CIOs have another option for moving to a modern data center infrastructure. They must determine whether a public cloud or a hyper-converged solution fits best in their agency.

The hurdles

Hyper-converged systems present some challenges. They mainly come from start-up suppliers, some of whom have agreements with established suppliers. Eventually, the market is expected to consolidate, although no one is sure how that process will play out. So there is an element of risk in these purchases: Where will Start-up X be in two years? Five years?

The potential benefits also can be overstated. Hyper-converged vendors claim that agencies can get new apps running in minutes, but hours or even days seem more likely, according to Perry.

Organizational challenges can be vexing as well. IT staff has been divided into separate product groups, but cloud and hyper-converged systems combine those devices. As a result, agencies need to consolidate and cross-train their employees so they can work with converged solutions.

What is HCI?

Hyper-converged infrastructure (HCI) is a concept that’s still evolving, so there isn’t yet a canonical definition. It’s a mix of standard servers and clever software that meshes a number of largely identical nodes into a unified pool of computing and storage resources. Those Lego-like building blocks typically have the following attributes:

• Very dense x86 servers with front-mounted hot-swap disks.
• Local storage, either all flash (usually solid-state drives) or a mix of flash and hard disk drives.
• A virtualized server and storage stack that allows carving each node into logical pieces of computing, RAM and storage suitable for various workloads.
• A distributed resource allocation and management system that allows workloads to be easily spread across multiple nodes in a hyper-converged cluster.
• Integrated hardware and software sold by a single vendor as an appliance, typically with one to four nodes per box.
• Virtual network overlays that can be used to automate network configuration and security for workloads.
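The two software attributes above — carving nodes into logical resource slices and spreading workloads across the cluster — can be sketched in a few lines. This is a simplified illustration under assumed names (`Node`, `HyperConvergedCluster`, `place` are all hypothetical), not how any vendor's allocator actually works; real systems also weigh data locality, replication and failure domains.

```python
# Hypothetical sketch of HCI resource allocation: identical nodes are
# pooled, and each workload is placed on the node with the most free
# capacity that can satisfy all of its resource requests.

class Node:
    def __init__(self, name, cores, ram_gb, storage_tb):
        self.name = name
        self.free = {"cores": cores, "ram_gb": ram_gb, "storage_tb": storage_tb}

class HyperConvergedCluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def place(self, workload):
        """Pick a node that can fit every requested resource, preferring
        the one with the most free cores, then carve the slice out."""
        candidates = [n for n in self.nodes
                      if all(n.free[k] >= v for k, v in workload.items())]
        if not candidates:
            raise RuntimeError("no node can fit this workload")
        best = max(candidates, key=lambda n: n.free["cores"])
        for k, v in workload.items():
            best.free[k] -= v
        return best.name

cluster = HyperConvergedCluster([
    Node("node-1", cores=32, ram_gb=256, storage_tb=8),
    Node("node-2", cores=32, ram_gb=256, storage_tb=8),
])
print(cluster.place({"cores": 8, "ram_gb": 64, "storage_tb": 1}))   # node-1
print(cluster.place({"cores": 16, "ram_gb": 64, "storage_tb": 1}))  # node-2
```

The second placement lands on node-2 because node-1 has already given up eight cores — the allocator, not an administrator, decides where each workload runs, which is what lets these clusters grow by simply racking another appliance.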