FCW: November and December 2014
alternative arises. "But thanks to the greater mobility possible with clouds, IT managers can move workloads wherever they make the most business sense to run," Newman says.

The most common example of workload mobility is cloud bursting, in which an organization quickly enables additional IT capacity to address demand spikes, such as at the end of a quarter or during a holiday season. The organization can then scale down when the crush ends. Now, IT managers are applying this strategy to other areas, including application development and testing activities that span internal and external clouds. Once new code is proven to be reliable and complete, administrators can move it into the production environment.

"We're also seeing a reverse migration," Newman says. In some cases, IT managers may opt to run an application in a public cloud, but as the system grows and demands more storage, CPU and memory capacity, the initial cost advantage may disappear, prompting administrators to bring the application in-house.

"One knock against public clouds has been that they're like Hotel California — you check in, but you can never leave," he adds. "Now, there are options for moving workloads back into private environments if it makes business sense to do so. That's going to continue with all the initiatives around network virtualization and software-defined networks, which further enable organizations to treat the public cloud as an extension of a private cloud."

To see a payoff from this level of flexibility, IT organizations need tools that gather detailed data about the operating cost of each workload based on its relative security, compliance, availability and performance requirements. "CFOs and CIOs like these capabilities because they can benchmark their organizations according to how their cost of delivery compares to all the other options out there," Humphreys says. "And it's not always lowest cost that wins.
You've got to understand how your service-delivery costs, service-level requirements and the importance of the applications you are delivering all come together."

4. Take a Fresh Approach to Server Refreshes

For years, the industry rule of thumb for server refreshes placed the equipment's useful life span at approximately three to four years. But now, with optimization a key consideration, IT managers are adopting a more flexible and often accelerated view of when refreshes are best. With new generations of servers and processors arriving at roughly two-year intervals, progressive IT shops may change out hardware sooner if new designs and calculations of investment returns dictate faster action.

Essential factors in the decision process include whether the organization needs advanced engineering to support new goals and initiatives. For example, higher-performance processors enable faster analytics, run streaming video applications more efficiently and increase server consolidation rates to meet energy conservation milestones. The recent adoption of 64-bit operating systems and applications further alters refresh time frames, according to industry analysts. An additional argument for more

UNDERSTAND THE MAIN DRIVERS FOR OPTIMIZATION

Before IT managers can devise a plan for data center optimization, they must first understand the drivers that spur action. Typically, the push for optimization comes from three main catalysts.

1. TOP-DOWN INITIATIVES. Such efforts generally aim to make IT more cost-efficient and a fuel for innovation. "Senior executives set an overall strategy for IT, including the attributes and metrics that will define the success of optimization efforts," says Edward Newman, director of the Cloud and Virtual Data Center service line within EMC Consulting, Global Services.

2. FIELD-OF-OPERATION IMPERATIVES. New mission-oriented initiatives often serve as the catalyst here. For example, an organization may launch a new customer-facing application that requires more responsive IT services. "In this case, optimization starts in one area of the organization and then expands across the rest of the enterprise," Newman explains.

3. LOOMING DEADLINES. An impending event that will alter data center operations, such as the expiration of an outsourcing agreement, can trigger an optimization response. Rather than simply re-establishing existing operations, CIOs update their hardware platforms, undergo application rationalization across their portfolios and expand virtualization as part of the effort.

No matter the incentive, none of these imperatives is best addressed as a unilateral move. A team approach is more effective because optimization actions in one area — such as storage virtualization or power-conservation efforts — have ripple effects throughout the data center. "You could have the best data center power usage effectiveness rating in your region, but if that also results in a higher cost per transaction, you haven't gained anything," says Greg Schulz, founder of Server and StorageIO. "Everything has to work together, and that means bringing together people responsible for hardware, software and facilities."
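Humphreys' point that "it's not always lowest cost that wins" can be illustrated with a simple workload-placement comparison. The sketch below is purely hypothetical: the option names, costs and the single compliance flag are invented stand-ins for the richer security, availability and performance data a real cost-benchmarking tool would gather.

```python
# Hypothetical sketch: rank hosting options for one workload by monthly cost,
# but only after filtering out options that fail a hard requirement.
# All figures and option names are illustrative, not real pricing.

options = {
    "private_cloud": {"monthly_cost": 4200, "meets_compliance": True},
    "public_cloud":  {"monthly_cost": 3100, "meets_compliance": False},
    "hybrid":        {"monthly_cost": 3600, "meets_compliance": True},
}

def viable_options(options, require_compliance=True):
    """Drop options that fail hard requirements, then rank the rest by cost."""
    viable = {
        name: opt for name, opt in options.items()
        if opt["meets_compliance"] or not require_compliance
    }
    return sorted(viable, key=lambda name: viable[name]["monthly_cost"])

ranked = viable_options(options)
print(ranked[0])  # → hybrid
```

Note that the cheapest raw option (the public cloud, in this made-up data) loses: it fails the compliance requirement, so the benchmark selects the cheapest option that still meets the workload's service-level needs.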
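Returning to the server-refresh discussion, the "calculations of investment returns" that justify an accelerated refresh boil down to a payback-period question: how quickly do the new hardware's lower power, maintenance and consolidation costs recover its purchase price? A minimal sketch, with invented costs:

```python
# Hypothetical sketch: payback period for an early server refresh.
# All dollar figures are invented for illustration.

def refresh_payback_months(old_monthly_cost, new_monthly_cost, new_purchase_cost):
    """Months until the new hardware's running-cost savings cover its price."""
    monthly_savings = old_monthly_cost - new_monthly_cost
    if monthly_savings <= 0:
        return None  # new gear never pays for itself on running cost alone
    return new_purchase_cost / monthly_savings

# Aging rack: $900/month in power and maintenance.
# Consolidated replacement: $500/month to run, $12,000 up front.
months = refresh_payback_months(900, 500, 12000)
print(f"Payback in {months:.0f} months")  # → Payback in 30 months
```

In this invented case the refresh pays for itself in 30 months, inside the traditional three-to-four-year life span, which is the kind of result that pushes IT shops toward the faster, roughly two-year refresh cadence described above.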