FCW : May 30, 2014
FEATURE | NETWORK REFRESH

"By reducing the number of devices and layers that must be managed, organizations can also reduce costs while improving availability and reliability in the infrastructure," says Jason Nolet, vice president of data center switching and routing at Brocade Communications Systems.

This may mean a move beyond traditional architectures created for client-server applications. In that model, clients access apps and servers running inside the data center in a classic "north-south" information flow, where communication runs into and out of data centers. Newer traffic patterns exhibit greater "east-west" flows, where large volumes of traffic move within a data center from server to server and from server to storage system. For example, a transaction may access a web server, then a database server, and pass through a middleware server, creating high volumes of interprocess communication among the various components.

"The classic three-tier, hierarchical network doesn't facilitate optimized performance for the east-west traffic patterns that are quickly becoming dominant in the data center," Nolet says.

What are the alternatives? A number of networking manufacturers advocate flattened networks that shrink the layers from three to two --- or even one --- to remove areas where latency and bottlenecks develop.

"The more complicated a network is, the more opportunities there are for things to break," says Tina Herrera, director of campus marketing for Juniper Networks. "Opportunities to collapse tiers and manage single points of configuration allow network administrators to increase reliability and use a central area for consistently delivering technology and security updates across the entire network."

Juniper's techniques for flattening local area networks include the use of Virtual Chassis technology for wired networks and virtual controller clustering in wireless networks.
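The latency argument behind flattening can be made concrete with a simple hop count. The model below is an illustrative sketch, not from the article: it assumes east-west traffic between servers under different top-of-rack switches must climb to a common layer and come back down, so each removed tier eliminates two switch hops.

```python
def east_west_hops(tiers: int) -> int:
    """Switch hops for traffic between servers attached to different
    edge switches: up (tiers - 1) layers to the common top layer,
    through one switch there, then down (tiers - 1) layers."""
    return 2 * (tiers - 1) + 1

# Classic three-tier path: edge -> distribution -> core -> distribution -> edge
three_tier = east_west_hops(3)  # 5 switch hops

# Flattened two-tier path: edge -> core -> edge
two_tier = east_west_hops(2)    # 3 switch hops

print(three_tier, two_tier)
```

Each hop adds queuing and forwarding delay, which is why fewer tiers translates directly into lower east-west latency in this simplified model.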
" is allows organizations to collapse tiers so the network administrators can more easily identify new traffic patterns and adapt to those patterns appropriately," Herrera says. Brocade offers Ethernet fabric technology, which Nolet says can be deployed within existing multitiered network designs. "Our Ethernet fabric appears to the rest of the environment as a simple Layer 2 switch, so all of the legacy constructs around the Spanning Tree protocol and interoperability are supported by the fabric," Nolet says. " is gives network managers the topology freedom to deploy a fabric in a two- or one-tier architecture and minimize the number of hops that occur for traffic going from server to server or server to storage." SOMETIMES FLATTER IS BETTER As IT teams work to optimize network performance, many are adopting a strategy of simplifying the overall environment. FLATTENING DATA CENTER NETWORKS Data center networks have traditionally followed a three-tier model: core, distribution and edge. Top-of-rack switches cement this arrangement into place at the server edge of the network. To simplify system complexity and add speed, some network managers try to flatten networks down to two tiers when 10 Gigabit Ethernet or speedier connections are in place. Here are some of the benefits and challenges of flattening networks: Goal ree-tier Network Two-tier Network Simplicity e more devices on the network, the more management it will require. With fewer devices, this topology requires less management. Low latency More hops lead to higher latency. Fewer hops result in lower latency. High bandwidth and reduced oversubscription ratios Interswitch links can be a bottleneck because of oversubscription. Often, lower- cost links (one or more 1 Gig-E) are used. Less oversubscription occurs because there are fewer devices (and links between them). But higher-speed links are required for increased bandwidth, which increases costs. 
Efficient use of power and space More devices drive up power consumption and footprint. Cabling is simplified across multiple devices. Fewer devices need less power and less space. Cabling requires careful planning to manage densities. Scalability It is easy to scale up --- just add edge switches. It's not very scalable. When the distribution is full, adding one more device requires a major redesign.
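The oversubscription point above can be illustrated with a quick calculation. An oversubscription ratio compares a switch's total server-facing (downlink) bandwidth to its uplink bandwidth. This is a hedged sketch with illustrative port counts, not figures from the article.

```python
def oversubscription_ratio(downlink_ports: int, downlink_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Total downlink bandwidth divided by total uplink bandwidth."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Three-tier edge switch with lower-cost interswitch links:
# 48 x 1 Gig-E server ports, 2 x 1 Gig-E uplinks.
legacy = oversubscription_ratio(48, 1, 2, 1)    # 24.0, i.e., 24:1

# Flattened design using higher-speed (and costlier) links:
# 48 x 10 Gig-E server ports, 4 x 40 Gig-E uplinks.
flat = oversubscription_ratio(48, 10, 4, 40)    # 3.0, i.e., 3:1

print(f"{legacy}:1 vs. {flat}:1")
```

The lower ratio means less contention when many servers transmit at once, which is the trade the table describes: fewer bottlenecks, but at the price of faster, more expensive links.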