FCW : July 15, 2013
Future tech

Microsoft Research, wrote in a paper in the May issue of IEEE Transactions on Audio, Speech and Language Processing that there are no applications today for which automated speech recognition works as well as a person. But machine learning techniques, he said, "show great promise to advance the state of the art."

There are many machine learning techniques, but the differences between them are largely technical. What the techniques have in common is that they consist of a large set of nodes that connect with one another and make interrelated decisions about how to behave. Those complicated networks can "learn" how to discern patterns by following rules that modify the way in which a given node reacts to stimuli from other nodes. It can be done in a way that simply seeks out patterns without any human-crafted prompting (in unsupervised learning) or by trying to duplicate example patterns (in supervised learning). For instance, a neural network might be shown many pairs of photographs along with information about when a pair included two photographs of the same person, or it might be played many audio recordings that are paired with transcriptions of those recordings.

Deep neural networks have, since 2006, become far more effective. A shallow neural network might have only one hidden layer of nodes that could learn how to behave. That layer might consist of thousands of nodes, but it would still be a single layer. Deep networks have many layers, which allow them to recognize far more complex patterns because there is a much larger number of potential ways in which a given number of nodes can interconnect.

But that complexity has a downside. For decades, deep networks, though theoretically powerful, didn't work well in practice. Training them was computationally intractable. But in 2006, Geoffrey Hinton, a computer science professor at the University of Toronto, published a paper widely described as a breakthrough.
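The supervised-learning setup described above, example inputs paired with target outputs and fed through a hidden layer of nodes whose reactions are adjusted by a training rule, can be sketched in a few lines. This is a hypothetical toy with made-up sizes and data, not any of the systems discussed in this article:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Example patterns for supervised learning: inputs X paired with
# target outputs y (here, the XOR function as a tiny stand-in).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A "shallow" network: one hidden layer of 8 nodes. A deep network
# would stack more such layers between input and output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

lr = 0.5
losses = []
for step in range(5000):
    # Forward pass: each layer's nodes react to stimuli from the layer below.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)
    losses.append(loss)

    # Backward pass: a rule (gradient descent) that modifies how each
    # node reacts, nudging the network toward the example patterns.
    d_out = 2 * (out - y) / y.size * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"loss before training: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

Stacking more hidden layers multiplies the ways nodes can interconnect, which is what makes deep networks more expressive, and also what made training them intractable before layer-wise methods such as Hinton's.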
He devised a way to train deep networks one layer at a time, which allowed them to perform in the real world.

In late May, Google researchers Vincent Vanhoucke, Matthieu Devin and Georg Heigold presented a paper at the IEEE International Conference on Acoustics, Speech and Signal Processing describing the application of deep networks to speech recognition. The Google researchers ran a three-layer system with 640 nodes in each layer. They trained the system on 3,000 hours of recorded English speech and then tested it on 27,327 utterances. In the best performance of a number of different configurations they tried, the system's word error rate was 12.8 percent. That means it got slightly more than one word in 10 wrong. There is still a long way to go, but training a network as complicated as this one would have been a non-starter just a few years ago.

Nevertheless, speech-recognition technologies have already had a dramatic impact on call and contact centers. As the technology improves, agencies that interact with the public on a massive scale, such as the Social Security Administration, the National Park Service and the Veterans Health Administration, will have to decide to what extent they wish to replace human operators with automated voice-recognition systems.

Facial recognition

On June 17, Stanford associate professor Andrew Ng and his colleagues presented a paper at the annual International Conference on Machine Learning describing how even larger networks, systems with as many as 11 billion parameters, can be trained in a matter of days on a cluster of 16 commercial servers. They do not yet know, they say, how to effectively