FCW : July 15, 2013
In the past decade, computer scientists have made remarkable progress in creating algorithms capable of discerning information in unstructured data. In controlled settings, computer programs are now able to recognize faces in photographs and transcribe spoken speech. Cars laden with laser sensors can create 3-D representations of the world and use those representations to navigate safely through chaotic, unpredictable traffic.

In the coming decade, improvements in computational power and techniques will allow programs such as voice and face recognition to work in increasingly robust settings. Those technological developments will affect broad swaths of the American economy and have the potential to fundamentally alter the routines of our daily lives.

There is no single reason for these improvements. Various approaches have proven effective and have improved over the years. However, many of the best-performing algorithms share a common trait: They have not been explicitly programmed by humans. As David Stavens, a Stanford University computer scientist, wrote about Junior, an autonomous car that earned Stanford second place in the Defense Advanced Research Projects Agency's 2007 Urban Challenge: "Our work does not rely on manual engineering or even supervised machine learning. Rather, the car learns on its own, training itself without human teaching or labeling."

In a wide range of examples, techniques that rely on self-supervised learning have leapfrogged traditional computer science approaches that relied on explicitly crafted rules. Supervised learning, in which an algorithm is first trained on a large set of data that has been annotated by a human and then is let loose on other, unstructured data, is effective in cases where an algorithm benefits from some initial structure. In both cases, the rules the computer ultimately used were never explicitly coded and cannot be succinctly described.
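The supervised-learning loop described above, in which a program is first trained on human-annotated data and then applied to new inputs, can be sketched with a toy nearest-neighbor classifier. This is purely illustrative: the data, labels, and function names here are invented for the example, and real systems such as speech or face recognizers use far richer models.

```python
def train(examples):
    """'Training' in this toy model is simply memorizing labeled
    examples. Each example is a (features, label) pair, where the
    label stands in for a human annotation."""
    return list(examples)

def predict(model, point):
    """Classify an unlabeled point by the label of its nearest
    training example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: dist(ex[0], point))
    return nearest[1]

# Hypothetical human-annotated training data: 2-D feature vectors
# with labels, forming two clusters.
labeled = [((0.0, 0.0), "quiet"), ((0.1, 0.2), "quiet"),
           ((5.0, 5.0), "loud"), ((4.8, 5.2), "loud")]

model = train(labeled)
print(predict(model, (0.2, 0.1)))  # near the "quiet" cluster -> quiet
print(predict(model, (5.1, 4.9)))  # near the "loud" cluster -> loud
```

The rules the classifier ends up following are never written down explicitly; they emerge from the annotated examples, which is the trait the article highlights.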
Such so-called machine learning algorithms have a long history. But for much of that history, they were more interesting for their theoretical promise than for their real-world performance. That has changed in the past few years for a variety of reasons. Chief among them are the availability of large datasets with which to train learning algorithms and cheap computational power that can do such training quickly. Just as important, though, are developments in methodology that make it possible to use that data (millions of images tagged online by, say, Flickr users, or linguistic data stretching to the billions of words) in advantageous ways.

The new generation of learning techniques holds the promise of not only matching human performance in tasks that have heretofore been impossible for computers but also exceeding it.

Speech recognition

The market for speech recognition is huge and will only grow as the technology improves. Call centers alone account for tens of billions of dollars in annual corporate expenditure, and the mobile telephony market is also worth billions. Nuance, the company behind Apple's Siri voice-recognition engine, announced in November 2012 that it is working with handset manufacturers on a telephone that could be controlled by voice alone.

Forbes reports that Americans spend about $437 billion annually on cars and buy 15.1 million automobiles each year. According to the General Services Administration's latest tally, federal agencies own nearly 660,000 vehicles. As technologies for autonomy improve, many and eventually most of those cars will have detectors and software that will enable them to drive autonomously, which means the potential market is enormous.

The impact of image-analysis technologies such as facial recognition will also be transformative. Government use of such technologies is already widespread, and commercial use will increase as capabilities do.
Video surveillance software is already a $650 million annual market, according to a June report by IMS Research.

Just as the commercial stakes for those and other applications of machine learning are high, so too are the broader questions the new capabilities raise. How does the nature of privacy change when it becomes possible not only to record audio and video on a mass scale but also to reliably extract data, such as people's identities or transcripts of their conversations, from those recordings? The difficult nature of those questions means they have largely escaped public discussion, even as the debate over National Security Agency surveillance programs has intensified in recent weeks following Edward Snowden's disclosures.

Li Deng, a principal researcher at