FCW : July 15, 2013
Future tech

At present, facial recognition programs might seem to be of interest only to law enforcement and intelligence agencies. But as the systems become more robust and effective, other agencies will have to decide whether and how to use them. The technology has broad potential but also threatens to encroach fundamentally on privacy.

In a sense, the machine learning algorithms for facial recognition are doing something analogous to speech recognition. Just as speech-recognition programs don't try to match sounds against all possible words that might have generated those sounds, the new generation of face-recognition techniques doesn't attempt to match patterns one by one. Instead, the learning methodology allows it to discern global structure in a way loosely analogous to human perception.

Autonomous vehicles

The pace of such progress can perhaps best be seen in the case of autonomous cars. In 2004, DARPA ran a race in which autonomous cars had to navigate a 150-mile desert route. None of the 21 teams finished. The best-performing team, from Carnegie Mellon University, traveled a little more than 7 miles. In 2005, five teams finished DARPA's 132-mile course. Last year, Google announced that about a dozen of its autonomous cars had driven more than 300,000 miles. Suddenly, DARPA's efforts to bring driverless vehicles to the battlefield look a lot closer to reality.

Many elements must come together for this to work. Chris Urmson, engineering lead for Google's self-driving cars, previously led the Carnegie Mellon team behind Boss, which won the 2007 DARPA Urban Challenge. He said autonomous vehicles combine information from many sources. Boss had 17 sensors, including nine lasers, four radar systems and a Global Positioning System device. It had a reliable software architecture broken down into a perception component, a mission-planning component and a behavioral executive. For autonomous cars to work well, all those elements must perform reliably.
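The three-layer structure described for Boss can be sketched in a few lines of Python. Every class, method and threshold below is a hypothetical illustration of the perception / mission-planning / behavioral-executive split, not Boss's actual software.

```python
# Hypothetical sketch of a Boss-style three-layer architecture:
# perception -> mission planning -> behavioral executive.
# Names and thresholds are invented for illustration.

class Perception:
    """Fuses raw sensor readings into a world estimate."""
    def fuse(self, sensor_readings):
        # Toy fusion: average the distance reported by each sensor.
        return {"obstacle_distance_m": sum(sensor_readings) / len(sensor_readings)}

class MissionPlanner:
    """Chooses a high-level goal from the world estimate."""
    def next_goal(self, world):
        return "advance" if world["obstacle_distance_m"] > 10 else "replan"

class BehavioralExecutive:
    """Turns the current goal into a concrete driving behavior."""
    def act(self, goal):
        return {"advance": "hold_lane", "replan": "slow_and_yield"}[goal]

def drive_step(sensor_readings):
    """One pass through the pipeline: sense, plan, act."""
    world = Perception().fuse(sensor_readings)
    goal = MissionPlanner().next_goal(world)
    return BehavioralExecutive().act(goal)

print(drive_step([12.0, 15.0, 14.0]))  # distant obstacle -> hold_lane
print(drive_step([4.0, 6.0, 5.0]))     # close obstacle -> slow_and_yield
```

The point of the layering is the one Urmson describes: each component can be made reliable on its own, and the vehicle works only when all of them perform together.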
But the mind of an autonomous car, the part that's fundamentally new (as opposed to the chassis or the engine), consists of algorithms that allow it to learn from its environment, much as speech recognizers learn to recognize words from vibrations in the air, or facial recognizers find and match faces in a crowd. The ability to program algorithms that learn implicit rules of behavior is what has allowed autonomous cars to become so much more capable so quickly.

A 2012 report by consulting firm KPMG predicts that self-driving cars will be sold to the public by 2019. In the meantime, the Transportation Department's Intelligent Transportation Systems Joint Program Office is figuring out how the widespread deployment of technologies that will enable autonomy will work in the coming years. DOT's effort is focused on determining how to change roads in ways that will enable autonomous vehicles.

Besides the technical challenges, that effort raises a sticky set of liability issues. For instance, if an autonomous car driving on a smart road crashes because of a software glitch, who will be held responsible: the car's owner, the car's passenger, the automaker or the company that wrote the software for the road?

Clearly, autonomy in automobile navigation presents a difficult set of challenges, but it might be one of the areas in which robots first see large-scale deployment. That is because although part of what needs to be done (perceiving the environment) is hard, another part (moving around in it) is relatively easy. It is far simpler to program a car to move on wheels than it is to program a machine to walk. Cars also need to process only minimal linguistic information compared with, say, a household robot.

Groups such as Peter Stone's at the University of Texas, which won first place in the 2012 Robot Soccer World Cup, and Yiannis Aloimonos's at the University of Maryland are creating robots that can learn.
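"Learning an implicit rule" can be illustrated with the simplest possible case: instead of hand-coding how stopping distance relates to speed, fit the rule from observed examples. The data points and the least-squares approach below are invented for illustration and are far simpler than anything a real vehicle uses.

```python
# Toy illustration of learning a rule from examples rather than
# hand-coding it: fit stopping distance as a function of speed
# with ordinary least squares. All data points are invented.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Invented observations: (speed in m/s, observed stopping distance in m).
speeds    = [5.0, 10.0, 15.0, 20.0]
distances = [6.0, 11.0, 16.0, 21.0]  # here, exactly distance = speed + 1

a, b = fit_line(speeds, distances)
print(round(a, 3), round(b, 3))  # recovers slope 1.0, intercept 1.0
```

The learned rule was never written down explicitly; it emerged from the data, which is the essential shift the article describes, scaled up enormously in real systems.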
Stone's winning team relied on many explicitly encoded rules. However, his group is also working on lines of research that teach robots how to walk faster using machine learning techniques. Stone's robots also use learning to figure out how best to take penalty kicks.

Aloimonos is working on an even more ambitious European Union-funded project called Poeticon++, which aims to create a robot that can not only manipulate objects such as balls but also understand language. Much as autonomous vehicle teams have created a grammar for driving (breaking down, say, a U-turn at a busy intersection into its constituent parts), Aloimonos aims to describe a language for how people move. Once the constituent parts of motions, called kinetemes (for instance, rotating a knee or shoulder joint around a given axis), have been described, robots can learn how to compose them into actions that mimic human behavior.

This is all very ambitious, of course. But if machine learning techniques continue to improve in the next five years as much as they have in the past five, they will allow computers to become powerful in fundamentally new ways. Autonomous cars will be just the beginning. ■

"The new generation of learning techniques holds the promise of not only matching human performance but also exceeding it."
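The kineteme idea, primitive joint rotations composed into larger actions, can be sketched as data plus a composition step. The primitive names, joint axes and the "wave" action below are entirely hypothetical; they illustrate the structure of the approach, not Poeticon++'s actual representation.

```python
# Hypothetical sketch of kinetemes: motion described as primitive
# joint rotations, composed into larger actions. All names and
# angles are invented for illustration.

# Each kineteme: (joint, axis, degrees of rotation).
KINETEMES = {
    "raise_arm":  ("shoulder", "pitch", -90),
    "bend_elbow": ("elbow",    "pitch",  45),
    "turn_wrist": ("wrist",    "roll",   30),
}

def compose(primitive_names):
    """Compose named kinetemes into an ordered motion plan."""
    return [KINETEMES[name] for name in primitive_names]

# An invented "wave" action built from four primitive rotations.
wave = compose(["raise_arm", "bend_elbow", "turn_wrist", "bend_elbow"])
print(len(wave), wave[0])
```

Treating motion as a vocabulary of primitives is what makes the linguistic analogy work: once primitives exist, learning which sequences mimic human behavior becomes a composition problem rather than a control problem.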