{"id":5739,"date":"2016-10-18T08:45:55","date_gmt":"2016-10-18T00:45:55","guid":{"rendered":"http:\/\/people.utm.my\/haslinasarkan\/?p=5739"},"modified":"2016-10-14T08:50:43","modified_gmt":"2016-10-14T00:50:43","slug":"why-ai-makes-it-hard-to-prove-that-self-driving-cars-are-safe","status":"publish","type":"post","link":"https:\/\/people.utm.my\/haslinasarkan\/why-ai-makes-it-hard-to-prove-that-self-driving-cars-are-safe\/","title":{"rendered":"Why AI Makes It Hard to Prove That Self-Driving Cars Are Safe"},"content":{"rendered":"<p>This article is taken from\u00a0here.<\/p>\n<p>By <span class=\"author-name\">Andrew Silver<\/span><\/p>\n<div>Posted <label>7 Oct 2016<\/label><\/div>\n<div><\/div>\n<div>\n<p>Car manufacturers will have difficulty demonstrating just how safe\u00a0self-driving vehicles are because of what\u2019s at\u00a0the\u00a0core of their smarts: machine learning.<\/p>\n<p>\u201cYou can\u2019t just assume this stuff is going to work,\u201d says Phillip Koopman, a computer scientist at Carnegie Mellon University who works in the automotive industry.<\/p>\n<\/div>\n<p><img fetchpriority=\"high\" decoding=\"async\" class=\"aligncenter size-medium wp-image-5740\" src=\"https:\/\/people.utm.my\/haslinasarkan\/files\/2016\/10\/MjgxNzUwMQ-300x225.jpeg\" alt=\"mjgxnzuwmq\" width=\"300\" height=\"225\" srcset=\"https:\/\/people.utm.my\/haslinasarkan\/wp-content\/uploads\/sites\/617\/2016\/10\/MjgxNzUwMQ-300x225.jpeg 300w, https:\/\/people.utm.my\/haslinasarkan\/wp-content\/uploads\/sites\/617\/2016\/10\/MjgxNzUwMQ.jpeg 640w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p>\n<p style=\"text-align: center\"><em>Photo: David Paul Morris\/Bloomberg\/Getty Images<\/em><\/p>\n<p style=\"text-align: center\"><em>A member of the media test drives a Tesla Motors Model S equipped with Autopilot in Palo Alto, Calif., last fall.<\/em><\/p>\n<p style=\"text-align: center\">\n<p>Car manufacturers will have difficulty demonstrating just how safe\u00a0self-driving vehicles are because of what\u2019s at\u00a0the\u00a0core of their smarts: machine learning.<\/p>\n<p>\u201cYou can\u2019t just assume this stuff is going to work,\u201d says Phillip Koopman, a computer scientist at Carnegie Mellon University who works in the automotive industry.<\/p>\n<p>In 2014, a market research firm projected\u00a0that the self-driving car market will be worth $87 billion by 2030. Several companies, including\u00a0Google,\u00a0Tesla, and\u00a0Uber, are experimenting with computer-assisted or fully autonomous driving projects\u2014with varying success because of the myriad technical obstacles that must be overcome.<\/p>\n<p>Koopman is one of several researchers who\u00a0believe that the nature of machine learning makes verifying that\u00a0these autonomous vehicles will operate safely very challenging.<\/p>\n<p>Traditionally, he says, engineers write computer code to meet requirements and then perform tests to check that it met them.<\/p>\n<p>But with machine learning, which lets a computer grasp complexity\u2014for example, processing images taken at different hours of the day, yet still identifying important objects in a scene like\u00a0crosswalks and stop signs\u2014the process is not so straightforward. According to Koopman,\u00a0\u201cThe [difficult thing about] machine learning is that you don\u2019t know how to write the requirements.\u201d<\/p>\n<p>Years ago, engineers realized that\u00a0analyzing images from cameras is a problem that can\u2019t be solved by traditional software. 
“This is an inherent risk and failure mode of inductive learning,” Koopman says. If you look inside the model to see what it does, all you get are statistical numbers. It’s a black box; you don’t know exactly what it’s learning, he says.

To make things more concrete, imagine that you’re test-driving your self-driving car and want it to learn how to avoid pedestrians, so you have people in orange safety shirts stand around and let the car loose. It might be learning to recognize hands, arms, and legs, or it might just be learning to recognize an orange shirt.

Or, more subtly, imagine that you conducted the training during the summer, when nobody wore a hat, and the first hat the car sees on the street freaks it out.

There’s “an infinite number of things” that the algorithm might be training on, he says.

Google researchers once tried identifying dumbbells with an artificial neural network, a common machine learning model that mimics the neurons in the brain and their connections. Surprisingly, the trained model could identify dumbbells in images only when an arm was attached.

Other problems with safety verification, Koopman says, include training and testing the algorithm on data that is too similar; it’s like memorizing flash cards and regurgitating the information on an exam.

If Uber dropped its self-driving cars in a random city where it hasn’t exhaustively honed its computer maps, he says, they might not work as well as expected. There’s an easy fix: if you train only on, and operate only in, downtown Pittsburgh (which Uber has mapped), that could be okay, but it’s a limitation to be aware of.

There’s also the challenge of ensuring that small changes in what the system perceives, perhaps because of fog, dust, or mist, don’t affect what the algorithms identify. Research conducted in 2013 found that changing individual pixels in an image, a change invisible to the unaided eye, can trick a machine learning algorithm into thinking a school bus is not a school bus.
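That 2013 result is easy to reproduce in miniature. The sketch below is an illustration of the idea rather than the original study’s setup: it uses a plain logistic-regression classifier on synthetic “images,” where a per-pixel nudge against the model’s weights is guaranteed to cross the decision boundary while staying far below what a human eye (or the pixel noise itself) would register.

```python
# Toy reproduction of the adversarial-example effect: a per-pixel
# change far below the noise level flips the classifier's decision.
# All data is synthetic; labels 1/0 stand in for "school bus"/"not".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n, d = 500, 1024   # 500 examples per class; 32x32 "images" as vectors
X0 = rng.normal(loc=-0.05, scale=1.0, size=(n, d))   # not a school bus
X1 = rng.normal(loc=+0.05, scale=1.0, size=(n, d))   # school bus
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X0, X1]), np.array([0] * n + [1] * n)
)

# Pick an image the model correctly labels "school bus."
img = X1[model.predict(X1) == 1][0]
w = model.coef_[0]
score = model.decision_function(img.reshape(1, -1))[0]   # > 0 means "bus"

# Step each pixel slightly against the weights: just enough, spread
# across all 1,024 pixels, to push the score below the boundary.
eps = 1.1 * score / np.abs(w).sum()
adversarial = img - eps * np.sign(w)

print(f"per-pixel change: {eps:.4f} (pixel noise std is 1.0)")
print("before:", model.predict(img.reshape(1, -1)))          # [1]
print("after: ", model.predict(adversarial.reshape(1, -1)))  # [0]
```

Deep networks exhibit the same fragility in practice, which is why fog, dust, and a few odd pixels worry safety engineers: the model’s decision boundary need not resemble human perception at all.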
“You would never put such [a machine learning] algorithm into a plane, because then you cannot prove the system is correct,” says Matthieu Roy, a software dependability engineer at the National Center for Scientific Research in Toulouse, France, who has worked in both the automotive and avionics industries. If an airplane does not pass independent safety tests, it cannot take off or land, he says.

Roy says it would be too difficult to test autonomous cars against every scenario they could encounter (think of an explosion, or a plane crashing right in front of the car). “But you have to cope with all the risks that may arrive,” he says.

Alessia Knauss, a software engineering postdoc at Chalmers University of Technology in Göteborg, Sweden, is working on a study to determine the best tests for autonomous vehicles. “It’s all so costly,” she says.

She is currently interviewing auto companies to get their perspectives. Even if a car has multiple sensors that act as backups for one another, as Google’s cars do, she says, each component has to be tested based on what it does, and so do all of the systems that make use of it.

“We’ll see how much we can contribute,” Knauss says.

Koopman wants automakers to demonstrate to an independent agency why they believe their systems are safe. “I’m not so keen to take their word for it,” he says.

In particular, he wants car companies to explain the features of their algorithms, the representativeness of their training and testing data for different scenarios, and, ultimately, why their simulations show the vehicle is safe for the environments it is supposed to work in. If an engineering team simulated driving a car 10 billion miles without any hiccups, the car still wouldn’t have seen everything, but the company could argue that the scenarios it missed don’t happen very often.
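That 10-billion-mile argument is, at bottom, statistical, and a rough version of the arithmetic is easy to sketch. One common rule of thumb, the “rule of three,” says that zero failures observed in n independent trials puts the true failure rate below roughly 3/n with 95 percent confidence. Treating miles as independent trials, and the fleet figure below, are illustrative assumptions, not industry numbers.

```python
# Back-of-the-envelope version of "we simulated 10 billion miles
# without a hiccup." Rule of three: zero failures in n independent
# trials bounds the failure rate below ~3/n at 95% confidence.
# Treating each mile as an independent trial is a strong assumption.

simulated_miles = 10_000_000_000          # 10 billion, as in the text

rate_bound = 3 / simulated_miles
print(f"95% upper bound: {rate_bound:.1e} failures per mile")
# -> 3.0e-10 failures per mile

# What that bound would mean for a hypothetical national fleet:
fleet_miles_per_year = 3_000_000_000_000  # ~3 trillion miles (illustrative)
print(f"implied worst case: {rate_bound * fleet_miles_per_year:.0f} failures per year")
# -> 900 failures per year
```

The arithmetic only bounds what the simulation actually measured; arguing that the unmeasured scenarios are rare is exactly the explanation Koopman wants companies to give.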
“Every other industry that does mission-critical software has independent checks and balances,” he says.

Last month, the U.S. National Highway Traffic Safety Administration unveiled guidelines for autonomous cars, but they make independent safety testing optional.

Koopman says that with company deadlines and cost targets, safety corners can sometimes get cut, as in the 1986 NASA Challenger disaster, in which ignored risks led to the space shuttle breaking apart 73 seconds after liftoff, killing its seven crew members.

It’s possible to have independent safety checks without publicly disclosing how the algorithms work, he says. The aviation industry has engineering representatives who work inside aviation companies; it’s standard practice to have them sign nondisclosure agreements.

“I’m not telling them how to do it, but there should be some transparency,” says Koopman.