{"id":1026,"date":"2020-01-28T01:47:06","date_gmt":"2020-01-28T01:47:06","guid":{"rendered":"https:\/\/people.utm.my\/azhari\/?p=1026"},"modified":"2020-01-28T01:47:14","modified_gmt":"2020-01-28T01:47:14","slug":"light-in-the-blackbox-ai","status":"publish","type":"post","link":"https:\/\/people.utm.my\/azhari\/2020\/01\/28\/light-in-the-blackbox-ai\/","title":{"rendered":"Light in the Blackbox AI"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"\" src=\"https:\/\/www.t-systems.com\/image\/841972\/5x2\/fc\/1668\/667\/9a01deda3c919ca9d1a391a1eaeefe80\/ws\/im-explainable-ai-jpg.jpg\" alt=\"The Explainable AI research field has a new driver: the European General Data Protection Regulation.\" width=\"598\" height=\"239\" \/><\/p>\n<h2><img loading=\"lazy\" decoding=\"async\" class=\"external-video-preview-img\" src=\"https:\/\/www.t-systems.com\/image\/842116\/16x9\/980\/551\/920876c2f60c9f4259306a94fac7bf81\/TD\/im-explainable-ai-jpg.jpg\" alt=\"Light into the Blackbox AI\" width=\"427\" height=\"240\" \/>&#8220;Explainable AI&#8221; looks into the &#8220;brain of artificial intelligence&#8221; (AI) and can explain how algorithms make their decisions. This is an important step because the new General Data Protection Regulation (GDPR) requires traceability. In an interview, Sven Kr\u00fcger, former Chief Marketing Officer at T-Systems, discusses the link between AI and GDPR.<\/h2>\n<div>\n<h3>AI decisions must be traceable<\/h3>\n<\/div>\n<div>\n<div class=\"t4-paG\">The demand for transparency, however, is often difficult to meet. What exactly happens during machine learning is often hidden in a black box. Even the programmers are in the dark when asked how the AI makes its decisions. 
Which is why, for example, <a title=\"Partnership with Microsoft\" href=\"https:\/\/www.t-systems.com\/de\/en\/about-t-systems\/partner\/digital-workplace\/microsoft-225410\" target=\"_self\" rel=\"noopener noreferrer\" data-rel=\"content.link\">Microsoft<\/a> Research\u2019s Kate Crawford calls for key public institutions in the areas of criminal justice, health, welfare, and education to stop using algorithms. Too many AI programs, according to the expert, have been found to contain discriminatory tendencies or erroneous assumptions. Machines decide with high consistency, but with unsuitable programming they are also consistently wrong.<\/div>\n<\/div>\n<div>\n<div class=\"t4-paG\">AI is relevant in more and more areas of life, and its importance will continue to grow. It can do many things: make medical diagnoses, buy or sell stocks for us, check our credit history, analyze whole business reports, or select job applicants. Software evaluates us according to certain mathematical criteria using so-called \u201cscoring\u201d methods. The GDPR therefore prescribes a \u201cright to explanation\u201d for the protection of every single person. This means: if an affected person submits a request, institutions or companies must be able to reasonably explain an AI decision or risk assessment.<\/div>\n<\/div>\n<div>\n<h3>Machine learning reveals cases of fraud<\/h3>\n<\/div>\n<div>\n<div class=\"t4-paG\">This is where it becomes difficult. \u201cThe legality of decisions can only be examined by those who know and understand the underlying data, sequence of action, and weighting of the decision criteria,\u201d writes legal scholar Mario Martini in JuristenZeitung (JZ). Scientists around the world are working on exactly this problem. Their research field: explainable artificial intelligence. Or, sexier: XAI. 
Explainable artificial intelligence or explainable <a title=\"&quot;Expect no wonders\u201d.\" href=\"https:\/\/www.t-systems.com\/en\/best-practice\/02-2018\/focus\/forecasting-software\/predictive-policing-project-811032\" target=\"_self\" rel=\"noopener noreferrer\" data-rel=\"content.link\">machine learning<\/a> aims to look inside the electronic brain. The consulting firm PricewaterhouseCoopers (PwC), for example, places XAI on its list of the ten most important technology trends in artificial intelligence.<\/div>\n<\/div>\n<div>\n<div class=\"t4-paG\">However, the literally enlightening view into the black box is difficult because neural networks have a very complex structure. Decisions are the result of the interaction of thousands of artificial neurons. These are arranged in tens to hundreds of interconnected layers \u2013 with their diverse interconnections, they model the neural networks of the human brain. In Berlin, scientists are now also applying the virtual dissecting knife: the research group Machine Learning at the Fraunhofer Heinrich Hertz Institute (HHI) has developed a method called <a href=\"https:\/\/www.zeit.de\/digital\/internet\/2017-03\/kuenstliche-intelligenz-black-box-transparenz-fraunhofer-hhi-darpa\" target=\"_blank\" rel=\"noopener noreferrer\" data-rel=\"content.link\">Layer-wise Relevance Propagation<\/a> (LRP). Research Director Wojciech Samek and his team first published the method in 2015 and have since presented it at CeBIT.<\/div>\n<\/div>\n<div>\n<div class=\"t4-paG\">LRP traces back the decision process of a neural network: the researchers record which groups of artificial neurons are activated and where \u2013 and what decisions they make. 
They then determine how much each individual decision influenced the result.<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>&#8220;Explainable AI&#8221; looks into the &#8220;brain of artificial intelligence&#8221; (AI) and can explain how algorithms make their decisions. This is an important step because the new General Data Protection Regulation (GDPR) requires traceability. In an interview, Sven Kr\u00fcger, former Chief Marketing Officer at T-Systems, discusses the link between AI and GDPR. AI decisions must be [&hellip;]<\/p>\n","protected":false},"author":14428,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[3],"tags":[],"class_list":["post-1026","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/people.utm.my\/azhari\/wp-json\/wp\/v2\/posts\/1026","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/people.utm.my\/azhari\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/people.utm.my\/azhari\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/people.utm.my\/azhari\/wp-json\/wp\/v2\/users\/14428"}],"replies":[{"embeddable":true,"href":"https:\/\/people.utm.my\/azhari\/wp-json\/wp\/v2\/comments?post=1026"}],"version-history":[{"count":0,"href":"https:\/\/people.utm.my\/azhari\/wp-json\/wp\/v2\/posts\/1026\/revisions"}],"wp:attachment":[{"href":"https:\/\/people.utm.my\/azhari\/wp-json\/wp\/v2\/media?parent=1026"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/people.utm.my\/azhari\/wp-json\/wp\/v2\/categories?post=1026"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/people.utm.my\/azhari\/wp-json\/wp\/v2\/tags?post=1026"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}