“Explainable AI” looks into the “brain” of artificial intelligence (AI) and can explain how algorithms reach their decisions. This is an important step, because the new General Data Protection Regulation (GDPR) requires traceability. In an interview, Sven Krüger, former Chief Marketing Officer at T-Systems, discusses the link between AI and GDPR.
AI decisions must be traceable
However, the demand for transparency is usually difficult to meet. What exactly happens during machine learning is often hidden in a black box. Even the programmers are in the dark when it comes to explaining how the AI reaches its decisions. This is why, for example, Microsoft Research’s Kate Crawford calls for key public institutions in the areas of criminal justice, health, welfare, and education to stop using algorithms. Too many AI programs, according to the expert, have been found to rest on discriminatory tendencies or erroneous assumptions. Machines decide with high consistency, but with unsuitable programming they are also consistently wrong.
AI is relevant in more and more areas of life, and its importance will continue to grow. It can do many things: make medical diagnoses, buy or sell stocks for us, check our credit history, analyze entire business reports, or select job applicants. Software evaluates us according to mathematical criteria using so-called “scoring” methods. The GDPR therefore prescribes a “right to explanation” to protect every individual. This means: if an affected person submits a request, institutions or companies must be able to reasonably explain an AI decision or risk assessment.
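To make the idea of an explainable scoring decision concrete, here is a minimal, purely hypothetical sketch: a linear credit-scoring model whose per-feature contributions can be reported back to the affected person. The feature names, weights, and threshold are invented for illustration and are not taken from any real scoring system.

```python
# Hypothetical, minimal sketch of an explainable "scoring" decision:
# a linear model whose per-feature contributions can be listed for the
# affected person. All names, weights, and the threshold are invented.

weights = {"income": 0.6, "years_employed": 0.3, "open_loans": -0.4, "late_payments": -0.8}
applicant = {"income": 1.2, "years_employed": 0.5, "open_loans": 2.0, "late_payments": 1.0}

# Each feature's contribution to the final score is weight * value.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())
decision = "approved" if score >= 0.0 else "declined"

print(f"score = {score:.2f} -> {decision}")
for name, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name:15s} contributed {value:+.2f}")
```

In such a transparent model, the explanation is simply the list of contributions; the difficulty the article describes arises precisely because neural networks do not decompose this easily.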
Machine learning reveals cases of fraud
This is where it becomes difficult. “The legality of decisions can only be examined by those who know and understand the underlying data, sequence of action, and weighting of the decision criteria,” writes legal scholar Mario Martini in JuristenZeitung (JZ). Scientists around the world are working on making such explanations possible. Their research field: explainable artificial intelligence. Or, catchier: XAI. Explainable artificial intelligence, or explainable machine learning, aims to look inside the electronic brain. The consulting firm PricewaterhouseCoopers (PwC), for example, places XAI among the ten most important technology trends in artificial intelligence.
However, the literally enlightening view into the black box is difficult because neural networks have a very complex structure. Decisions are the result of the interaction of thousands of artificial neurons, arranged in tens to hundreds of interconnected layers; with their many interconnections, they model the neural networks of the human brain. In Berlin, scientists are now also taking up the virtual dissecting knife: the Machine Learning research group at the Fraunhofer Heinrich Hertz Institute (HHI) has developed a method called Layer-wise Relevance Propagation (LRP). Research Director Wojciech Samek and his team first published the method in 2015 and have already presented it at CeBIT.
LRP traces the decision process of a neural network backwards: the researchers record which groups of artificial neurons are activated where, and what partial decisions they make. They then determine how much each of these contributions influenced the final result.
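The following is a minimal sketch of this backward-tracing idea, using the widely published epsilon rule of LRP on a tiny two-layer ReLU network with random weights. The layer sizes, weights, and epsilon value are illustrative assumptions; this is not the Fraunhofer HHI reference implementation.

```python
import numpy as np

# Minimal sketch of Layer-wise Relevance Propagation (epsilon rule) on a
# tiny two-layer ReLU network. Sizes, weights, and eps are assumptions.

rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 6)), np.zeros(6)),  # input (4 features) -> hidden (6 units)
    (rng.standard_normal((6, 2)), np.zeros(2)),  # hidden -> output (2 classes)
]

def forward(x):
    """Run the network, keeping each layer's input activations for LRP."""
    activations = [x]
    for i, (W, b) in enumerate(layers):
        z = x @ W + b
        x = z if i == len(layers) - 1 else np.maximum(0.0, z)  # ReLU on hidden layers only
        activations.append(x)
    return activations

def lrp(activations, eps=1e-6):
    """Propagate relevance from the predicted class back to the input features."""
    output = activations[-1]
    R = np.zeros_like(output)
    R[np.argmax(output)] = output[np.argmax(output)]  # start at the winning class score
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z = a @ W + b + eps       # stabilized pre-activations of this layer
        s = R / z                 # relevance per output unit of the layer
        R = a * (s @ W.T)         # redistribute relevance onto the layer's inputs
    return R

x = rng.standard_normal(4)        # one example input
acts = forward(x)
print("network output:", acts[-1])
print("relevance of each input feature:", lrp(acts))
```

The resulting relevance scores indicate how strongly each input feature contributed to the class the network chose, which is the kind of per-input attribution the article describes.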