Intelligent Machines Also Need Control

Artificial intelligence research lacks women in leadership positions

For most people, hearing the words mathematics, physics and programming in a single sentence would be reason enough to discreetly but swiftly change the subject. Not so for Dr. Marina Höhne, a passionate mathematician, postdoctoral researcher in TU Berlin’s Machine Learning Group led by Professor Klaus-Robert Müller, and Junior Fellow at the Berlin Institute for the Foundations of Learning and Data (BIFOLD). Since February 2020, the 34-year-old mother of a four-year-old son has been leading her own research group, Understandable Machine Intelligence (UMI Lab), with funding from the Federal Ministry of Education and Research (BMBF). In 2019, the BMBF published the call “Förderung von KI-Nachwuchswissenschaftlerinnen” (funding for female early-career researchers in AI) with the aims of increasing the number of qualified women in AI research in Germany and strengthening the influence of female researchers in this area over the long term.

“The timing of the call was not ideal for me, as it came more or less right after one year of parental leave,” Höhne recalls. Nevertheless, she went ahead and submitted a detailed research proposal, which was approved. She was awarded two million euros in funding over a period of four years, a sum comparable to a prestigious ERC Consolidator Grant. “For me, this represents an unexpected but wonderful opportunity to gain experience organizing and leading research.”

The topic of her research is explainable artificial intelligence. “My team focuses on different aspects of understanding AI models and their decisions. A good example of this is image recognition. Although it is now possible to identify the relevant areas in an image that contribute significantly to an AI system’s decision, for example whether the image shows a dog or a cat, there is still no method that provides a holistic understanding of an AI model’s behavior. However, in order to be able to use AI models reliably in safety-critical areas such as medicine or autonomous driving, we need transparent models. We need to know how a model behaves before we use it, in order to minimize the risk of its behaving wrongly,” says Höhne, outlining her research approach. Among other things, this focuses on the use of so-called Bayesian neural networks to obtain information about the uncertainties of decisions made by an AI system and then present these in a way that is understandable for humans.
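The core idea behind using Bayesian neural networks this way is that the network's weights are treated as a distribution rather than fixed numbers, so one can sample many plausible models and read off their disagreement as uncertainty. A minimal toy sketch of that idea (all weights, inputs and the tiny "model" here are invented for illustration and are not taken from Höhne's work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a Bayesian neural network: instead of one fixed
# weight vector, we keep a (hypothetical) posterior distribution over
# weights and sample concrete models from it.
w_mean = np.array([0.8, -0.5, 0.3])   # assumed posterior mean
w_std = np.array([0.05, 0.2, 0.1])    # assumed posterior std deviation

def predict(x, w):
    """One sampled 'model': a logistic score for a single class (e.g. 'cat')."""
    return 1.0 / (1.0 + np.exp(-x @ w))

x = np.array([1.0, 2.0, -1.0])  # one toy input

# Sample many weight configurations and collect their predictions.
samples = [predict(x, rng.normal(w_mean, w_std)) for _ in range(1000)]

mean_pred = np.mean(samples)   # the pooled decision of all sampled models
uncertainty = np.std(samples)  # how strongly the sampled models disagree
```

A large spread among the sampled predictions signals a decision the user should treat with caution, which is exactly the kind of information Höhne wants to surface in a human-understandable form.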

AI models in safety-relevant areas need transparency


Marina Höhne studied Technomathematics at FH Aachen and Technische Universität Berlin. In 2017, she completed her PhD in machine learning at TU Berlin, supervised by Prof. Dr. Klaus-Robert Müller. Since 2020, she has headed her own junior research group, Understandable Machine Intelligence (UMI Lab), funded by the German Federal Ministry of Education and Research. She is a Junior Fellow at the Berlin Institute for the Foundations of Learning and Data (BIFOLD) and an associated professor at the University of Tromsø in Norway.
Marina Höhne is married and the mother of a son.

To achieve this, many different AI models are generated, each of which makes its decisions based on slightly different weightings. Their explanations are then pooled and displayed as a heat map. Applied to image recognition, this means that the pixels that contribute significantly to the decision of what an image depicts, cat or dog, across all models are clearly marked, while pixels that only some of the models use in reaching their decision are marked faintly.

“Our findings could prove particularly useful in the area of diagnostics. For example, explanations with a high model certainty could help to identify tissue regions with the highest probability of cancer, speeding up diagnosis. Explanations with high model uncertainties, on the other hand, could be used for AI-based screening applications to reduce the risk of overlooking important information in a diagnostic process,” says Höhne.

Today, Marina Höhne’s team consists of three doctoral researchers and four student assistants. The hiring process presented Höhne, who is now also an associated professor at the University of Tromsø in Norway, with a problem of a very particular nature: “My aim is to build a diverse and heterogeneous team, in part to combat the pronounced gender imbalance in machine learning. My job posting for the three PhD positions received twenty applications, all from men. At first, I was at a loss what to do. Then I posted the jobs on Twitter to reach out to qualified women candidates. I am still amazed at the response: around 70,000 people read the tweet and it was retweeted many times, so that in the end I had a diverse and qualified pool of applicants to choose from,” Höhne recalls. She finally appointed two women and one man.

Her goal: a heterogeneous and diverse working group

Höhne knows first-hand how difficult it can still be for women to combine career and family. At the time of her doctoral defense, she was nine months pregnant. She recalls: “I had been wrestling for some time with the decision to either take a break or complete my doctorate. In the end, I opted for the latter.” Her decision proved a good one: she completed her doctorate summa cum laude while also becoming more keenly aware of the issue of gender parity in academia.

Höhne already knew which path she wanted to pursue at the start of her master’s program in Technomathematics. “I was immediately won over by Klaus-Robert Müller’s lecture on machine learning,” she recalls. She began working in his group as a student assistant during her master’s program and made a seamless transition to her doctorate. “I did my doctorate through an industry cooperation with the company Otto Bock, working first in Vienna for two years and then at TU Berlin. One of the areas I focused on was developing an algorithm to help prosthesis users adjust quickly and effectively to new motion sequences after each fitting,” says Höhne. After the enriching experience of working directly with patients, she returned to basic research at the university. “Understandable artificial intelligence, combined with exciting applications such as medical diagnostics and climate research – that is my passion. When I am sitting in front of my programs and formulas, it is like I am in a tunnel – I don’t see or hear anything else.”