Learning machines, such as complex Deep Neural Networks (DNNs), are deployed in critical domains such as medicine and finance. Because these models affect human lives, humans must be able to inspect them thoroughly.
Unfortunately, there is currently a trade-off between the complexity of a neural model and the ability of humans to explain or interpret its decisions and representations. This thesis contributes to efforts that counter the opaqueness of DNNs, in particular in the language domain.