In recent years, deep learning (DL) has become one of the most promising approaches to artificial intelligence. This class of brain-inspired machine learning architectures and techniques allows computational models composed of multiple processing layers to automatically learn representations of raw data at multiple levels of abstraction. DL enables, for example, feedforward or recurrent deep neural networks (DNNs) to approximate any continuous function or dynamical system with arbitrary precision in a computationally efficient way, ultimately rendering them very promising tools with unprecedented task performance across many application domains in business, government, and science.
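To make the approximation claim concrete, it can be read as an instance of the classical universal approximation theorem for single-hidden-layer feedforward networks; the formulation below follows the standard results of Cybenko, Hornik, and Leshno and is given here only for illustration. For every continuous function $f \colon K \to \mathbb{R}$ on a compact set $K \subset \mathbb{R}^{n}$, every continuous non-polynomial activation $\sigma$, and every $\varepsilon > 0$, there exist $N \in \mathbb{N}$ and parameters $\alpha_{i}, b_{i} \in \mathbb{R}$ and $w_{i} \in \mathbb{R}^{n}$ such that
\[
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_{i}\, \sigma\!\left(w_{i}^{\top} x + b_{i}\right) \right| < \varepsilon .
\]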
In a forthcoming paper, we focus on the increasing use and evaluation of DNNs as models in science. Brain and cognitive scientists, for example, use them as explanatory models of brain functions such as visual object recognition in primates and, at the same time, evaluate them as such. What initially sounds unproblematic turns out to rest on a questionable assumption, namely that these tools can straightforwardly be evaluated as scientific models. The reason is the following: scientists should not evaluate something as a model if they are not justified in conceiving of it as a model. We will argue that this lack of justification indeed obtains for some DNNs in science. But what exactly is a scientific model?
In the absence of a widely accepted account of the nature of models, we will characterize scientific models as functional entities that serve as carriers of scientific knowledge. In other words: something is a model if it functions as such. In the literature, one finds two main functional characterizations of what models are, namely the instantial and the representational conception of models. The instantial conception takes anything to be a model that satisfies a theory, whereas the representational conception conceives of models as things that are used to represent target systems (e.g., parts of the world). We will argue that some DNNs that are supposed to be models can fulfil neither of these roles, as they are not interpretable due to the black box problem. In these cases, scientists are not justified in conceiving of the DNNs as models, because the DNNs in question do not function as such for them. Consequently, scientists should not evaluate them as such. But why is this even important?
Well, to mistakenly conceive of something as a model is not just a semantic problem, but also a problem for further inquiry, as it can have a negative impact on asking the right research questions at the right time. For example, it seems premature to ask how well a DNN represents its supposed target system as long as we fail to establish a representational relationship between them due to the non-interpretability of the DNN. Why is that so? Following Weisberg, who takes a model to be an interpreted structure, an interpretation sets up relations of denotation between the DNN and the target system and gives us fidelity criteria for evaluating the goodness of fit between the model and the target system. So, if scientists are not able to interpret a DNN, they are, among other things, not able to set up relations of denotation between the DNN and its supposed target system. Accordingly, the DNN in question cannot function as a representation of its supposed target system; and since such a DNN does not function as intended, namely as a representational model of, for example, brain functions, scientists are not justified in conceiving of it as such. Why, then, should we evaluate it as such? Why should we waste our time with questions concerning the goodness of fit between a model and its target system when the construction of the model in question is not even finished? Now you might ask yourself: is there a way to remedy or avoid these shortcomings?
We think there is. Recently developed analytical techniques from explainable AI (XAI), such as diagnostic classification, feature-detector identification, and input heatmapping, could help us make the DNNs in question interpretable; a sketch of one such technique follows below. In this sense, XAI is not only a solution to the black box problem, but is becoming an important tool in the model-building process itself. It is XAI that makes some DNNs scientific models in the first place.
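To give a concrete flavour of one such technique, the following minimal sketch (in Python, using the PyTorch library) computes a gradient-based input heatmap, that is, a saliency map indicating which input pixels most strongly influence a network's prediction. The toy classifier and the random input are illustrative placeholders and are not taken from any particular study.

# A minimal sketch of gradient-based input heatmapping (saliency mapping).
# The toy classifier and random input below are illustrative placeholders.
import torch
import torch.nn as nn

# Small convolutional classifier standing in for a trained DNN
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Stand-in input image; gradients with respect to its pixels form the heatmap
image = torch.rand(1, 3, 64, 64, requires_grad=True)

# Forward pass, then backpropagate the top class score to the input pixels
scores = model(image)
scores[0, scores.argmax(dim=1).item()].backward()

# Pixels with large gradient magnitude most influence the prediction,
# offering one (partial) window into the black box
saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape: (64, 64)

Applied to a trained network and a real stimulus, such a heatmap can then be inspected and compared with, for example, known response properties of the supposed target system, which is precisely the kind of interpretive step the argument above requires.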