GeSiMEx project members Carlos Zednik and Hannes Boelsen will be presenting a paper on "The Exploratory Role of Explainable Artificial Intelligence" at the upcoming meeting of the Philosophy of Science Association!
The paper suggests three ways in which recently developed tools from Explainable AI can drive scientific exploration and discovery, over and above the opaque, ML-driven models to which these tools are applied. First, they can be used to characterize, and thus better understand, regularities that an ML model has identified in the data. Second, they can be used to derive new testable causal hypotheses from the data, by identifying counterfactual inputs to the ML model that are predicted to result in an alternative outcome (a minimal sketch of this kind of counterfactual search follows below). Third, they can be used to articulate new testable hypotheses in cognitive neuroscience, e.g. by revealing the representations and algorithms that deep neural networks deploy in the context of a naturalistic task environment. Together, these examples show that Explainable AI plays a vital but so far underappreciated role in scientific discovery and exploration.
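To make the second point concrete: counterfactual explainers search for a minimally altered input that the model classifies differently, and each such counterfactual can be read as a candidate causal hypothesis about the target system. The sketch below shows one common way such a search works, gradient-based optimization in the spirit of Wachter et al.'s counterfactual explanations; the toy model, the function name `find_counterfactual`, and parameters such as `lambda_dist` are illustrative assumptions, not taken from the paper.

```python
# A minimal, hypothetical sketch of counterfactual search (in the spirit of
# Wachter et al. 2017). All names and the toy model below are illustrative.
import torch

def find_counterfactual(model, x, target_class, lambda_dist=0.1,
                        steps=500, lr=0.05):
    """Search for an input x' near x that the model predicts as target_class."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf.unsqueeze(0))
        # Trade off flipping the prediction against staying close to x:
        # cross-entropy pushes toward the target class, the L1 term keeps
        # the counterfactual a *minimal* change to the original input.
        loss = (torch.nn.functional.cross_entropy(logits, target)
                + lambda_dist * torch.norm(x_cf - x, p=1))
        loss.backward()
        optimizer.step()
    return x_cf.detach()

# Toy usage: a hypothetical two-class linear model over four features.
model = torch.nn.Linear(4, 2)
x = torch.randn(4)
original_class = model(x.unsqueeze(0)).argmax().item()
x_cf = find_counterfactual(model, x, target_class=1 - original_class)
print("original prediction:", original_class)
print("counterfactual prediction:", model(x_cf.unsqueeze(0)).argmax().item())
print("change in input:", x_cf - x)
```

The printed difference `x_cf - x` is the interesting output here: which features had to change, and by how much, for the model's prediction to flip. It is exactly this kind of output that can be reinterpreted as a testable hypothesis about the phenomenon the model was trained on.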
Although the PSA meeting seems likely to be postponed or held entirely online, this will be a great opportunity to discuss how methods and theories developed in AI generalize to cognitive neuroscience.
A preprint of the paper will be uploaded shortly and linked here.