Data Science-ish


Complex systems don't necessarily need more black boxes

In the book Prediction Machines, the authors break down the anatomy of a decision task, noting that the following components must come together for an AI system to make accurate predictions: input data, model training, and feedback data. Furthermore, for the AI system to be useful, there must be clear, agreed-upon actions that follow from a prediction, based on judgments about risk.

Figure 7-1 from Prediction Machines
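
To make that anatomy concrete, here is a minimal sketch (all numbers and names are made up for illustration) of how a prediction and a judgment about risk combine into an action: the model supplies a probability, judgment supplies the cost of each action under each possible outcome, and the action with the lowest expected cost wins.

```python
# Hypothetical illustration of the decision anatomy: prediction supplies a
# probability, judgment supplies the costs, and the action follows.

def choose_action(p_event: float, cost: dict) -> str:
    """Return the action with the lowest expected cost given p(event)."""
    expected = {
        "intervene": p_event * cost[("intervene", "event")]
                     + (1 - p_event) * cost[("intervene", "no_event")],
        "wait":      p_event * cost[("wait", "event")]
                     + (1 - p_event) * cost[("wait", "no_event")],
    }
    return min(expected, key=expected.get)

# Judgment step: assumed costs for each (action, true outcome) pair.
costs = {
    ("intervene", "event"):    1.0,   # acted, and the event was real
    ("intervene", "no_event"): 5.0,   # unnecessary intervention
    ("wait", "event"):        50.0,   # missed a real event
    ("wait", "no_event"):      0.0,   # correctly left alone
}

print(choose_action(p_event=0.15, cost=costs))  # -> intervene
print(choose_action(p_event=0.01, cost=costs))  # -> wait
```

If experts disagree about the costs (the judgment step), the same prediction can point to different actions, which is exactly where the framework gets strained.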

Perception tasks are a clear case where all these ingredients come together, which is why we now have AI systems that can, for example, usefully diagnose certain eye diseases.

In contrast, when predicting "social outcomes" or the outcomes of poorly understood processes, such as recidivism, hospital readmissions, and sepsis, the appropriate actions to take can be far less clear, and any two experts could reach different conclusions about the costs and benefits of a given action.

Generating useful predictions in this kind of context is still very much an open problem. The best we have are "good habits" and rules of thumb. Perhaps one rule of thumb we should add to the list is: "the most accurate models [in a complex social context] aren't necessarily the most un-interpretable/complicated ones … the standard should be changed from the assumption that interpretable models do not exist, to the assumption that they do, until proven otherwise".

This is the argument made in an article in the latest issue of the Harvard Data Science Review. As part of the Explainable AI Challenge at NeurIPS 2018, the authors built an interpretable model for credit scoring, and found that the difference in accuracy between that model and a black box model was negligible. As they say, "The false dichotomy between the accurate black box and the not-so accurate transparent model has gone too far."
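
For a rough sense of the comparison the article describes, here is a sketch that pits an interpretable logistic regression against a black-box gradient-boosted ensemble. The synthetic data is a stand-in only, not the credit data used in the challenge, and neither model is the authors' actual submission.

```python
# Sketch only: synthetic data stands in for the real credit data, and
# neither model is the challenge authors' actual submission.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A credit-scoring-like binary classification problem (synthetic).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)

models = {
    "interpretable (logistic regression)": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)),
    "black box (gradient boosting)": GradientBoostingClassifier(random_state=0),
}

# Compare cross-validated AUC; on many tabular problems the gap is small.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```

The point is not that simple models always win, but that the accuracy gap should be measured rather than assumed before reaching for a black box.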