Tractable Probabilistic Logic Models: A New Deep Explainable Representation

Co-PI: Eric Ragan

Sponsor: DARPA

Start Date: September 6, 2018

End Date: May 31, 2021

Amount: $530,000


Machine learning and artificial intelligence algorithms can assist human decision-making and analysis tasks. While such technology shows promise, willingness to use and rely on intelligent systems depends on whether people can trust and understand the algorithms and how they work. To address this issue, researchers are exploring explainable interfaces that help convey why or how a system produced its output for a given input. This DARPA-funded project is studying new explainable systems for fake news detection and for automatically detecting actions and objects in videos, systems that also help explain to people why the computer produced its results. No algorithm is perfect, and when people can follow the machine's logic, they are better able to identify and fix problems with the algorithm. The research also requires studying how well people understand different visual designs for explaining algorithms. We explore how explanations affect users' perceptions of an algorithm's accuracy and reliability.