I was listening to the always profound and informative Artificial Intelligence Podcast by Lex Fridman.
His guest David Ferrucci answered the usual question, “What is intelligence?”, with a convincing dichotomy. The first type of intelligence is being able to predict the future from a minimal amount of data. Machine Learning already does that, and Ferrucci achieved this brilliantly working on IBM Watson. His second type of intelligence is thought-provoking: it’s the ability to guide other people through the reasoning that led to an insight. It’s about communicating and convincing others of our intelligence. As you can guess, it’s an even more challenging problem. Some call that “Interpretable AI”.
Ferrucci calls the first type “savant”, the second “intelligent”.
An example came to me while listening. I don’t know if it helps others understand the difference between the two, but here it is.
You can teach a dog to find a specific substance through training and rewards. This is what we do with deep learning in particular: huge datasets (smells to learn) and rewards (a treat). In that regard the dog is smart; it’s savant.
However, while the dog is using that ability, while it is sniffing around, it is not very good at explaining why a particular path or place is important. In that sense the dog is not intelligent. It is not intelligible.
This is my first time sharing thoughts on AI. If you don’t listen to it yet, I really encourage you to check out the AI podcast.