Human beings across time have shared one important characteristic: they use tools to improve what they can achieve.
AI can be one such tool, and it can work well, provided we remember it is a tool. As a tool, it must be put in the hands of a human who can use it appropriately and intentionally to achieve their goals.
Humans and Tools
But what is it with this fascination humans have with tools?
We are always looking for ways to do something better, by adopting means that amplify our energies and our skills. The nature of the problems we deal with, and of the tools we use to solve them, has changed over time. Our ancestors were very keen on opening coconuts and found the appropriate rocks to help with that. We have since polished up and now sit in offices trying to crack open “conceptual nuts”. And what are the tools we use for that? They vary. We take a notepad and do some sketches, we work on the whiteboard, and, more and more, we rely on the support of software. Now, software is a tool I particularly love. I am a technician at heart and I have spent most of my life, since childhood, marvelling at software. I understand this is not the case for everyone, but bear with me while we see why software is a special tool.
First of all, is software a tool for the mind? I think so, because we can use it to support our thinking process. It helps us by doing calculations, by helping us visualize data, by letting us manipulate information at a scale we could never manage by hand. The fact is that we have ended up becoming cyborgs without even realizing it: many of the activities we perform on a daily basis are supported by software, which becomes an extension of ourselves. Try staying away from screens for a couple of working days and let me know if you feel whole, able to perform at your typical level of productivity.
Knowledge Tools
So, sure, software is one of our mental tools, but what makes it special? I think it is the fact that it packages knowledge. The way we typically package that knowledge is by having someone with the knowledge we want to capture speak to a professional “knowledge-to-software packager”, also known as a developer. Suppose you want to write software to calculate taxes: you need to put a tax expert and a developer in a room and have them talk before the software can be created. There are alternatives, such as Domain-Specific Languages: high-level languages that experts of a certain field (e.g., tax experts or medical experts) can use directly, without the supervision of developers. They work, and work well, but only for representing knowledge that the experts are able to express, to explain, to formalize. That is true for some problems; it is not true for others.
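To make this concrete, here is a deliberately tiny sketch of what packaged knowledge looks like when the expert can state the rules explicitly. The brackets and rates below are invented purely for illustration, not real tax law:

```python
# A toy illustration of knowledge packaged as explicit rules: a tax expert
# explains the brackets, a developer encodes them. The numbers are made up.

TAX_BRACKETS = [  # (upper limit of bracket, rate)
    (15_000, 0.10),
    (50_000, 0.25),
    (float("inf"), 0.40),
]

def income_tax(income: float) -> float:
    """Apply each bracket's rate to the slice of income that falls inside it."""
    tax, lower = 0.0, 0.0
    for upper, rate in TAX_BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

print(income_tax(60_000))  # 15000*0.10 + 35000*0.25 + 10000*0.40 = 14250.0
```

Every number and every step in that function came out of the expert's head; the developer only transcribed it.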
Some problems, though, are so complex that explaining them clearly is simply not possible. Because of this, that knowledge cannot be captured in software using the traditional, algorithmic approach. What can we do then?
AI as a Tool
We can use AI. AI can learn by watching us do our thing over and over. Imagine a great painter giving us millions of photos of panoramas, together with the corresponding paintings he created while looking at those panoramas. With that information, an AI system could eventually learn how to produce a painting, in the exact style of that painter, given a photo of a panorama. That is: the skills of a great human being captured in software, by the magic of AI.
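Here is the same mechanism in miniature, as a toy sketch with made-up numbers: instead of photos and paintings we use single brightness values, and the painter's “style” is just a consistent shift, but the point is the same. The system infers the transformation from input/output pairs alone, without anyone ever writing the rule down.

```python
import numpy as np

# Toy stand-in for "photo -> painting" pairs: each photo is reduced to one
# number (its brightness) and the painter consistently darkens and warms it.
# The rule below only exists to generate example data; the learner never sees it.
rng = np.random.default_rng(0)
photos = rng.uniform(0, 1, size=200)
paintings = 0.7 * photos + 0.2 + rng.normal(0, 0.01, size=200)  # the painter's "style"

# Fit a line to the pairs: this is learning the style from examples alone.
A = np.vstack([photos, np.ones_like(photos)]).T
(slope, offset), *_ = np.linalg.lstsq(A, paintings, rcond=None)

new_photo = 0.5
print(f"predicted painting value: {slope * new_photo + offset:.2f}")  # ~0.55
```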
It sounds wonderful, and we have all grown up with our favorite wizard in mind: be it Merlin, Gandalf, or (sigh) Harry Potter.
One thing that abundant reading of fantasy literature taught me is that magic is cool, yes, but reliable? No, not at all. It tends to misbehave at the worst possible moment, and even the experts seem to have trouble figuring out what is going on when that happens. Magic is difficult to tame.
What can go wrong when AI is treated as Magic?
- Some results you get are… surprising
- You get some good results, but you are not sure why, so you do not feel like using AI for anything too important
- You are not able to verify a result provided by AI
- You would like to evolve your AI system with the things you learn over time, refining it as you go, but you are not sure how
- You would like to be able to monitor your system, to ensure it works as expected. If something goes wrong, you want at least to be able to realize that it is happening!
What we do at ClearBox AI is, essentially, give you tools to tame the magic, so that you are put in control. I know this means renouncing some of the fascinating unpredictability of magic but, all things considered, the advantages should compensate for that loss.
Making AI less magic
What does controlling the magic of AI mean, concretely?
- It means getting an explanation of why the AI system made a suggestion. I am not talking about cryptic formulas involving quantiles; I am talking about something we can all understand
- It means being able to have the system continuously learn from new cases, improving over time
- It means being able to monitor how the system is performing (see the sketch just below)
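Monitoring can take many forms; one common and simple check, shown here purely as an illustration rather than as our product's implementation, is to compare the data the model sees in production with the data it was trained on, and raise a flag when the two drift apart. A minimal sketch, with synthetic numbers:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Feature values the model was trained on (reference) vs. values seen recently
# in production. Both are synthetic here, purely to illustrate the check.
training_values = rng.normal(loc=50, scale=10, size=1000)
production_values = rng.normal(loc=58, scale=10, size=300)  # the world has shifted

# Kolmogorov-Smirnov test: are the two samples plausibly from the same distribution?
statistic, p_value = ks_2samp(training_values, production_values)

if p_value < 0.01:
    print(f"Drift alert: production data no longer looks like training data (p={p_value:.4f})")
else:
    print("No significant drift detected")
```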
Beyond the specifics, there is a simple goal we are trying to achieve: make sure that AI stays a tool. A useful, special, perhaps even slightly magic tool, but a tool nevertheless. Something a human can use to achieve their goals. Something they can use confidently. Something humans feel in control of.
You know, it has always been about putting better tools in the hands of human beings and then sitting back to see what incredible things humans can accomplish when they are supported by the right tools.