Tackling the fear of the unknown, with and within AI
Published on April 20, 2020
By Shalini Kurapati

The world is urgently searching for answers to overcome what seems like the most complex challenge our generation has faced in living memory.

Although solutions to mitigate the imminent human suffering have been rolled out with varying success, the path forward after this paradigm-shifting pandemic holds more questions than answers. Whilst even the best experts are scrambling for answers, it is hard not to experience the unsettling fear of the unknown.

Inevitably, during this quest for answers, many have started looking critically at how technological solutions can show the way, with a spotlight on Artificial Intelligence.

Crisis-related AI applications have largely focussed on diagnosis, patient self-triage, drug and vaccine discovery, and disease-spread monitoring and control.

Although the extent to which AI has helped during the current crisis is up for debate, it is becoming clear that AI will play a crucial role in shaping the post-crisis world, well beyond healthcare applications.

Analysts and business consulting firms believe that this crisis will accelerate AI adoption in almost all business sectors and that AI will take centre stage in companies' business and operating models. Governments have already begun using data-driven policy making and will double down on it with the power of AI.

If we are going to depend on AI to this extent to help us tackle the fear of the current and future unknowns, we need to be extremely cognisant of the unknowns surrounding AI.

The more powerful AI technologies, such as deep learning, are black boxes: because of their design and operational complexity, we cannot interpret their decisions. This black-box problem leads to a number of issues when it comes to AI adoption, the most important being trust. Even in companies where AI can boost productivity and profits, one of the biggest obstacles to AI adoption is a lack of trust in its decisions.
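
To make this concrete, here is a minimal sketch of one common way to peek inside a black box: post-hoc explanation, where we only query the trained model's predictions and measure how much each input feature matters. The synthetic dataset, model choice and feature names below are illustrative assumptions, not a prescription.

```python
# Minimal sketch: interrogating a black-box classifier with permutation
# importance. Dataset, model and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real tabular data.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a model we then treat purely as a black box: we only query predictions.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in test accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in range(X.shape[1]):
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Techniques like this do not make the model itself transparent; they only approximate which inputs drive its behaviour, which is part of why trust remains hard to earn.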

Probably the best-known issues attributed to AI ‘black boxes’ are ethics and fairness. There have been several unfortunate examples of bias and discrimination caused by these black-box AI models. The topic of ethics in AI is so complex and delicate that some of the world’s best researchers and practitioners are working across disciplines to come up with much-debated guidelines and best practices. The evolving and complex regulation around AI adds another dimension to the already difficult task of deploying ‘fair’ and explainable AI models.

An equally important and closely related issue is the ‘fair’ use of data. AI is data-intensive, raising many concerns around data governance, management and privacy.

The robustness and security of AI models are another key concern, given the risk of adversarial attacks: carefully crafted manipulations, sketched below, designed to fool or confuse AI models.
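
As a rough illustration of how such an attack works, the sketch below applies the fast gradient sign method to a toy logistic-regression model: each input feature is nudged by a small step in the direction that most increases the model's loss. The weights, input and step size are made-up assumptions; real attacks target real models (images, text, tabular data) in the same spirit.

```python
# Minimal sketch of an adversarial perturbation (fast gradient sign method)
# against a toy logistic-regression model. Weights, input and epsilon are
# illustrative assumptions, not a real deployed system.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(seed=0)
w = rng.normal(size=10)      # assumed "trained" weights
b = 0.1                      # assumed bias
x = rng.normal(size=10)      # a legitimate input
y = 1.0                      # its true label

# Gradient of the cross-entropy loss with respect to the input x.
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: move every feature a small step in the direction that increases
# the loss most, so the change stays small but the prediction degrades.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("score on original input: ", round(float(sigmoid(w @ x + b)), 3))
print("score on perturbed input:", round(float(sigmoid(w @ x_adv + b)), 3))
```

Defences exist, such as adversarial training and input monitoring, but the example shows why robustness cannot be taken for granted.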

On the other hand, AI is a powerful technology with the potential to advance both businesses and society, and it will play an important role in doing so. It gives us unprecedented power to explore endless possibilities by making sense of the enormous amounts of data at our disposal and to inform our decision-making about future pathways. That is exactly why we should think critically about tackling the weaknesses of AI in addition to celebrating its strengths.

If AI is to remain a “force for good” during this crisis and beyond, we should acknowledge the ‘fear of the unknown’ surrounding the technology itself and work towards building and deploying robust and ‘fair’ AI models that we, human beings, can trust and control.

Tags:

blogpost
Dr. Shalini Kurapati is the co-founder and CEO of Clearbox AI. Watch this space for more updates and news on solutions for deploying responsible, robust and trustworthy AI models.