Confronting The Dark Side of Artificial Intelligence

Apr 22, 2022

With Artificial Intelligence creating so much buzz, it is time to take a step back, play devil's advocate and answer an important question: Are we ready to hand over the reins to next-generation machines and robots?



With continuous advancements in technology, Artificial Intelligence (AI) has the power to redefine the boundaries of human imagination. The current focus is largely on automating process-oriented jobs and reducing the time they take. 


Having said that, are we ready to hand over the reins to next-generation machines and robots? Given the ongoing buzz around the future scope of AI, it may be time to play devil's advocate and reflect on the risks associated with its use. 


Unlimited Potential

The sheer, near-limitless applicability of AI is an intimidating thought. Given the complexity of human nature and the unique personality of each of the more than 7 billion individuals across the globe, the power to influence the masses rests in the hands of a very few: those who write the algorithms behind AI. 


(Source: Artificial Intelligence, US Department of Health & Human Services)

Some renowned personalities have also voiced these concerns:

  • In March 2018, Elon Musk, founder of SpaceX, pointed out: “AI is far more dangerous than nukes.” 

  • Stephen Hawking also warned: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”

What drives these concerns? Let’s try to understand this at the grassroots level.


Bias and Lack of Transparency

  • Lack of Transparency: AI systems are relatively new and have a reputation for being opaque, behaving like black boxes. Our limited ability to understand these systems inevitably leads to a lack of trust and an inability to test them sufficiently (see the sketch after this list). 

  • Poor Decision Making: Because AI systems cannot apply judgement and “think” at the same level as humans, they may produce unfair results and poor decisions. For a business entity, this can have catastrophic consequences in the form of legal, regulatory and reputational risks. 
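
To make the black-box concern more concrete, here is a minimal sketch in Python. The scoring function, its features and its weights are all invented for illustration; the point is simply that when a model is opaque, one basic way to test it is to nudge one input at a time and observe how the output moves.

```python
# Hypothetical, simplified illustration: probing an opaque scoring function
# by perturbing one feature at a time. Real explainability tooling is far
# more sophisticated; every name and weight here is an assumption.

def black_box_score(applicant: dict) -> float:
    """Stand-in for an opaque model: returns an approval score in [0, 1]."""
    score = 0.3 * (applicant["income"] / 100_000)
    score += 0.4 * (applicant["credit_history_years"] / 30)
    score -= 0.2 * (applicant["existing_debt"] / 50_000)
    return max(0.0, min(1.0, score))

def sensitivity_probe(applicant: dict, feature: str, delta: float) -> float:
    """Return how much the score moves when a single feature is nudged."""
    baseline = black_box_score(applicant)
    perturbed = dict(applicant, **{feature: applicant[feature] + delta})
    return black_box_score(perturbed) - baseline

applicant = {"income": 60_000, "credit_history_years": 8, "existing_debt": 10_000}

for feature, delta in [("income", 5_000), ("credit_history_years", 1), ("existing_debt", 5_000)]:
    change = sensitivity_probe(applicant, feature, delta)
    print(f"Nudging {feature} by {delta:+} moves the score by {change:+.3f}")
```

Probes like this only describe how a model behaves around one input; they do not explain why the model was built that way, which is exactly why opacity erodes trust.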


Discrimination in AI

Despite its vast usage and all-pervasive nature, AI is still at a very nascent stage. This makes it susceptible to discrimination being introduced right in the development process, whether through poor training of the system or through input data that is itself biased. 

  • Example: Consider the processing of loan applications, where the system has in-built characteristics that result in prejudice against a specific class (by gender, race or age). Two equally qualified borrowers could be treated differently because of a bias baked into the algorithm itself, as illustrated in the sketch below. 
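
To illustrate what a simple check for this might look like, here is a minimal sketch in Python with entirely invented data. It computes approval rates per group and their spread (a demographic-parity gap); the group labels, records and threshold are assumptions made purely for illustration, not a description of any real system.

```python
# Hypothetical fairness check: compare loan-approval rates across groups.
# A large gap is a warning sign that the model or its training data may
# encode bias. All data and the 0.2 threshold are invented for illustration.

from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]  # True counts as 1

rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rate per group:", rates)

# Demographic-parity gap: spread between best- and worst-treated groups.
gap = max(rates.values()) - min(rates.values())
print(f"Demographic-parity gap: {gap:.2f}")
if gap > 0.2:
    print("Warning: approval rates differ substantially across groups.")
```

A simple rate comparison like this will not catch every form of bias (equally qualified applicants can still be treated differently within a group), but it shows how disparities can be surfaced and monitored early in development.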


Way Ahead

As we look to the future, AI is transitioning from its ‘pioneering’ era to its ‘application’ era. Our lives will become ever more integrated with this technology, and we will move from smart cities to thinking cities. 

The mammoth task lies in ensuring that the power to transform human processes remains:

  • democratic, 

  • accessible, and 

  • beneficial for all 

Thus, it is imperative to ensure that control does not fall into the hands of a few, that wealth does not determine who benefits from innovation, and that human prejudice is not carried into AI systems. 

Ethical AI practices require greater collaboration among industry, governments and technology experts. While there is no doubt that this technology will transform human lives as never before imagined, proper safeguards against misuse must be put in place at an early stage. 



Author(s):
Karan Pawani