The Rise of the Machines

Advanced robotics is emerging with enormous potential benefits for humans, but are we considering the other side of it? More precisely, there are three sides: one is positive, in terms of regulated efficiency; another is negative, in terms of job losses; and the third could be considered a threat to humankind.

We all grew up watching films like The Terminator and The Matrix, in which robotic machines take over from humans and destroy cities, and the stories they tell are frightening.

Such extreme outcomes of AI are horrifying to human beings. We should be aware that one of the great minds of the tech world, Elon Musk, is not at all optimistic about AI and its future applications. In his words, "AI is the biggest existential threat to mankind."

Can we trust AI?

But before going to such extremes, we need to acknowledge the present reality: robots like the PR2 still cannot open a door on their own. The nightmare scenario may sound scary, but it is still very far away. What is more concerning is the trust that artificial intelligence has garnered in our lives over the past few years. According to an Oracle AI survey report, 64% of people would trust a robot over their human boss.

Society consists of responsible citizens with designated roles to fill: engineers, scientists, doctors, lawyers, judges, and so on, many of whom confidently trust the output of AI systems. This enormous trust is concerning not because AI is wrong especially often, but because of how severe the impact can be when an analysis or identification goes wrong.

Let's take the example of a husky and a wolf, where an AI-driven image classifier was asked to identify the animal shown in a picture. The trained model identified the husky as a wolf.

More interesting still, the researchers then visualized what the algorithm had actually noticed in the picture to match against its training data. As the next image shows, it was the snow in the background that had driven the decision.

The researchers concluded that the data fed to the model was biased by a hidden assumption: most of the wolf pictures were taken in snow. So the AI was effectively predicting based on the presence or absence of snow, not the presence or absence of a wolf. Moreover, the researchers were unaware this was happening until it actually occurred, at which point they reworked the model to fix it.
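The snow shortcut is easy to reproduce with synthetic data. The sketch below is purely illustrative (the labels, probabilities, and sample sizes are invented for this example): it builds a biased "training" set in which 90% of wolf photos have snowy backgrounds, then shows that a classifier looking only at the background scores well on that biased data yet drops to roughly coin-flip accuracy once snow is decorrelated from the animal.

```python
import random

random.seed(0)

def make_example(label, p_snow_given_wolf):
    # One-feature toy photo: does it have a snowy background?
    # In the biased set, wolf photos usually have snow; husky photos rarely do.
    if label == "wolf":
        snow = random.random() < p_snow_given_wolf
    else:
        snow = random.random() < 1 - p_snow_given_wolf
    return {"snow": snow, "label": label}

# Biased "training" photos: 90% of wolf shots are on snow.
biased = [make_example("wolf", 0.9) for _ in range(500)] + \
         [make_example("husky", 0.9) for _ in range(500)]

# Balanced photos: snow appears equally often behind both animals.
balanced = [make_example("wolf", 0.5) for _ in range(500)] + \
           [make_example("husky", 0.5) for _ in range(500)]

def snow_classifier(x):
    # Looks only at the background, never at the animal itself.
    return "wolf" if x["snow"] else "husky"

def accuracy(data):
    return sum(snow_classifier(x) == x["label"] for x in data) / len(data)

print(f"accuracy on biased photos:   {accuracy(biased):.2f}")
print(f"accuracy on balanced photos: {accuracy(balanced):.2f}")
```

High accuracy on the biased data is exactly what made the real bug hard to notice: the model looked like it worked. Only evaluating on data where the spurious cue no longer tracks the label exposes the shortcut.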

This is what makes it so concerning: AI algorithms, including deep learning and machine learning models, often work in ways that even the developers who created them cannot fully explain. Geographically, trust in AI according to a Statista report:

[Chart: trust in AI by region, via Statista. Image Source]

Now for a real-life example: the COMPAS criminal-sentencing algorithm was deployed in 13 states. Its main objective was to estimate the likelihood that a prisoner would commit another crime after release. According to ProPublica's analysis, Black defendants were 77% more likely than white defendants to be flagged as high risk. This is a real-life case in which judges were making decisions about a person's life based on such scores.

The major reason judges chose COMPAS was that it was a model of efficiency: it helped them move through loads of cases much faster in a backlogged criminal justice system. People facing criminal sentencing questioned the merits of COMPAS, and it was scrutinized a bit; nevertheless, its use was ultimately deemed appropriate without its source code ever being examined.

Now consider an example that concerns a much larger audience. What if we told you that banks are also using similar black-box AI algorithms? They can determine whether you get a home loan, or even whether you are called for a job interview.

What can we do?

Are we willing to do something now? That is the central concern: we need to demand standards of accountability, transparency, and recourse in AI systems. ISO, the International Organization for Standardization, formed a committee in 2017 to decide what to do about AI standards, and gave itself five years to come up with one.

The type of control used in a self-driving car relies on machine learning that was confined to research until around 2007. These are new technologies, and they require standards and a proper set of regulations. We need to demand these, along with a healthy level of skepticism.

In AI we have a dispassionate system that does not reflect, cannot reconsider a decision, and offers no recourse; all it effectively says is "the system must continue." We need to understand that we work together with AI, and that our lives increasingly revolve around its outcomes.

Research on the role of AI in achieving the Sustainable Development Goals summarizes its positive and negative impacts across the various SDGs:

[Chart: positive and negative impacts of AI across the SDGs. Image Source]

Something small and seemingly mundane can easily grow into something very dangerous.

One famous example comes from Peter Haas's TEDx talk, where he described driving on a mountain road. It was raining at first, and as he climbed, it began to snow. He started skidding, mud covered his windshield, and he was scared of being hit by another vehicle. He offered this experience as a metaphor for our current situation with AI.

We are all currently driving in the rain with AI, and the snow is coming; it could turn into a blizzard. We need to assess the conditions right now, demand safety standards, and address a major question: how far do we want to go, and how far do we need to go?


The economic incentive to replace human labor with AI and automation could be larger than anything we have witnessed since the Industrial Revolution. Believe it or not, one day AI will staff fast-food counters and replace radiologists in hospitals: AI will detect cancer, and robots will perform surgery.

Skepticism leads us to question this environment and allows humans to stay in the game. The wolf-and-husky example discussed earlier is a good case for transparency: there should always be room for a human to step in and set things right.

To stay safe, we need to lead, not follow. The prime aim should not be to make humans more like robots, but to make robots more like humans, in order to safeguard humankind in the future.