“I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.” —Claude Shannon
John McCarthy, widely regarded as the Father of AI, believed that AI consisted of making a machine that would actually replicate human intelligence. The term AI was coined in 1956, but AI has become far more prominent today owing to increased data volumes, advanced algorithms, and improvements in computing power and storage.
Early AI research in the 1950s explored topics like problem solving and symbolic methods. The US Department of Defense took an interest in this kind of work in the 1960s and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street-mapping projects in the 1970s, and in 2003 it produced intelligent personal assistants, long before Cortana, Siri or Alexa were household names. This early work laid the foundation for the automation and formal reasoning we see in computers today, including decision support systems and smart search systems designed to complement and augment human abilities.
AI is pursued by studying how the human brain thinks, and how humans learn, decide and work while solving a problem, and then using the outcomes of that study as the basis for developing intelligent software and systems.[1]
As humanity nears an age of everything AI, laws will have to govern AI machines, and even justice-delivery systems may come to be governed by them. Before that can happen, we will face the choice of whether to grant such machines the status of a ‘person’. It may fairly be said that, with the passage of time, the test of personality has focused more on behaviour than on appearance, and more on mental traits than on physical ones. Western thinking has come to disfavour tests of personality based on ‘status’ (ownership of property, religious affiliation) or ‘structure’ (gender, race); those tests were invoked, and defeated, almost every time a new group was added to the roster of persons.
The initial question is therefore whether circumstances could exist in which a decision-maker, examining the behaviour of a computer system, could decide that the machine had crossed the threshold of computer personality and become a legal person. It seems nearly inevitable that the issue will arise in our increasingly computerized society; science-fiction literature abounds with proposed factual scenarios in which that legal issue could be presented. The real question is whether current law, or any foreseeable development of it, provides a plausible foundation for a determination of computer personality.
Would the same laws that govern us humans also apply to artificially intelligent machines, or would it be the Three Laws of Robotics as stated by Isaac Asimov:
“1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
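As an aside for the technically minded, the strict precedence built into these laws can be sketched as a prioritized rule check. The sketch below is purely illustrative and assumes a hypothetical Action record with yes/no fields; it is not drawn from any real robotics framework.

```python
# Illustrative sketch only: Asimov's Three Laws as a prioritized rule check.
# The Action class and all of its fields are hypothetical, invented for this example.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # Would carrying out the action injure a human?
    inaction_harms_human: bool  # Would refusing the action let a human come to harm?
    ordered_by_human: bool      # Was the action ordered by a human?
    endangers_robot: bool       # Does the action risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law takes absolute precedence: never injure a human...
    if action.harms_human:
        return False
    # ...and never allow a human to come to harm through inaction.
    if action.inaction_harms_human:
        return True  # The robot must act, overriding the laws below.
    # Second Law: obey human orders (any order reaching this point
    # is already known not to violate the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot
```

The point of the sketch is the ordering: each rule is consulted only once every higher-priority rule has been satisfied, which is exactly the conflict-resolution structure Asimov writes into the laws.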
Who Is To Be Held Responsible?
Artificial intelligence is already making significant inroads into taking over the mundane or time-consuming tasks many humans would rather assign to someone else. The responsibilities and consequences of delegating work to AI vary greatly, though: some autonomous systems recommend music or movies; others recommend sentences in court. Even more advanced AI systems will increasingly control vehicles on crowded streets, raising questions about safety and liability when the inevitable accidents occur.
Philosophical arguments over AI’s existential threats to humanity are often far removed from the reality of actually building and using the technology in question. Despite all that has been written and discussed about deep learning, machine vision and other aspects of artificial intelligence, AI remains at a comparatively early stage in its development. Pundits argue about the risks of autonomous, self-aware robots running amok, while computer scientists puzzle over how to write machine-vision algorithms that can tell the difference between an image of mountains and one of the sea.
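To make concrete what that day-to-day work actually looks like, here is a minimal sketch of a two-class image classifier (mountains versus sea) using the standard scikit-learn and Pillow APIs. The folder names and image format are assumptions made for illustration, and raw-pixel logistic regression is a deliberately naive stand-in for a real vision model.

```python
# A minimal mountains-vs-sea image classifier.
# Assumption for illustration: JPEG images live in ./mountains/ and ./sea/.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def load_folder(folder: str, label: int):
    """Load every JPEG in a folder as a flattened, normalized pixel vector."""
    X, y = [], []
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path).convert("RGB").resize((32, 32))
        X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
        y.append(label)
    return X, y

Xm, ym = load_folder("mountains", 0)
Xs, ys = load_folder("sea", 1)
X, y = np.array(Xm + Xs), np.array(ym + ys)

# Hold out 20% of the images to estimate accuracy on unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Even this toy pipeline hides most of the hard work; in practice, distinguishing such scenes reliably requires learned features rather than raw pixels, which is precisely the unglamorous engineering the paragraph above describes.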
Still, it is obviously important to think through how society will manage AI before it becomes a truly pervasive force in modern life. Researchers, students and alumni at Harvard University’s Kennedy School of Government launched The Future Society in 2014 for that very purpose, with the goal of stimulating international conversation about how to govern emerging technologies, especially AI.[2]
What are the main concerns about AI?
People talk about risk in different ways, but what we are most interested in is the risk involved in surrendering to machines decisions that affect our rights, liberty or freedom of opportunity. We make decisions supported not just by rational thinking but also by values, ethics, morality, empathy and a sense of right and wrong, all things that machines do not inherently have. In addition, people can be held accountable for their decisions in ways that machines cannot.
Time and again, movies have depicted machines taking over the world. These depictions have, at some point or another, made us question, or even believe, that such a thing could really be a possibility: a dystopian world led by machines.
In the next article, a continuation of this one, we will discuss the legal status of an artificially intelligent machine.
[1] Artificial Intelligence, tutorialspoint.com (September 29, 2020, 09:11 PM), https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_overview.htm
[2] Artificial Intelligence: Who’s To Blame?, scl.org (September 29, 2020, 09:17 PM), https://www.scl.org/articles/10277-artificial-intelligence-who-s-to-blame