AI: Can A Machine Ever Be Human, Convincingly? 

Feb 16, 2019 by Aradhye Ackshatt

The inclusion of ‘learning abilities’ – long thought unique to humans and a handful of other evolved primates – defines artificial intelligence to a large extent. How a program deals with unfamiliar situations and attempts to solve the problems they pose is key to identifying a stretch of software code as ‘artificially intelligent’.

Artificial Intelligence has made the leap from science fiction to real life in a remarkably short span of time. It was initially envisioned as a panacea for the intricate but repetitive processes that aided scientific research and technological advancement – a role it has fulfilled and, in many instances, surpassed.

Training a program to interpret a variety of sensory inputs, whether in the form of digital or analog data, does not by itself mean that program has ‘intelligence’. Yet this is often the yardstick used to judge software, which is why technologies that were quite revolutionary at their inception are now classified as routine programs: their once groundbreaking tasks have become rudimentary by today’s standards.

A Brief History of AI

Automation has been a pursuit of humanity since classical Greek antiquity. The word ‘automaton’ itself is used by Homer to refer to machines acting according to their own will. There is ample evidence in literature and history that shows how we have striven to create machines that not only look like us, but walk, talk and act like us. The more successful efforts towards such aims are said to fall into the ‘uncanny valley’, the discomfort that results from an almost, but not entirely, accurate depiction of human beings by doppelganger machines.


Alan Turing was instrumental in turning artificial intelligence into a practical field by approaching it in purely mathematical, binary terms. Digitization then became the platform on which expert systems were built: programs that combine a knowledge base of facts and rules with an inference engine that draws conclusions from them. Moore’s Law, which predicted that computing power would keep rising even as component sizes shrank, still holds, albeit to a lesser extent than before.
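
To make the idea of an expert system concrete, here is a minimal sketch in Python of a forward-chaining inference engine: rules fire whenever their conditions are already present in the knowledge base, adding new conclusions until nothing more can be inferred. The facts and rules shown are illustrative assumptions only, not drawn from any particular system.

# A minimal forward-chaining inference engine.
# The facts and rules below are hypothetical examples; a real expert
# system would load a much larger, domain-specific knowledge base.

# Knowledge base: the set of facts currently known to be true.
facts = {"has_fever", "has_cough"}

# Rules: (set of required facts, fact to conclude when they all hold).
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_doctor_visit"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions are known and the
            # conclusion has not been added yet.
            if conditions <= inferred and conclusion not in inferred:
                inferred.add(conclusion)
                changed = True
    return inferred

print(forward_chain(facts, rules))
# {'has_fever', 'has_cough', 'possible_flu', 'recommend_doctor_visit'}

Simple as it is, this captures the architecture the early expert systems relied on: the knowledge lives in data, and the engine that reasons over it stays generic.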

Now, with data surging forth from all sorts of sources, from our handheld devices to astronomical observations and literal rocket science, machines developed specifically to ‘think like a human’ are rapidly being deployed in a variety of fields, from bioengineering to synthetic medicine. Closer to our daily lives, search engines [one (followed by a hundred zeros) in particular, but all of them in general] and flagship smartphones apply everything gleaned from AI research to deliver ‘personalized experiences’ right into our hands!

We Are Already AI-ed, Daily!

In 2014, Stephen Hawking gave a sobering warning on AI: “It [AI] would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

While such a day still seems far off as of now, the quest to replicate human thought patterns and response heuristics continues unabated. Programmers in diverse fields toil away every day at their projects, attempting to reproduce the thought processes that make up the human mind. They have to take many factors into consideration, not the least of which is the ethical complication of ‘fooling’ a human into believing they are conversing – or, at a basic level, interacting – with another person when they are in fact dealing with a machine.


We are already carrying out a great deal of our everyday interactions with artificial intelligence. The extent to which it shapes the technology in the palm of our hands is difficult to identify at the user level. To delve deeper, we have to break down the integral components of interactions between humans and machines – a task easier said than done.

The question I asked at the beginning is hard to answer, because it is rooted in the future. At Cyfuture, we are accustomed to asking questions that require a certain ‘never giving in’ mindset to answer – whether that means solving problems laterally, creating innovative solutions to increase the effectiveness of existing legacy systems, or driving businesses forward.

The Good, Bad and Ugly of Artificial Intelligence

Take the example of Google. It bought DeepMind for a whopping $650 million, beating a rival bid from Facebook. Google has been investing seriously in robotics and machine-learning companies over the last decade, and it even created the Google Brain team to research and study AI.