Why We Need to Fine-Tune Our Definition of Artificial Intelligence

Image source: Singularity Hub
By Singularity Hub

Sophia’s uncanny-valley face, made of Hanson Robotics’ patented Frubber, is rapidly becoming an iconic image in the field of artificial intelligence. She has been interviewed on shows like 60 Minutes, granted Saudi citizenship, and has even appeared before the United Nations. Every media appearance sparks comments about how artificial intelligence is going to completely transform the world. This is pretty good PR for a chatbot in a robot suit.

But that fame also rides the hype around artificial intelligence and, more importantly, people’s uncertainty about what constitutes artificial intelligence, what can feasibly be done with it, and how close various milestones may be.

There are various definitions of artificial intelligence.

For example, there’s the cultural idea (from films like Ex Machina) of a machine with human-level artificial general intelligence. But human-level intelligence or performance is also an important benchmark for those who develop software that aims to mimic narrow aspects of human intelligence, such as medical diagnostics.

The latter software might be referred to as narrow AI, or weak AI. Weak it may be, but it can still disrupt society and the world of work substantially.

Then there’s the philosophical idea, championed by Ray Kurzweil, Nick Bostrom, and others, of a recursively self-improving superintelligent AI that eventually surpasses human intelligence the way we outrank bacteria. Such a scenario would clearly change the world in ways that are difficult to imagine and harder to quantify; weighty tomes are devoted to studying how to navigate the perils, pitfalls, and possibilities of this future. Bostrom’s Superintelligence and Max Tegmark’s Life 3.0 epitomize this type of thinking.

This, more often than not, is the scenario Stephen Hawking and various Silicon Valley luminaries have warned about when they describe AI as an existential risk.

Those working on superintelligence as a hypothetical future may lament for humanity when people take Sophia seriously. Yet without the hype surrounding narrow AI’s achievements in industry, and the immense advances in computational power and algorithmic complexity those achievements have driven, they might not get funding to research AI safety.

Some of those who work on algorithms at the front line find the whole superintelligence debate premature, casting fear and uncertainty over work that has the potential to benefit humanity. Others even call it a dangerous distraction from the very real problems that narrow AI and automation will pose, although few of these critics actually work in the field. But even as they attempt to draw this distinction, surely some of their VC funding and share price rely on the idea that if superintelligent AI is possible, and as world-changing as everyone believes it will be, Google might get there first. Those dreams may also drive people to join them.

Yet the ambiguity is stark. Someone working on, say, MIT’s Intelligence Quest or Google Brain might be attempting to reach AGI by studying human psychology and learning, or animal neuroscience, perhaps by simulating the simple brain of a nematode worm. Another researcher, whom we might consider “narrow” in focus, trains a neural network to diagnose cancer more accurately than any human.

Where should something like Sophia, a chatbot that flatters to deceive as a general intelligence, sit? Its creator says: “As a hard-core transhumanist I see these as somewhat peripheral transitional questions, which will seem interesting only during a relatively short period of time before AGIs become massively superhuman in intelligence and capability. I am more interested in the use of Sophia as a platform for general intelligence R&D.” This illustrates a further source of confusion: people working in the field disagree about the end goal of their work, how close an AGI might be, and even what artificial intelligence is.

Stanford’s Jerry Kaplan is one of those who lay some of the blame at the feet of… Continue reading at the article source below.


Article source: https://singularityhub.com/2018/06/20/why-we-need-to-fine-tune-our-definition-of-artificial-intelligence/ 
