One of the three “godfathers of AI” has said that AI will not take over the world or permanently destroy jobs. Prof Yann LeCun said some experts’ fears of AI posing a threat to humanity were “preposterously ridiculous”. Computers would eventually become more intelligent than humans, but that was many years away, and “if you realize it is not safe you just do not build it,” he said.
A UK government advisor recently told the BBC that some powerful artificial intelligence might need to be banned. Prof LeCun won the 2018 Turing Award with Geoffrey Hinton and Yoshua Bengio for their breakthroughs in AI, and the three became known as “the godfathers of AI”. Prof LeCun now works as the chief AI scientist at Meta, the parent company of Facebook, Instagram and WhatsApp. He disagrees with his fellow godfathers that AI poses a risk to the human race.
“Will AI take over the world? No, this is a projection of human nature on machines,” he said. It would be a huge mistake to keep AI research “under lock and key”, he added. People who worried that AI might pose a risk to humans did so because they could not imagine how it could be made safe, Prof LeCun argued.
“It is as if you asked in 1930 to someone how are you going to make a turbo-jet safe? Turbo-jets were not invented yet in 1930, same as human-level AI has not been invented yet.” Turbo-jets were eventually made “incredibly reliable and safe”, he said, and the same would happen with AI.
Meta has a large AI research program, and producing intelligent systems as capable as humans is one of its goals. As well as research, the company uses AI to help identify harmful social media posts. Prof LeCun was speaking at an event for invited press about his own work on so-called Objective-Driven AI, which aims to produce safe systems that can remember, reason, plan and have common sense, features that popular chatbots like ChatGPT lack.
He said there was “no question” that AI would surpass human intelligence, but researchers were still missing essential concepts needed to reach that level, and those might take years, if not decades, to arrive. When people raise concerns about the human-level or more capable machines that might exist in the future, they are referring to artificial general intelligence (AGI): systems which, like humans, can solve a wide range of problems. There was a fear that once AGI existed, scientists would “get to turn on a super-intelligent system that is going to take over the world within minutes”, he said. “That’s, you know, just preposterously ridiculous.”
In response to a question from BBC News, Prof LeCun said advances would be progressive; at some point there might be an AI as powerful as the brain of a rat. That was not going to take over the world, he argued, and “it is still going to run on a data center somewhere with an off switch”. He added: “And if you realize it is not safe, you just do not build it.”
Source: IBP