Yoshua Bengio (L) and Max Tegmark (R) discuss the development of artificial general intelligence during a live CNBC podcast recording of "Beyond The Valley" in Davos, Switzerland, in January 2025.
CNBC
Artificial general intelligence built as "agents" could prove dangerous because its creators could lose control of the systems, two of the world's most prominent AI scientists told CNBC.
In the latest episode of CNBC's "Beyond The Valley" podcast, released on Tuesday, Max Tegmark, a professor at the Massachusetts Institute of Technology and president of the Future of Life Institute, and Yoshua Bengio, dubbed one of the "godfathers of AI" and a professor at the Université de Montréal, spoke about their concerns regarding artificial general intelligence, or AGI.
Their fears stem from the world's biggest firms now talking about "AI agents" or "agentic AI": systems that companies claim will allow AI chatbots to act like assistants or agents and help with work and daily life. Industry estimates of when AGI will arrive vary.
With that concept comes the idea that AI systems could have some "agency" and thoughts of their own, according to Bengio.
"AI researchers have been inspired by human intelligence to build machine intelligence, and, in humans, there's a mix of both the ability to understand the world, like pure intelligence, and agentic behavior, meaning ... using your knowledge to achieve goals," Bengio told CNBC's "Beyond The Valley."
"Right now, this is how we are building AGI: we are trying to make agents that understand a lot about the world, and then can act accordingly. But this is actually a very dangerous proposition."
Bengio added that pursuing this approach would be like "creating a new species or a new intelligent entity on this planet" without "knowing if they're going to behave in ways that agree with our needs."
"So instead, we can consider: what are the scenarios in which things go wrong, and they all rely on agency? In other words, it is because the AI has its own goals that we could be in trouble."
The drive for self-preservation could also kick in as AI becomes even smarter, Bengio said.
"Do we want to be in competition with entities that are smarter than us? It's not a very reassuring gamble, right?"
AI tools key
For the MIT professor, the key lies in so-called "tool AI": systems that are created for a specific, narrowly defined purpose, but that need not be agents.
Tegmark said a tool AI could be a system that tells you how to cure cancer, provided its makers can show that people "will be able to control it."
"I think, on an optimistic note here, we can have almost everything we're excited about with AI ... if we simply insist on having some basic safety standards before people can sell powerful AI systems," Tegmark said.
"They have to demonstrate that we can keep them under control. Then the industry will innovate rapidly to figure out how to do that better."
Tegmark's Future of Life Institute in 2023 called for a pause on developing AI systems that can compete with human-level intelligence. While that has not happened, Tegmark said people are at least talking about the topic, and that now is the time to take action and figure out how to put guardrails in place to control AGI.
"So at least now a lot of people are talking the talk. We have to see if we can get them to walk the walk," Tegmark told CNBC's "Beyond The Valley."
"It's clearly insane for us humans to build something smarter than us before we've figured out how to control it."
Estimates of when AGI will arrive vary widely, driven in part by differing definitions.
OpenAI CEO Sam Altman has said his company knows how to build AGI and that it will arrive sooner than people think, though he downplayed the technology's impact.
"My guess is we will hit AGI sooner than most people in the world think and it will matter much less," Altman said in December.