Since artificial intelligence (AI) came to the forefront of current events with the resounding arrival of the now-ubiquitous ChatGPT, there has been talk (ad nauseam) about both the possibilities and the dangers lurking in the bowels of the most popular technology of the moment. And, as a plethora of science fiction films have foreshadowed over the past few decades, AI could indeed end up rebelling against its creators and condemning them to extinction.
A group of CEOs, researchers and engineers with a focus on AI has issued a succinct but forceful 22-word statement claiming that machines could potentially be lethal to humanity: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This statement, issued by the non-governmental organization Center for AI Safety, is signed by leading figures in the field of AI such as Demis Hassabis, CEO of Google DeepMind; Sam Altman, CEO of OpenAI; and Geoffrey Hinton and Yoshua Bengio (two of the three researchers awarded the prestigious Turing Award in 2018 for their work in the field of AI).
This is not the first time a group of experts has sounded the alarm about the potential risks of AI. Last March, an open letter was published advocating a six-month pause in the development of AI. That letter was, however, met with criticism: some experts considered that it exaggerated the risks AI could pose, while others agreed the technology harbors real dangers but disagreed with the remedy proposed by the signatories.
The Center for AI Safety has deliberately opted for brevity in its statement
Dan Hendrycks, executive director of the Center for AI Safety, told The New York Times that the brevity of the statement issued today by his organization—which does not suggest specific ways to mitigate the threat posed by AI—is deliberate and is ultimately intended to avoid potential disagreements. “We didn’t want to put a big menu of 30 potential interventions on the table,” Hendrycks says. “When that happens, the message gets diluted.”
Hendrycks refers to this statement as a kind of “coming out” by those figures in the AI field who are genuinely concerned about the potentially damaging development of this technology. “There is a misunderstanding, even within the AI community itself, that there are only a handful of doomsayers,” he stresses. “However, the truth is that in private many people express their concerns about AI,” he adds.
The debate over the dangers of AI is anchored in hypothetical scenarios in which artificial intelligence systems rapidly increase their capabilities and then stop operating safely. Many experts warn that once AI systems reach a certain level of sophistication, it may become impossible to control their actions.
Other experts, however, cast doubt on such dire predictions, pointing to the inability of AI systems to handle even relatively mundane tasks such as driving a car. Despite years of effort and billions of dollars of investment in this area of research, fully autonomous cars are far from becoming a reality. And if AI cannot successfully handle this challenge, it is unlikely that the technology will ever go so far as to match or even surpass humans, skeptics argue.
Yet both the most fatalistic and the most skeptical agree that AI has a whole host of threats up its sleeve, from mass surveillance to disinformation.