Shafaq News/ Artificial intelligence (AI) experts issued a warning against the "acceleration" of AI development, saying it carries potential risks comparable to the "threat of terrorism."

During a congressional hearing, they expressed concerns that uncontrolled advancement in AI could lead to catastrophic consequences, including the potential manufacturing of "biological weapons." In light of this, they called on the United States to take the lead in implementing regulations to govern the use of AI.

In their testimony before Congress, the three AI leaders raised alarm bells about the damage that rapid AI development could inflict in the coming years, likening the risks to those posed by terrorists using technology for malevolent ends.

Yoshua Bengio, a prominent AI professor at the University of Montreal and considered one of the pioneers of modern AI science, emphasized the need for international cooperation to regulate AI, drawing a parallel with nuclear technology. Bengio asserted, "The United States must lead efforts to legislate and govern the use of artificial intelligence while also establishing regulations for nuclear technology on a global scale."

Dario Amodei, CEO of Anthropic, an AI startup, warned of the dangers AI could pose if misused, stating, "The fear of advanced artificial intelligence lies in its potential to produce dangerous viruses and other biological weapons within a short timeframe, perhaps as little as two years."

Stuart Russell, a computer science professor at the University of California, Berkeley, emphasized the complexity of understanding and controlling AI compared to other technologies, underlining the urgency of developing appropriate regulatory frameworks.

The concerns expressed during the congressional hearing highlight growing apprehension that AI could surpass human intelligence and endanger humanity, a prospect that has shifted from science fiction to a real-world possibility.

In recent months, several prominent AI researchers, including Yoshua Bengio, have been vocal in their warnings about the associated risks of AI development. They urge policymakers and governments to acknowledge these threats and prioritize legislation to mitigate potential dangers.

The congressional hearing took place shortly after prominent AI companies such as OpenAI, Google parent Alphabet Inc, and Meta voluntarily committed to implementing safety measures, including watermarking AI-generated content, to enhance the security and responsible use of AI technology.