From disinformation to job loss, what exactly are the dangers posed by AI?
New AI technology has the potential to replace some human workers, including content writers and researchers.

More than 1,000 experts, including technology leaders, scientists, and others working in the field of artificial intelligence, signed an open letter warning that AI technologies pose "profound risks to society and humanity." Elon Musk, the chief executive of Tesla and owner of Twitter, was among those urging AI laboratories to pause development of their most powerful systems for six months so that the risks of the technology could be better understood. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter stated.

The letter was short and its language broad, and it has now gathered more than 27,000 signatories. Some of the people who signed appeared to have an uneasy relationship with AI. Musk is a prime example: he is launching his own AI company and is a major donor to the organization that wrote the letter.

The letter reflected a growing concern among AI experts that the latest systems, notably GPT-4, the technology developed by the San Francisco startup OpenAI, could harm society, and that future systems would be even more dangerous. Some of these risks are already here; some will arrive in the next few months or years; others are purely hypothetical.
Our ability to understand the potential pitfalls of very powerful AI systems is still limited, said Yoshua Bengio, a professor and AI researcher at the University of Montreal. "We need to be extremely careful."
Why are they worried?
Bengio may be the most important signatory of the letter. For the past four decades he has worked with two other academics, Geoffrey Hinton, until recently a researcher at Google, and Yann LeCun, now chief AI scientist at Meta, the owner of Facebook, to develop the technology behind systems such as GPT-4. In 2018 the three researchers received the Turing Award, often referred to as "the Nobel Prize of computing," for their work on neural networks.

A neural network is a mathematical system that learns by analyzing data. Around five years ago, companies like Google, Microsoft, and OpenAI began building neural networks that learned from enormous amounts of digital text; these are known as large language models, or LLMs. By identifying patterns in that text, LLMs learn to generate text of their own, including blog posts, poetry, and computer programs, and they can carry on a full conversation. The technology helps computer programmers, writers, and other workers generate ideas and accomplish tasks more quickly.

But Bengio and others have also warned that LLMs can learn unwanted and unexpected behaviors. These systems can produce untrue, biased, or otherwise toxic information. Systems like GPT-4 can make up facts and misrepresent the truth, a phenomenon known as "hallucination." Companies are working on these problems, but Bengio and other experts worry that as researchers make these systems more powerful, they will introduce new dangers.
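To make the idea concrete, here is a minimal, hypothetical sketch of what "generating text by identifying patterns" looks like in practice. It uses the open-source Hugging Face `transformers` library and the small GPT-2 model purely for illustration; it is not the GPT-4 system discussed in this article.

```python
# Minimal illustration of how a large language model continues a prompt by
# predicting likely next words, based on patterns learned from large amounts
# of text. Uses the small, open-source GPT-2 model via Hugging Face
# `transformers`; illustrative only, not the systems named in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence could change the job market because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model extends the prompt one predicted token at a time.
print(result[0]["generated_text"])
```

The same mechanism that produces fluent continuations like this is what lets larger systems draft blog posts and code, and also what lets them produce confident-sounding statements that are simply wrong.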
Short-term Risk: Misinformation
Because these systems deliver information with what seems like complete confidence, it can be difficult to separate fact from fiction when using them. Experts worry that people will rely on these systems for medical advice, emotional support, and the raw information they use to make decisions. There is no guarantee that these systems will be accurate on any task they are given, said Subbarao Kambhampati, a professor of computer science at Arizona State University. Experts also worry that people will misuse these systems to spread misinformation; because they can converse in a humanlike way, they can be surprisingly convincing. "We now have systems which can interact with us using natural language and we cannot distinguish the real from the fake," Bengio said.
Medium-term Risk: Job loss
Experts worry that AI could kill jobs. Right now, technologies like GPT-4 tend to supplement human workers, but OpenAI acknowledges that they could replace some workers, including people who moderate content on the internet. These technologies cannot yet duplicate the work of doctors, lawyers, or accountants, but they could replace paralegals, personal assistants, and interpreters. A paper written by OpenAI researchers estimated that 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by LLMs, and that 19 percent of workers might see at least 50 percent of their tasks affected. Oren Etzioni, the founding chief executive of the Allen Institute for AI in Seattle, said there are signs that rote tasks will disappear.
Long-term Risk: Loss of control
Some of the people who signed the letter believe artificial intelligence could slip out of our control or even destroy humanity. Many experts, however, say those fears are wildly exaggerated. The letter was written by the Future of Life Institute, an organization devoted to exploring existential risks to humanity. It warns that because AI systems learn unanticipated behavior from the massive amounts of data they analyze, they could cause serious, unexpected problems. Its members worry that as companies plug LLMs into other internet services, the systems could gain unanticipated powers because they would be able to write their own computer code; developers who let powerful AI systems run their own code, the experts say, will be creating new risks.

Anthony Aguirre, a theoretical cosmologist, physicist, and cofounder of the Future of Life Institute, said that if you extrapolate from where we are today to three years in the future, things already look very strange. "If you look at a less likely scenario, in which things really take off and there is no governance, or where these systems are more powerful than expected, then things can get crazy," he said.

Etzioni dismissed talk of existential risk as hypothetical, but he said other risks, most notably disinformation, were not mere speculation. "Now we are facing some real issues," he said. Those problems are real, he added, and demand a responsible response. "They may need regulation and legislation."