Is artificial intelligence getting out of control?

WHAT IS AI?

Artificial intelligence (AI) is developing quickly, from Siri to self-driving cars. While science fiction often depicts AI as robots with human-like traits, AI today encompasses anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Today’s artificial intelligence is properly called narrow AI (also known as weak AI), because it is designed to perform a single, narrow task (e.g., only facial recognition, only internet searches, or only driving a car). In the long run, however, many researchers aim to create general AI (AGI or strong AI). Whereas narrow AI may outperform humans at its particular task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

WHY RESEARCH AI SAFETY?

In the near term, the goal of keeping AI’s effects on society positive motivates research in many fields, from economics and law to technical areas such as verification, validity, security, and control. While a crashed or hacked laptop may be little more than a minor inconvenience, the stakes rise sharply when an AI system controls your car, your airplane, your pacemaker, your automated trading system, or your power grid: it becomes far more important that the system does what you want it to do. Another near-term challenge is preventing a devastating arms race in lethal autonomous weapons.

What would happen in the long run if the quest for powerful AI succeeds and an AI system surpasses humans at all cognitive tasks? As I.J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could therefore undergo recursive self-improvement, triggering an intelligence explosion that leaves human intelligence far behind. By inventing ground-breaking new technologies, powerful AI might help us end war, sickness, and hunger, so its development could be the most important event in human history. Some scientists worry, however, that it might also be the last, unless we learn to make the AI share our aims before it becomes superintelligent.
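To make the idea of recursive self-improvement concrete, here is a minimal toy sketch in Python. It is purely illustrative: the capability scores, growth rates, and number of generations are assumptions chosen for the example, not predictions about real AI systems. The point is only the qualitative contrast between constant improvement and improvement that feeds back on itself.

```python
# Toy illustration of recursive self-improvement (all numbers are arbitrary assumptions).
# A "capability" score improves each generation; in the recursive case the size of each
# improvement grows with the current capability, because a smarter system is assumed to
# be better at the cognitive task of designing its successor.

def fixed_improvement(capability: float, steps: int, gain: float = 0.1) -> list[float]:
    """Capability grows by a constant amount per generation (no feedback)."""
    history = [capability]
    for _ in range(steps):
        capability += gain
        history.append(capability)
    return history

def recursive_improvement(capability: float, steps: int, rate: float = 0.1) -> list[float]:
    """Each generation's gain is proportional to current capability (feedback loop)."""
    history = [capability]
    for _ in range(steps):
        capability += rate * capability  # smarter designer -> bigger next improvement
        history.append(capability)
    return history

if __name__ == "__main__":
    steps = 50
    linear = fixed_improvement(1.0, steps)
    explosive = recursive_improvement(1.0, steps)
    print(f"After {steps} generations: fixed gain -> {linear[-1]:.1f}, "
          f"recursive gain -> {explosive[-1]:.1f}")
    # With these assumed numbers the recursive case grows exponentially
    # (roughly 1.1**50, about 117x), while the fixed case grows linearly (to 6.0).
```

Under these assumed parameters the feedback loop produces exponential rather than linear growth, which is the intuition behind the term “intelligence explosion.”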

Some people doubt that powerful AI will ever be developed, while others are adamant that superintelligent AI is guaranteed to be beneficial. At FLI, we recognize both of these possibilities, as well as the potential for an AI system to cause significant harm, whether inadvertently or deliberately. We believe that research done today will help us better prepare for and prevent such potentially harmful consequences in the future, so that we can enjoy the benefits of AI while avoiding its pitfalls.

WHY IS AI SAFETY NOW OF INTEREST?

Many well-known figures in science and technology, including Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and others, have recently expressed concern about AI’s risks in the media and in open letters, and many leading AI researchers have joined them. Why is the subject suddenly in the headlines?

The quest for powerful AI was long considered science fiction, its eventual success centuries or more away. But thanks to recent breakthroughs, several AI milestones that scientists once thought were decades away have now been reached, leading many of them to take seriously the possibility of superintelligence emerging within our lifetimes. Most AI researchers at the 2015 Puerto Rico Conference predicted that human-level AI would exist by 2060, even though some experts continue to predict that it will take millennia. Since the necessary safety research might itself take decades to complete, it is wise to start it now.

Because AI has the potential to become more intelligent than any human, we have no reliable way to forecast how it will behave. Nor can we draw much inspiration from earlier technological advances, since we have never created anything capable of outwitting us, whether on purpose or by accident. Perhaps the best illustration of what we might face is our own evolution: people now rule the earth, not because we are the biggest, fastest, or strongest, but because we are the smartest. If we are no longer the smartest, are we guaranteed to remain in control?

According to FLI, civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI, FLI believes that the best way to win that race is not to slow the technology down but to accelerate AI safety research.