
Ensuring AI Safety: Insights from the AI Safety Summit

The AI Safety Summit took place in November 2023 and focused on the risks of misuse and loss of control associated with frontier AI models. In 2024, the US and UK forged a new partnership on the science of AI safety.

AI safety research ranges from foundational investigations into the potential impacts of AI to specific applications. On the foundational side, researchers have argued that AI could transform many aspects of society due to its broad applicability, comparing it to electricity and the steam engine.


Why AI Safety Research?


In the near term, the goal of keeping the societal impact of AI beneficial motivates research in many areas, from economics and law to verification, validation, safety, and control of technical systems. If your laptop crashes or is attacked, it’s little more than a minor inconvenience, but if an AI system is controlling your car, plane, pacemaker, automated trading, power grid, or other critical system, it becomes much more important that the AI system does what you want it to do. Another near-term challenge is to prevent a destructive arms race in lethal autonomous weapons.

In the long term, a key question is what will happen if the pursuit of strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As I.J. Good pointed out in 1965, designing better and more intelligent AI systems is itself a cognitive task. Such a system could therefore undergo recursive self-improvement, triggering an intelligence explosion that leaves human intelligence far behind. By inventing revolutionary new technologies, such a superintelligence could help us eradicate war, disease, and poverty, and so the creation of strong AI could be the greatest event in human history. However, some experts have expressed concern that it could also be the last, unless we learn to align the AI's goals with ours before it becomes superintelligent.


How Could AI Be Dangerous?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and there is no reason to expect AI to be intentionally benevolent or malevolent. Instead, when considering how AI could pose a risk, experts think two scenarios are most likely:

  • AI is programmed to do something destructive: Autonomous weapons are AI systems programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Additionally, an AI arms race could inadvertently lead to an AI war, also resulting in mass casualties. To avoid being disabled by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as the levels of AI intelligence and autonomy increase.
  • AI is programmed to do something helpful, but develops a destructive method to achieve its goal: This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If you tell an obedient smart car to take you to the airport as quickly as possible, it might get you there chased by helicopters, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it could wreak havoc on our ecosystem as a side effect and view human attempts to stop it as a threat to its goal.

As these examples show, the concern with advanced AI is not malevolence but competence. A superintelligent AI would be extremely good at achieving its goals, and if those goals are not aligned with ours, we have a problem. You are probably not an evil ant-hater who steps on ants out of malice, but if you are in charge of a hydroelectric green-energy project and there is an ant nest in the area to be flooded, too bad for the ants. The main goal of AI safety research is to ensure that humanity never ends up in the position of those ants.


Why Is There So Much Interest in AI Safety?

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other leading figures in science and technology have recently expressed their concerns about the risks posed by AI in the media and through open letters, which many leading AI researchers have also signed. So why is the subject suddenly making headlines?

The idea that the pursuit of strong AI would eventually succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones that experts considered decades away only five years ago have now been reached, leading many experts to take seriously the possibility of superintelligence in our lifetime. While some experts still predict that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed it would happen before 2060. Since the necessary safety research could take decades to complete, it would be prudent to start now.

Since AI has the potential to become more intelligent than any human, there is no surefire way to predict how it will behave. We cannot use past technological developments as much of a basis, because we have never created anything with the ability to outsmart us, wittingly or unwittingly.

