
What Is AI Safety?

This post is the second in a series on AI Safety. Here I argue for the importance of AI Safety, presenting the arguments I find most compelling. The next post will take the opposing view, presenting arguments against the importance of AI Safety. In a fourth post, I will contrast the two viewpoints, indicate which side I find more compelling, and explain why.

Part 1 presented the problems that AI Safety is working to solve. AI Safety proponents make two claims about these problems.

  1. They are important problems that need to be solved.
  2. They should be solved by a combination of technical research and norm/institution/governance development.

I present arguments supporting these claims using a modified version of my preferred Impact Framework.

Problem Importance

Scale (Is AI Safety a big problem?)

AI Safety’s importance is directly proportional to the importance of AI. If AI never reaches massive scale, AI Safety will not become important. Conversely, if AI reaches the scale of the internet, AI Safety will be very important. Because AI is growing quickly, its scale needs to be considered over several timeframes to account for that rapid growth.

When I think about the current state of AI, I don’t think of it as a technology with massive scale. Global spending on AI is in the tens of billions of dollars. Compare that to the tech sector, which contributes roughly $1.2 trillion to US GDP alone. AI is a percent of a percent of the global economy. Despite its use in many products, it has not reached a level of pervasiveness synonymous with the internet or cell phones. AI is used in applications such as fraud detection, image recognition, and natural language processing, but it hasn’t found a killer app that transforms the world as we know it.
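To make that comparison concrete, here is a quick back-of-envelope check in Python. The dollar figures are my rough order-of-magnitude assumptions (including an assumed global GDP of ~$85 trillion), not sourced data:

```python
# Back-of-envelope scale comparison. All figures are rough
# order-of-magnitude assumptions, not sourced data.
ai_spending = 40e9    # global AI spending: "tens of billions" of dollars
us_tech_gdp = 1.2e12  # tech sector contribution to US GDP (~$1.2 trillion)
global_gdp = 85e12    # assumed rough global GDP

print(f"AI vs. US tech sector: {ai_spending / us_tech_gdp:.2%}")  # ~3.33%
print(f"AI vs. global economy: {ai_spending / global_gdp:.4%}")   # ~0.0471%
```

A few hundredths of a percent of the global economy is, quite literally, a percent of a percent.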

Looking forward, I expect AI to reach impressive scale in the next few years. The AI market is growing at over 40% per year. Companies are hiring Machine Learning Engineers as quickly as they can. AI will find its first killer app in the form of self-driving cars, with many companies projecting releases in the early 2020s. The next 5 to 10 years will see AI used in more areas of our lives, with an impact that resembles the early days of the internet.
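A 40% annual rate compounds faster than intuition suggests. A minimal sketch, assuming the same tens-of-billions starting point as above (both numbers are loose assumptions):

```python
# Compounding an assumed $40B spending base at 40% per year.
base = 40e9    # assumed current global AI spending (USD)
growth = 0.40  # annual growth rate cited above

for years in (5, 10):
    projected = base * (1 + growth) ** years
    print(f"In {years} years: ~${projected / 1e9:,.0f}B")
# In 5 years: ~$215B
# In 10 years: ~$1,157B
```

Under those assumptions, a decade of sustained growth takes AI spending from a rounding error to a trillion-dollar line item.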

AI’s scale becomes most interesting when I think about it in the long term. Instead of predicting the state of the world at a specific time, I consider AI’s impact in the far future (I address timing later in this article). Given enough time, AI will eventually be capable of any task a human is capable of. I doubt intelligence requires a biological substrate. I also doubt humans have reached an “intelligence peak”, where it is impossible to be more intelligent than we are. I, therefore, expect AI to meet and exceed our intelligence. I can’t imagine the world that results, but it is clearly one in which AI is the most important technology ever developed.

I’m drawn to the impact of AI in the far future. I can’t help but fixate on the importance of a technology that commodifies intelligence. Sure, AI has a modest impact today; it affects my day-to-day life only in small ways. Sometime between now and the indeterminate future, however, AI will become the most important technology ever invented. Even the internet will pale in comparison. In a world of commoditized intelligence, the importance of AI Safety can’t be overstated.

Neglectedness (Is there a lack of investment in AI Safety?)

Investment in the various problems in AI Safety is mixed. Some areas of AI Safety receive significant investment while others are overlooked.

Alignment research, for example, is highly neglected, with only 50-100 active researchers. Total spending by organizations working explicitly on AI Safety is in the low tens of millions of dollars per year. That seems far too low compared with the tens of billions spent on general AI R&D, though this count includes only organizations that work explicitly on AI Safety.
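The gap is easier to see as a ratio. Another minimal sketch, assuming $20M per year of explicit safety spending against $40B per year of general AI R&D (both order-of-magnitude guesses from the figures above):

```python
# Neglectedness as a ratio: explicit safety spending vs. general AI R&D.
safety_spending = 20e6  # low tens of millions per year (assumed)
ai_rnd = 40e9           # tens of billions per year (assumed)

ratio = safety_spending / ai_rnd
print(f"Safety is ~{ratio:.3%} of AI R&D spending")               # ~0.050%
print(f"About 1 safety dollar per {1 / ratio:,.0f} R&D dollars")  # per 2,000
```

Roughly one dollar in two thousand, under these assumptions, goes to the organizations working explicitly on safety.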

In contrast, safety areas such as privacy receive significant investment from the broader AI community. A quick search or two shows that, in 2019 alone, at least a few dozen papers were published on privacy in machine learning. Anecdotally, I hear privacy discussed frequently in mainstream AI discussions, such as this machine learning podcast. Companies are naturally incentivized to think about privacy, fairness, transparency, and security because norms and laws already govern those areas in the broader tech ecosystem.

My impression is that some areas of AI Safety are highly neglected: ethics, alignment, policy, and misuse do not receive sufficient attention, while privacy, fairness, transparency, and security naturally receive moderate investment.
