
How Will Safety Change In AI?

Normally, I would look at the impact of previous solutions to a problem and compare them with the proposed solution. In this case, AI is nascent and AI Safety even more so; there is no long history of attempted solutions to compare against. Instead, I look at instructive historical analogies that give insight into the impact AI Safety research might have.

Computer Security

Early operating systems (UNIX, Windows) were not designed with security in mind. The internet suffers from the same problem: it wasn't designed security-first. The result is that cybercrime costs more than $600 billion per year. Computer security didn't have to be this bad.

Research into provably secure operating systems was performed as early as the 1970s. Today, a few small operating systems claim to be provably secure. If the world ran these operating systems and they were even a single order of magnitude more secure, hundreds of billions of dollars would be saved each year. But nearly every computer in the world runs a variant of Windows or UNIX, and switching over billions of computers now would cost far more than dealing with the expense of cybercrime.

The analogy to computer security offers three insights.

First, it is possible to foresee major problems, research them, and solve them before they become widespread. AI Safety can follow the lead of the security researchers of the 1970s and work on problems with AI before they are too large to be solved. Fixing the problems now and building robust AI will save hundreds of billions of dollars in future expenses.

Second, now might be our only opportunity to solve problems with AI. As with computer security, it will be too expensive to replace ubiquitous unsafe systems with safe ones.

Third, research alone is insufficient. Research only matters if it is actually used. Researchers designed provably secure operating systems in the 1970s, but the operating systems that went mainstream did not implement that research. If AI Safety research is not implemented when powerful AI is built, unsafe AI will be widely distributed. Establishing norms and governance is every bit as essential as research and must be a priority of the AI Safety community.

Nuclear Technology

The first nuclear bomb was tested in 1945. The nuclear chain reaction, the key insight that enabled the bomb, was first envisioned in 1933. By 1947, the Cold War had begun. Shortly thereafter, in 1949, the Soviet Union tested its first nuclear bomb and the nuclear arms race was underway.

Imagine a world where, in the years from 1933 to 1945, a group of politicians, scientists, and others had been building norms, institutions, and governance aimed at preventing a nuclear arms race. At the time, this group would have been ridiculed. Had they succeeded, humanity would have avoided the precarious position it was in during the Cold War and is still in today.

The nuclear analogy is instructive. Anyone predicting doom is likely to be shamed as a Luddite and a scaremonger. In 1933, virtually no one could have predicted a nuclear arms race; sixteen years later, it was a reality. Right now, it seems unlikely that an AI arms race will start within a mere sixteen years, but predicting that far into the future is extremely difficult. Today, we would praise anyone who had fought to prevent a nuclear arms race. In the future, we might do the same for those who kept AI safe.

These analogies make me hopeful. Research and norm building can solve problems with AI. From a technical perspective, operating system security is largely a solved problem; likewise, we can solve the technical problems in AI Safety. From a norms and institutions perspective, the risk of nuclear war is significantly lower than it was during the Cold War. We can build on the success of establishing norms for nuclear weapons and create better norms and institutions for governing AI. Working on these problems now will save trillions of dollars and significantly reduce the risk of a catastrophe involving AI. The potential impact is massive.

Learning Value

Starting AI Safety work now will reveal what more can be done. It is the fastest way to discover the value of the proposed solutions. Early work will build expertise in AI Safety and enable those experts to keep up with changes in the broader AI community. It is also the best way to discover alternatives to research and governance. The potential learning value is at least moderately large.

Confidence

My confidence in the conclusions above is low. There is too little AI Safety research to look back on and assess; most of its impact lies in the future. Analogies to other fields are instructive but not conclusive.

My biggest concerns with the conclusions above are:

  1. Mainstream AI researchers have a natural incentive to solve some of these problems. They are likely to work on many of them, such as privacy, without prompting from the AI Safety community. AI researchers do not have "build unsafe AI" as a goal, so it could be that these issues don't require a separate AI Safety community.
  2. I may be overestimating the size of the problems. Is AI actually an extinction level risk? Will AI be as important as the invention of computers/the internet? It’s not clear yet.
  3. Are research and norm building good solutions to the problems? Research is easy to ignore. Norms can be violated. The only sure-fire solution to the problems of AI is to not build AI at all, but that would mean forgoing a massively beneficial technology.

Conclusion

The conclusion I draw from all of these arguments is that society should give AI Safety research at least a moderately high priority. I would summarize my view as follows:

  • AI and AI Safety are going to be very important in the future. They are moderately important now.
  • Alignment, ethics, policy, and misuse do not receive sufficient attention. Privacy, fairness, transparency, and security receive some attention (but maybe not enough).
  • AI Safety research and norm building can probably succeed, but the best way to find out is to work on them.
  • The risk from AI is very uncertain with a large potential downside. We should proceed cautiously and make at least a moderate investment into mitigating the large downside.
  • Starting now is the best way to learn.
  • I am not confident in my conclusions.

This is not a satisfying conclusion. I would prefer to have a well-formulated argument either for or against AI Safety. The most important takeaway is that there is a lot of uncertainty in AI Safety, and proceeding cautiously is the prudent response to that uncertainty.
