I was in the process of scaling down my work at Skype when I stumbled upon a series of essays written by early artificial intelligence researcher Eliezer Yudkowsky, warning about the inherent dangers of AI.
I was instantly convinced by his arguments and felt a combination of intrigue, interest and bewilderment. Why hadn't I figured this out? Why was nobody else talking seriously about this kind of thing? This was clearly a blind spot I had fallen prey to.
It was 2009 and I was looking around for my next project after selling Skype a few years prior. I decided to write to Yudkowsky. We met up, and from there I began thinking about the best way to proceed with this type of research.
By the following year, I had dedicated my time to existential risk mitigation with a focus on AI. I was talking to reporters, giving speeches on the topic and speaking with entrepreneurs, culminating in my 2011 investment in the artificial intelligence company DeepMind.
For decades now, I have served as someone within AI groups who tries to facilitate some kind of dialogue about the risks of this research, first on a personal basis and then through the Future of Life Institute—a nonprofit organization I co-founded that aims to reduce risks to humanity, particularly those from advanced AI.