Roman V. Yampolskiy (Russian: Роман Владимирович Ямпольский) is a computer scientist at the University of Louisville, best known for his work on AI safety and cybersecurity. He is the founder and, as of 2012, director of the Cyber Security Lab in the Department of Computer Engineering and Computer Science at the University of Louisville's Speed School of Engineering. Source: wiki

Steven Bartlett interviews Dr. Roman Yampolskiy on his podcast, The Diary of a CEO, which Spotify Wrapped ranked fifth in its list of the top five most popular podcasts of 2024. The interviews are always long, giving the listener (or viewer, if you watch on YouTube) a good sense of the person being interviewed, but this video is only about 20 minutes.
As I'm collecting my thoughts, character arcs, conflicts, and other pieces of my Something About AI sequel (not yet named; I just call it Book 2), I've been watching a few interviews like this. It's a way for an outsider like me to keep up with what's happening in the AI space. These are some of my thoughts about this particular interview.
Yampolskiy uses a term I hadn't heard before, and now I won't forget it: uncontrolled superintelligence.
1:07 "We are inventing a replacement for the human mind."
I’m intrigued by the idea that never before have humans invented something that can invent things. You rarely see AI referred to in this way. It’s easy to see how, in this context, AI can take over all the major aspects of human civilization.
1:58 “We as humans have this built-in bias about not thinking about really bad outcomes and things we cannot prevent.”
This is something I've marveled at with regard to climate destabilization. In this case it's not just about not wanting to think about the worst possible outcome; it's also about the quest to be first, make the most money, and become the King of the Hill in your field.

That gave me pause. It came to mind many times during the Eco Philosophy course I took earlier this year. I think that philosophy was more about coping than anything else.
4:10 “Without question, there is nothing more important than getting this right.”
So out of all the problems in the world, this is the worst of the worst in the mind of Yampolskiy. I'd guess Hinton and others feel the same way. Is it just normal that the problems people focus on the most become their #1 choice as the thing needing to be fixed? Right or wrong, leave a comment.
At about 5:30 he discusses the dangers of nefarious humans wielding AI as a tool, and that's a huge thing to worry about, but Yampolskiy is most concerned (5:45) about "not the humans who may add additional malevolent payload, but at the end still doesn't control it."
It isn't just individuals and corporations that are seeking to dominate the field. The consensus I've read from many sources, including Yampolskiy, is that "whoever has more advanced AI has more advanced military. No question." (7:46)
7:48 “But the moment you switch to super intelligence, uncontrolled super intelligence, it doesn’t matter who builds it, us or them.”
9:46 "Bartlett: Is it then there is a future where some guy on his laptop is going to be able to create super intelligence without oversight or regulation or employees, etc.? Yampolskiy: Yeah."
I had to pause here to ponder that for a few moments. Damn. That’s scary, isn’t it?
Bartlett asks which potential path to human extinction is the most likely. It surprised me that Yampolskiy's answer was nefarious humans using AI to be very evil; for example, using AI to create a virus that will kill everyone. Seems to me that could happen easily enough without AI, simply from the melting permafrost releasing some pathogen that's tens of thousands of years old.
Then Yampolskiy clarified that an uncontrolled super intelligence will be able to come up with completely novel ways of doing it. (13:17) Yeah. We can’t imagine what we can’t imagine. I’ve been spending hours thinking about what could happen. What is the end of my Book 2 going to be? Man, it’s hard!
14:23 Bartlett: “… you’re telling me that the people that have built that tool don’t actually know what’s going on inside there. Yampolskiy: That’s exactly right.”
The entire answer by Yampolskiy that goes until 15:11 is very revealing.
15:05 “But we still discover new capabilities on old models.”
This is the real crux of it. They continue to learn and humans can’t keep up.
Yampolskiy doesn't have many nice things to say about Sam Altman, but none of it is revelatory. We've read before about how Altman is not exactly honorable.
I see that this interview with Yampolskiy was excerpted from the full video, which you can see here: These are The Only 5 Jobs That will Remain in 2030. In that one, he explains why superintelligence could trigger a global collapse as early as 2027. My favorite part of that interview starts at 56:10, where he talks about us living in a simulation.
Yampolskiy has three children, and when asked for his advice, it is this: live your best life right now, every day. Amen, brother.
Bartlett interviewed Geoffrey Hinton a couple of weeks ago, but I liked Hinton's interview with Jon Stewart better. Notes on that one will be next.