
Episode Transcript: #2345 - Roman Yampolskiy
Well, thank you for doing this. I really appreciate it.
My pleasure. Thank you for inviting me.
This subject of the dangers of AI, it's very interesting, because I get two very different responses from people depending on how financially invested they are in AI.
The people that have AI companies or are part of some sort of AI group all are like, it's going to be a net positive for humanity.
I think overall we're going to have much better lives. It's going to be easier. Things will be cheaper. It'll be easier to get along.
And then I hear people like you and I'm like, why do I believe him?
It's actually not true. All of them are on record saying this is going to kill us.
Whether it's Sam Altman or anyone else, they all at some point were leaders in AI safety work.
They published in AI safety, and their p(doom) levels are insanely high.
Not like mine, but still, a 20 or 30 percent chance that humanity dies is a little too much.
Yeah, that's pretty high. But yours is like 99.9.
It's another way of saying we can't control superintelligence indefinitely. It's impossible.
When did you start working on this?
A long time ago. I finished my PhD in 2008. I did work on online casino security, basically preventing bots.
And at that point, I realized bots are getting much better.
They're going to outcompete us, obviously, in poker, but also in stealing cyber resources.
And from then on, I've been kind of trying to scale it up to the next level of AI.
It's not just that, right? Bots online are also kind of narrating social discourse.