Listen now (44 mins) | I had a lot of fun chatting with Shane Legg, Founder and Chief AGI Scientist at Google DeepMind! We discuss: why he expects AGI around 2028; how to align superhuman models; what new architectures are needed for AGI; and whether DeepMind has sped up capabilities or safety more.
For decades, both science and science fiction have predicted that we won’t be ready for the emergence of AGI/ASI. That still holds true, even as the potential grows. If we’re really lucky, AGI isn’t possible, but I think powerful AI tools like LLMs will end up being just as dangerous through their misuse by power seekers and profiteers. We’ve seen this coming, and even though the people actually working on these systems are talking about the dangers, we’re barreling forward without a care.