We definitely need a series of breakthroughs before I can see any possibility of human consciousness uploads, to say nothing of the resources required to simulate that intelligence. Any simulation of intelligence requires resources; it may be plausible that we can bring those requirements below the cost of keeping a human alive. That being said, I’m not sure it’s the only logical progression of technology.
I’m partial to the concept of artificial realities presented in the “Culture” book series.
In that series, the biological population of the Culture is well educated, truly free, and provided anything they could want by purpose-built, extremely compassionate AI. Simulated worlds are primarily an afterlife or an alternative to the physical world.
They also had artificial intelligence and uploaded biological intelligence interact with the physical world through robotic presences.
There were some interesting concepts that came out of that, like highly religious societies producing horrific “Hell” afterlives once they realized that metaphysical afterlives were not experimentally verifiable.
I had issues with some of the takes of the author, but it was an interesting read.
I expect that uploading human minds is a very tricky problem indeed, and I wouldn’t expect it to happen in the foreseeable future. However, I do think we may be able to create artificial intelligence on the same principles our brains operate on before that. The key part is that I expect this will happen very quickly in terms of cosmic timescales as opposed to human ones. Even if it takes a century or a millennium to do, that’s a blink of an eye in the grand scheme of things.
I found the Culture series fun; a few other examples I could recommend are The Lifecycle of Software Objects by Ted Chiang, Diaspora by Greg Egan, Life Artificial by David A. Eubanks, and Inverted Frontier by Linda Nagata.
That’s true; on a non-human timescale, progress is nearly impossible to predict, especially with novel technology. For example, when space travel was an early concept, we thought travelling the stars was a foregone conclusion. We now know that any exploration on that front will either be locked behind breakthrough science or limited to slow generation ships and robotic exploration.
That a technology capable of producing human-level intelligence, or beyond, will emerge does feel like a certainty, since there is no reason to believe that the process of intelligent thought is limited to a biological substrate. We haven’t discovered any fundamental physical laws that prevent us from doing this. Key issues to solve beyond the hardware problem include alignment, understanding the fundamentals of consciousness and intelligence, understanding types of minds beyond those of humans, and better understanding emergent phenomena. But these areas will be explored in sufficient detail to yield answers in time.
I will have to check those out; I’m definitely interested in picking up some more good books.
I think the alignment question is definitely interesting, since an AI could have very different interests and goals from our own. There was actually a fun article from Ted Chiang on the subject. He points out that corporations can be viewed as a kind of higher-level entity, an emergent phenomenon greater than the sum of its parts. In that sense we can view a corporation as an artificial agent with its own goals, which don’t necessarily align with the goals of humanity.
https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway