I expect that uploading human minds is a very tricky problem indeed, and I wouldn’t expect it to happen in the foreseeable future. However, I do think we may be able to create artificial intelligence based on the same principles our brains operate on before then. The key point is that I expect this to happen very quickly on cosmic timescales, as opposed to human ones. Even if it takes a century or a millennium to do, that’s a blink of an eye in the grand scheme of things.
I found the Culture series fun. A few other books I could recommend would be The Lifecycle of Software Objects by Ted Chiang, Diaspora by Greg Egan, Life Artificial by David A. Eubanks, and Inverted Frontier by Linda Nagata.
That’s true, on a non-human timescale progress is nearly impossible to predict, especially with novel technology. For example, when space travel was an early concept, we thought travelling the stars was a foregone conclusion. We now know that any exploration on that front will either be locked behind breakthrough science or be limited to slow generation ships or robotic exploration.
That a technology capable of producing human-level intelligence, or beyond, is possible does feel like a certainty, since there is no reason to believe that the process of intelligent thought is limited to a biological substrate. We haven’t discovered any fundamental physical laws that prevent us from doing this yet. Beyond the hardware problem, the key issues to solve are alignment, understanding the fundamentals of consciousness and intelligence, understanding types of minds beyond those of humans, and a better understanding of emergent phenomena. But given time, these areas will be explored in sufficient detail to yield an answer.
I will have to read these other books. I’m definitely interested in picking up some more good ones.
I think the alignment question is definitely interesting, since an AI could have very different interests and goals from our own. There was actually a fun article from Ted Chiang on the subject. He points out how corporations can be viewed as a kind of higher-level entity, an emergent phenomenon that’s greater than the sum of its parts. In that sense, we can view a corporation as an artificial agent with its own goals, which don’t necessarily align with the goals of humanity.
https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway