• Shelena · 1 year ago

    I agree we need a definition. But there has always been disagreement about which definition should be used (as is the case with almost anything in most fields of science). Traditionally there have been four types of definitions of (artificial) intelligence; if I remember correctly, they are: thinking like a human, thinking rationally, behaving like a human, and behaving rationally. I remember having to write an essay about this for my studies and ending it by saying that we should not aim to create AI that thinks like a human, because there are more fun ways to create new humans. ;-)

    I think the new LLMs will pass most forms of the Turing test and are thus able to behave like a human. According to Turing, we should therefore assume that they are conscious, just as we do for humans, based on their behaviour. And I think he has a point from a rational point of view, although it seems very counterintuitive to give ChatGPT rights.

    I think the definitions in the category of behaving rationally have always had the largest following, as they allow for rationality that is different from a human's. And then, of course, rationality itself is often ill-defined. I am not sure whether the goalposts have really been moved, as this was the dominant idea for a long time.

    There used to be a lot of discussion about whether we should focus on developing weak AI (narrow: performance on a single task or a few tasks) or strong AI (broad: performance on a wide range of tasks). I think the focus right now is mainly on strong AI, which has been renamed Artificial General Intelligence.

    Scientists, and everyone else, have always been bad at predicting the future. In addition, disagreement about what will be possible, and when, has always been at the center of discussions in the field. However, if you look at the dominant ideas of what AI can do and in what time frame, it is not always the case that researchers underestimate developments. I started studying AI in 2006 (I feel really old now) and, based on my experience, I agree with you that technological developments are often underestimated. However, the impact of AI on society seems to be continuously overestimated.

    I remember that at the beginning of my studies there was a lot of talk about automated reasoning systems being able to do diagnosis better than doctors and therefore replacing them. Doctors would have only a very minor role, because a human would need to take responsibility, but that would be it. When I go to my doctor, that still has not happened. This is just one example. But the benefits and dangers of AI have been discussed since the beginning of the field, and what you see in practice is that the role of AI has grown, but is still much, much smaller than was predicted.

    I think the liquid neural networks are very neat and useful. However, they are still neural networks. It is still an adaptation of the same technology, with the same issues. I mean, you can throw an image recognition system off the rails just by showing it an image with a few specific pixels changed (see the sketch below). The issue is that it is purely pattern-based. These systems lack the basic understanding of concepts that humans have. That type of understanding is closer to what is developed in the field of symbolic AI, which has really fallen out of fashion. However, if we could combine the two, I believe we could really make some new advances: not just adaptations of what we already have, but a new type of system that can really go beyond what LLMs do right now. Attempts to do so have been made, but they have not been very successful. If that happens and the results are as big as I expect, maybe I will start to worry.
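    To make that pixel-level fragility concrete, here is a minimal sketch of the standard fast gradient sign method (FGSM) attack in PyTorch. The toy `model`, `image` and `label` below are placeholders I made up for illustration, not any particular system; a real attack would target a trained classifier.

    ```python
    # Sketch of a fast gradient sign method (FGSM) perturbation.
    # Assumes a differentiable PyTorch classifier; the toy model below
    # is only a stand-in so the example runs end to end.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.01):
        """Return `image` nudged pixel-wise in the direction that raises the loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step each pixel by +/- epsilon along the sign of the loss gradient.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

    if __name__ == "__main__":
        model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy stand-in classifier
        image = torch.rand(1, 3, 32, 32)   # random "image" with pixels in [0, 1]
        label = torch.tensor([3])          # its (made-up) correct class
        adv = fgsm_attack(model, image, label)
        print("max pixel change:", (adv - image).abs().max().item())
    ```

    The change per pixel is tiny (at most epsilon), yet on real trained networks this kind of perturbation is often enough to flip the predicted class, which is exactly the pattern-matching brittleness I mean.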

    As for the rights of AI, I believe that researchers and other developers of AI should be very vocal about this, to make sure the public understands it. That might put pressure on the people in power. It might also help if people experience AI behaviour that suggests consciousness, or even if we let the AI speak for itself.

    We should not just try to control the AI. I mean, if you have a child, you do not teach it to become a good human by controlling it all the time. It will not learn to control itself, and it will likely follow your example of being controlling. We need to be kind to it to teach it kindness, and I believe we need to treat AI the same way. And just as a child that does not develop emotions might behave like a psychopath, an AI without emotions might as well. So we need to find a way to give it emotions too. There has been some work on that as well, but it is very limited.

    I think the focus is still too much on ML alone for AGI to be created.