Via this interview with Rodney Brooks (a GPT skeptic, according to GPT), “Just Calm Down About GPT-4 Already”, I found this old (at least in internet terms; it is pre-plague, from during the first Trump era) blog post, which some of you might find interesting to read.

It points out a few flaws in AI hype reasoning that are relevant to our sneering at the LW AGI hype.

  • swlabr@awful.systems · 6 months ago

    This is a great article, not only as a primer on the ways in which our thinking about the future of technology is flawed, but also as a nuanced approach to speculation that is pessimistic but not doomful (oops, just invented a word). Thanks for finding and posting.

  • locallynonlinear@awful.systems · 6 months ago

    It’s a good interview, and I really like putting the economics in perspective here. If I could pour water on AI hype in a succinct way, I’d say this: capability is, again, not the fundamental issue in nature. Open-system economics is.

    There are no known problems that can’t theoretically be solved, in a sort of pedantic “in a closed system, information always converges” sort of way. And there are numerous great ways of making such convergence efficient with respect to time, including, who knew, associative memory. But what does it mean? This isn’t the story of LLMs or robotics or AI takeoff in general. The real story is the economics of electronics.

    Paradoxically, just as electronics is hitting its stride in terms of economics, so too are the basic infrastructural economics of the entire system becoming strained. For all the exponential growth in one domain, there have been exponential costs in others. Such are ecosystems and open-system dynamics.

    I do think that there is a future of more AI. I do think there is a world of more electronics. But I don’t claim to predict any specifics beyond that. Sitting in the uncertainty of the future is the hardest thing to do, but it’s the most honest.

    • YouKnowWhoTheFuckIAM@awful.systems · 6 months ago

      There are no known problems that can’t theoretically be solved, in a sort of pedantic “in a closed system, information always converges” sort of way

      Perhaps. The problem of human flight was “solved” by the development of large, unwieldy machines driven by (relatively speaking, cf. pigeons) highly inefficient propulsion systems which are very good at covering long distances, oceans, and rough terrain quickly - the aim was Daedalus and Icarus, but aerospace companies are fortunate that the flying machine turned out to have advantages in strictly commercial and military use. It’s completely undecided physically whether there is a solution to the problem of building human-like intelligence which does a comparable job to having sex, even with complete information about the workings of humans.

      • locallynonlinear@awful.systems · 6 months ago

        Yes, and ultimately this question, of what gets built as opposed to what is knowable, is an economics question. The energy gradients available to a bird are qualitatively different from those available to industry, or to individual humans. Of course they are!

        There’s no theoretical limit to how close a universal function approximator can get to a closed-system definition of something. A bird’s flight isn’t magic, or unknowable, or non-reproducible. If it were, we’d have no sense of awe at learning about it, studying it. Imagine if human-like intelligent behavior were completely unknowable. How would we go about teaching things? Communicating at all? Sharing our experiences?
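
        As a concrete aside, here is a minimal sketch of that universal-approximation intuition, in Python with NumPy only: random tanh features plus a least-squares fit of the output weights. The target sin(3x), the feature scale, and the unit counts are arbitrary illustrative choices; the point is just that the fit tightens as the approximator grows.

        ```python
        import numpy as np

        # Random tanh features plus a least-squares fit of the output weights:
        # as the number of hidden units grows, the approximation of the target
        # function tightens. A toy stand-in for a "universal function approximator".
        rng = np.random.default_rng(0)
        x = np.linspace(-np.pi, np.pi, 512).reshape(-1, 1)
        y = np.sin(3 * x).ravel()                          # arbitrary smooth target

        for hidden in (4, 32, 256):
            W = rng.normal(scale=2.0, size=(1, hidden))    # random input weights
            b = rng.uniform(-np.pi, np.pi, size=hidden)    # random biases
            H = np.tanh(x @ W + b)                         # hidden-layer features
            w_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit only the output layer
            err = np.max(np.abs(H @ w_out - y))
            print(f"{hidden:4d} hidden units -> max abs error {err:.4f}")
        ```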

        But in the end, it’s not just the knowledge of a thing that matters. It’s the whole economics of that thing embedded in its environment.

        I guess I violently agree with the observation, but I also take care not to put humanity, or intelligence in a broad sense, in some special magical untouchable place, either. I feel it can be just as reductionist in the end to demand there is no solution as to say that any solution has its trade-offs and costs.

        • YouKnowWhoTheFuckIAM@awful.systems · 6 months ago

          While I agree with you about the economics, I’m trying to point out that physical reality also has constraints other than economic ones, many of them unknown, some of them discovered in the process of development.

          A bird’s flight isn’t magic, or unknowable, or non-reproducible.

          No. But it is unreproducible if you already have arms with shoulders, elbows, hands, and five stubby fingers. Human and bird bodies are sufficiently different that there is no close approximation which will reproduce, for humans, flight as it is found in birds.

          If it were, we’d have no sense of awe at learning about it, studying it. Imagine if human-like intelligent behavior were completely unknowable. How would we go about teaching things? Communicating at all? Sharing our experiences?

          To me, this is a series of non sequiturs. It’s obvious that you can have awe for something without having a genuine understanding of it, but that’s beside the point. Similarly, the kind of knowledge required for humans to communicate with one another isn’t relevant; what we want to know is the kind of knowledge which goes into the physical task of making artificial humans. And you ride roughshod over one of the most interesting aspects of the human experience: human communication and mutual understanding are possible across vast gulfs of the unknown, which is itself rather beautiful.

          But again I can’t work out what makes that particularly relevant. I think there’s a clue here though:

          …but I also take care not to put humanity, or intelligence in a broad sense, in some special magical untouchable place, either.

          Right, but this would be a common (and mistaken) move some people make which I’m not making, and which I have no desire to make. You’re replying here to people who affirm either an implicit or explicit dualism about human consciousness, and say that the answers to some questions are just out of reach forever. I’m not one of those people, and I’m referring specifically to the words I used to make the point that I made, namely that there exist real physical constraints repeatedly approached and arrived at in the history of technology which demonstrate that not every problem has an ideal solution (and I refer you back to my earlier point about aircraft to show how that cashes out in practice).

          • locallynonlinear@awful.systems · 6 months ago

            For what it’s worth then, I don’t think we’re in disagreement, so I just want to clarify a couple of things.

            When I say open-system economics, I mean from an ecological point of view, not just the pay-dollars-for-product point of view. Strictly speaking, there is some theoretical price, and a process, however gruesome, that could force a human into the embodiment of a bird. But from an ecosystems point of view, it raises the obvious question: why? Maybe there is an answer to why that would happen, but it’s not a question of knowledge of a thing, or even of the process of doing it; it’s the economic question of the whole.

            The same thing applies to human intelligence, however we plan to define it. Nature is already full of systems that have memory, that can abstract and reason, that can use tools, that are social, that are robust in the face of novel environments. We are unique, but not due to any particular capability; we’re unique because of the economics and our relationship with all the other things we depend upon. I think that’s awesome!

            I only made my comment to caution, though, because yes, I do think that overall people still put humanity and our intelligence on a pedestal, and I think that plays to rationalist hands. I love being human and the human experience. I also love being alive, and part of nature, and the experience of the ecosystem as a whole. From that perspective, it would be hard for me to believe that any particular part of human intelligence can’t be reproduced with technology, because to me it’s already abundant in nature. The question for me, and for our ecosystem at large, is: when it does occur, what’s the cost? What role will it have? What regulations does it warrant? What other behaviors will it exhibit? And also, I’m ok not being in control of those answers. I can just live with a certain degree of uncertainty.