For example if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it with?

  • Borg286@kbin.social · 1 year ago

    AI doesn’t really exist yet. The media, back in 1898, called Tesla’s radio-controlled boat artificial intelligence, and did so again in 1970 when John Conway invented the Game of Life. But even now, nothing we’ve made can genuinely make decisions. ChatGPT, the smartest system out there, is really just a versatile prediction engine.

    Imagine I said, “once upon a” and asked you to come up with the next word; you’d say “time,” because you’ve heard that phrase hundreds of times. If I then asked you for the next word, and the next, you might start telling me about a princess locked in a tall tower protected by a dragon. These are all stereotypical elements of a “once upon a time” story. Nothing creative, just typical. ChatGPT has simply read far more than you or I ever could, so it is very good at knowing stereotypical stories and mixing them together. There is no “what is best for humanity,” only “once upon a time…” made-up stories.
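
    The prediction idea described above can be sketched as a toy n-gram model (this is an illustration of next-word prediction in general, not how ChatGPT is actually implemented; the corpus is made-up data):

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for training text.
corpus = (
    "once upon a time a princess lived in a tall tower . "
    "once upon a time a dragon guarded a tower . "
    "once upon a midnight dreary ."
).split()

# Count which word follows each two-word context (a trigram model).
counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][c] += 1

def predict(a, b):
    """Return the continuation seen most often after the context (a, b)."""
    return counts[(a, b)].most_common(1)[0][0]

print(predict("upon", "a"))  # "time" — seen twice, vs "midnight" once
```

    The model never invents anything; it just replays whatever continuation was most typical in the text it counted, which is the point being made above.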

    • RupeThereItIs@kbin.social · 1 year ago

      What you’re saying doesn’t exist is an Artificial General Intelligence, something approaching the conscious human mind. You’re right that that doesn’t exist.

      AI doesn’t just mean that though.

      What we’re dealing with right now is the computer equivalent of growing mouse brain cells in a petri dish, plugging them into inputs and outputs & getting them to do useful things for us.

      The way you describe ChatGPT not being creative is also, theoretically, how our own brains work in the creative process. If you study story structure & mythology, you’d find that ALL successful stories boil down to a very minimalist set of archetypes & types of conflict.

      • Kichae@kbin.social · edited · 1 year ago

        What we’re dealing with is randomly choosing options from a weighted distribution. The only thing intelligent about that is what you’ve chosen as the data set to generate that distribution.
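
        “Randomly choosing options from a weighted distribution” can be sketched in a few lines; the tokens and weights here are made-up numbers, not a real model’s output:

```python
import random

# Hypothetical next-token distribution a model might assign after "once upon a".
tokens  = ["time", "midnight", "mattress"]
weights = [0.90, 0.09, 0.01]

random.seed(0)  # fixed seed so the sketch is reproducible
sample = random.choices(tokens, weights=weights, k=10)
print(sample)  # mostly "time", with the rarer options appearing occasionally
```

        All of the “intelligence” in this sketch lives in whoever chose the weights, which is the point being made above.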

        And that intelligence lies outside of the machine.

        There’s really no need to buy into tech bros’ delusions of grandeur about this stuff.

  • FerrahWolfeh@lemmy.ml · 1 year ago

    It really doesn’t. In simple terms, the AI avoids certain subjects only because the data used to train it labels them as bad and shows how the model should respond in those scenarios.

      • nLuLukna @sh.itjust.works · 1 year ago

        Well, you do the same, don’t you? You know not to scream loudly in public because the data you received when you were younger tells you that it’s a mistake.

        • TimeSquirrel@kbin.social · 1 year ago

          This is what I find funny about this thread. People are trying so hard to argue that it’s NOT AI by breaking its actions down like this, while forgetting that WE learn the exact same way.

          You could even say that WE aren’t even making conscious decisions. Every decision we make is weighed against past experiences and other stimuli. “Consciousness” is the brain lying to itself to make it seem like it has free will.

          • PetePie@kbin.social · 1 year ago

            I’m perplexed that the majority of programmers on social media share the same opinion about AI, one that is the opposite of what AI researchers, scientists and top AI engineers believe. Not only do they seem to think they know how LLMs think, they also claim to know exactly what consciousness is.

  • Otome-chan@kbin.social · 1 year ago

    AI currently doesn’t “understand” or “know” anything. It’s trained on a collection of text, then predicts and extends the text prompt you give it. It’s very good at doing this. If someone “creates something new,” the trained AI will have no concept of it unless you train a new AI model on text that includes that thing.
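
    The “no concept of new things” point can be illustrated with a toy word-pair model (an illustration only; the training text and the queried words are made up):

```python
from collections import Counter, defaultdict

# Tiny stand-in for a trained model: it only "knows" word pairs it has seen.
training_text = "the cat sat on the mat".split()
counts = defaultdict(Counter)
for a, b in zip(training_text, training_text[1:]):
    counts[a][b] += 1

def predict(word):
    """Most common continuation, or None for anything outside the training data."""
    seen = counts.get(word)
    return seen.most_common(1)[0][0] if seen else None

print(predict("the"))         # a continuation learned from the data
print(predict("blockchain"))  # None: a "new thing" the model was never trained on
```

    Anything absent from the training data simply has no entry in the model, so only retraining on new text can add it.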

    • s804@kbin.social (OP) · 1 year ago

      Oh wow, it is really interesting that new things will be unknown! So basically AI still isn’t intelligent, because it can’t really make choices on its own, only ones based on what it has learned.