• daniyeg [he/him]@hexbear.net · 2 months ago

    This is “GameNGen” (pronounced “game engine”), and is the work of researchers from Google, DeepMind, and Tel Aviv University.

    of course this shit came out of israel lmao.

  • BeamBrain [he/him]@hexbear.net · 2 months ago

    Saw the videos of it in action, and the best way I can describe it is “what it’s like to play Doom in a dream.” The graphics are often fuzzy. The health and ammo counters go weird. Enemies move slowly and fade in and out of existence. Exploded slime barrels respawn when the player’s not looking. Very weird and surreal.

    I could see using this tech’s limitations to its advantage, creating strange and uncanny experiences along the lines of LSD: Dream Emulator, if it weren’t so awful for the environment.

    EDIT: It was posted to r/singularity lmao

      • btfod [he/him, comrade/them]@hexbear.net · 2 months ago

        There has to be a way to pipe image frames into a music viz like Milkdrop. Replace the waveform with the output of the game engine but keep all the trippy distortion effects.
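
        Something like that feels doable outside of Milkdrop itself. A minimal sketch of the feedback-warp idea, assuming the “engine” output has been captured to a video file (the filename and all parameters below are made up): warp the previous composite a little each frame, then blend the new game frame in where the waveform would normally go.

        ```python
        # Hypothetical sketch of a Milkdrop-style feedback warp over game frames.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("gamengen_capture.mp4")  # made-up capture file
        ok, frame = cap.read()
        if not ok:
            raise SystemExit("no frames found")
        composite = frame.astype(np.float32)
        h, w = composite.shape[:2]
        # Slight per-frame rotation and zoom, like a simple Milkdrop warp preset.
        warp = cv2.getRotationMatrix2D((w / 2, h / 2), 1.0, 1.02)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Feedback path: distort the previous composite...
            composite = cv2.warpAffine(composite, warp, (w, h))
            # ...then blend the fresh game frame in, in place of the waveform.
            composite = 0.85 * composite + 0.15 * frame.astype(np.float32)
            cv2.imshow("dream doom", composite.astype(np.uint8))
            if cv2.waitKey(16) & 0xFF == 27:  # Esc to quit
                break

        cap.release()
        cv2.destroyAllWindows()
        ```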

    • UlyssesT [he/him]@hexbear.net · 2 months ago

      EDIT: It was posted to r/singularity lmao

      Those bazinga rubes keep reading the tea leaves, watching the birds in the sky, and throwing the bones waiting for a sign of the imminent coming of the robot god.

      They’ve been doing this at least since their predecessors called themselves “Extropians” in the 90s.

  • yoink [she/her]@hexbear.net · 2 months ago

    i saw this tweet yesterday and as a game dev and designer it legit made me mad

    it’s the same shit AI bros do every time - they have to dumb down the meaning of the thing they’re poorly imitating, because they have to shift the goalposts to pretend that what they do has any legitimacy

    this is in no way a game engine, and trying to pretend it is requires abstracting and dumbing down entire fields of study these nerds have never even dipped their toes into

    • FunkyStuff [he/him]@hexbear.net · 2 months ago

      I think they’re aware of how absurd their claims are, but they think that the most important thing is the “potential” for this technology to generalize and eventually come to achieve what they’re promising now. The issue is they’d essentially need AGI to take it from this to a real game engine, and that’s obviously not happening.

      • UlyssesT [he/him]@hexbear.net · 2 months ago

        The issue is they’d essentially need AGI to take it from this to a real game engine, and that’s obviously not happening.

        They can market it as “one step away from AGI” basically forever, especially with bullshit claims already established like “world simulation.”

      • DamarcusArt@lemmygrad.ml · 2 months ago

        Yep. Like any good grift, it isn’t about what it can actually do, but what it could hypothetically do if enough people “invest” in it.

  • UlyssesT [he/him]@hexbear.net · 2 months ago

    You can have a planet-burning treat printer do this at a massive energy cost, or you can pay workers to do it for cheaper and with a lot less harm done. Which way, techbros?

  • laziestflagellant [they/them]@hexbear.net · 2 months ago

    I think the funniest part of this is that it still needs an extant game to be trained on, and the end result still has no awareness of your surroundings or any way of tracking them.

    You literally already have a game that works, but instead you want to strap two 4090s together to play a worse version of that same game with no level design, where enemies disappear if you turn around fast enough (the ‘engine’ will quite literally forget about them once they’re off screen).
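
    That forgetting falls out of the conditioning window: the model only ever sees the last few frames, so anything off screen longer than that literally cannot influence the next prediction. A toy illustration (the window size and frame representation are invented, and have nothing to do with the real model):

    ```python
    # Toy illustration of why an off-screen enemy vanishes: the predictor
    # conditions only on the last N frames, with no persistent world state.
    from collections import deque

    N = 4  # hypothetical conditioning window, in frames
    history = deque(maxlen=N)  # the ONLY "memory" the model has

    def next_frame(history):
        # Stand-in for the generative step: the prediction can only contain
        # things that appear somewhere in the conditioning window.
        visible = set()
        for frame in history:
            visible |= frame
        return visible

    # Frames are modelled as sets of visible objects.
    history.append({"wall", "imp"})  # enemy on screen
    history.append({"wall"})         # player turns away...
    history.append({"wall"})
    history.append({"wall"})
    history.append({"wall"})         # imp now outside the window entirely

    print(next_frame(history))  # {'wall'} -- the imp is gone for good
    ```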

    • UlyssesT [he/him]@hexbear.net · 2 months ago

      Like so much bazinga “innovation” it does something that was already done before, but worse and more expensive (and more costly to the planet).

  • UlyssesT [he/him]@hexbear.net · 2 months ago

    a type of world simulation

    spray-bottle stop trying to pretend every fucking technotreat is just a step away from “singularity” nerd rapture.

  • axont [she/her, comrade/them]@hexbear.net · 2 months ago

    wow this AI is so great it can very, very poorly imitate images from a 31-year-old video game that’s specifically well known for its versatility and ease of porting

    maybe in a few years AI can catch up to the gaming abilities of a refrigerator

  • varmint [he/him]@hexbear.net · 2 months ago

    The video has the player moving as slowly and carefully as possible while keeping the rooms well framed at all times. In the last second of the video the player looks at a wall and then looks away, and they’ve been transported somewhere entirely different.

    • Rivalarrival@lemmy.today · 2 months ago

      In the last second of the video the player looks at a wall and then looks away, and they’ve been transported somewhere entirely different.

      This is showing us how toddlers see the world. The model currently lacks object permanence. Everything outside its current field of view stops existing. When asked to redraw, it has to start from scratch. Everything in its world is ephemeral, floating around haphazardly. It has no hard ground to fall on and rise up from.

      This model is interacting with its world the way an 8-to-18-month-old toddler would. Instead of pointing it at Doom, I’d like to know what it does with an actual camera.

      Point that generative model back at itself and give it access to the real world against which to compare its predictions.
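
      A rough sketch of what that predict-then-compare loop could look like, with a trivial “the world won’t change” stand-in where the generative model would go (the predictor here is a stub, not any real model):

      ```python
      # Hypothetical predict-then-compare loop against a real camera feed.
      import cv2
      import numpy as np

      def predict_next(frame):
          # Stub predictor: assume nothing changes. A generative model would
          # return its guessed next frame here instead.
          return frame

      cap = cv2.VideoCapture(0)  # default webcam
      ok, frame = cap.read()
      while ok:
          prediction = predict_next(frame)
          ok, frame = cap.read()
          if not ok:
              break
          # The prediction error is exactly the signal a toddler-style learner
          # would get: where did reality disagree with the guess?
          error = np.abs(frame.astype(np.int16) - prediction.astype(np.int16))
          print(f"mean prediction error: {error.mean():.1f}")
      cap.release()
      ```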

  • FortifiedAttack [he/him]@hexbear.net · 2 months ago

    I’m not sure why this person thinks it’s impressive to have an AI accurately predict the layout of one of the most commonly played Doom levels. If I remember correctly, this map even has a demo playing on startup.

    Like, what’s the point of this? Being able to exactly recreate the data it was trained on isn’t even an achievement in ML; it’s just called “overfitting”.
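
    For anyone who hasn’t watched a model do this before, overfitting is trivial to reproduce. A throwaway numpy example (nothing to do with GameNGen itself) where a polynomial “recreates its training data” near-perfectly and is useless everywhere else:

    ```python
    # Classic overfitting demo: a degree-15 polynomial memorizes its 16
    # training points and falls apart on anything it wasn't trained on.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 16)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 16)

    coeffs = np.polyfit(x_train, y_train, deg=15)  # one coefficient per point

    x_test = np.linspace(0.01, 0.99, 16)  # points just off the training grid
    train_err = np.abs(np.polyval(coeffs, x_train) - y_train).mean()
    test_err = np.abs(np.polyval(coeffs, x_test) - np.sin(2 * np.pi * x_test)).mean()

    print(f"train error: {train_err:.2e}")  # ~0: it memorized the data
    print(f"test error:  {test_err:.2e}")   # much larger: nothing general learned
    ```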

  • KnilAdlez [none/use name]@hexbear.net · 2 months ago

    Diffusion models aren’t terribly power-hungry compared to something like ChatGPT*, but it’s a weird and honestly worthless idea in the first place, so please shit on it.

    *Technically you can have a GPT text embedding drive a diffusion model, so they can be just as bad, but that isn’t necessarily always the case.
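
    To make that footnote concrete, here’s a hand-wavy sketch of where the cost piles up when a text embedding drives a diffusion sampler: the text encoder runs once per prompt, but the denoiser runs once per sampling step, conditioned on that same embedding every time. Every function below is a made-up stub, not any real library’s API:

    ```python
    # Made-up stubs sketching text-conditioned diffusion sampling: the text
    # encoder runs once, the (expensive) denoiser runs once per step.
    import numpy as np

    def encode_text(prompt):
        # Stub for a GPT-style text encoder: one forward pass per prompt.
        rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
        return rng.normal(size=512)

    def denoise(x, t, cond):
        # Stub for the diffusion denoiser; placeholder arithmetic stands in
        # for the network call, which is where nearly all the compute goes.
        return 0.98 * x + 0.02 * np.tanh(cond)

    embedding = encode_text("e1m1 but dreamlike")  # 1 encoder pass
    x = np.random.default_rng(0).normal(size=512)  # start from pure noise
    for t in reversed(range(50)):                  # 50 denoiser passes
        x = denoise(x, t, embedding)

    print("denoiser calls dominate: 50 vs. 1 encoder call")
    ```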