• Steve@communick.news · 7 hours ago

    DLSS uses AI tricks to improve framerate.

    Previous versions used AI upscaling and frame generation.

    This version adds AI filters: it changes lighting and texture effects to make the image “better”, instead of just making it faster.
    It’s actively manipulating creative decisions about a game’s style, which naturally upsets many developers and players alike.

    • SCmSTR@lemmy.blahaj.zone · 2 hours ago (edited)

      Yes. Adding on… It renders the game at a much lower resolution (so it runs better and saves resources), then uses psychology tricks in post-processing, overlaying specific little details so it doesn’t look as low-resolution.

      Nslopia decided that since they were already using “Deep Learning” to “Super Sample” (as described above), why not use “Deep Learning” to make the image look “better”, too?

      Putting it overly simply (ELI5)…

      DLSS uses machine learning to do two things:

      1. figure out, in real time, what’s going on on-screen
      2. change it in some way
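      The two stages above can be sketched as toy Python. This is purely illustrative: the real thing is a proprietary neural network, and “measure brightness / correct brightness” here is just a stand-in for “understand the frame / change the frame”.

```python
# Toy two-stage sketch: (1) figure out something about the frame,
# (2) change the frame based on that analysis.
# NOT how DLSS actually works; brightness is a stand-in for
# "understanding what's on screen".

def analyze(frame):
    """Stage 1: measure the average brightness of a frame (values in [0, 1])."""
    pixels = [px for row in frame for px in row]
    return sum(pixels) / len(pixels)

def enhance(frame, brightness, target=0.5):
    """Stage 2: modify the frame based on the analysis (a simple gain here)."""
    gain = target / brightness if brightness else 1.0
    return [[min(px * gain, 1.0) for px in row] for row in frame]

frame = [[0.2, 0.3], [0.1, 0.2]]  # tiny 2x2 "image"
b = analyze(frame)                # average brightness of the frame
out = enhance(frame, b)           # frame scaled toward the target brightness
```

      The point of the split is that stage 2 can be swapped out without touching stage 1, which is exactly the lever being pulled here.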

      All DLSS 5 changes is stage 2: instead of subtly matching existing higher-resolution videogame imagery, it draws on real-life professional photography. It’s an interesting experiment just for fun, but some disrespectful, greedy, clueless simpleton MBA with bad taste obviously saw it and thought “oh yes”.

    • Epzillon@lemmy.world · 5 hours ago (edited)

      Good explanation. For a bit more context:

      AFAIK DLSS was the first “AI technology” to hit the gaming market, debuting with NVIDIA’s RTX 2000 series a few GPU generations back. DLSS grew to consist of two major technologies: upscaling (there from the start) and frame generation (added later with DLSS 3).

      Upscaling uses an AI model to take a lower-resolution frame and upscale it to a higher resolution, thus cutting the cost of rendering the frame in the first place. This improves framerate at the cost of blurriness in fast-moving scenes and some ghosting artefacts. This has been improved across the different DLSS versions, but the issue still remains, although it is far less noticeable.
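      A minimal sketch of the upscaling idea, with nearest-neighbour pixel doubling standing in for the neural upscaler (the real model uses a trained network plus motion vectors from the engine to predict plausible missing detail, rather than duplicating pixels):

```python
def upscale_2x(frame):
    """Double a frame's resolution by duplicating pixels (nearest-neighbour).
    A DLSS-style upscaler would instead infer plausible detail with a
    trained network; this toy version just shows the resolution jump."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                     # duplicate each row
    return out

low = [[1, 2], [3, 4]]   # 2x2 internal render (cheap to produce)
high = upscale_2x(low)   # 4x4 displayed frame
```

      The performance win comes entirely from the first line of the comment: the engine only ever paid for the 2x2 render.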

      Frame generation, on the other hand, generates “fake” frames in between natively rendered frames to improve framerate and produce a more fluid experience. Because the pipeline has to hold back the latest real frame while the in-between frame is produced, it adds latency and therefore input delay, which is not ideal for fast-paced competitive games.
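      The interpolation idea, reduced to a toy blend (real frame generation uses motion vectors and a neural network rather than a plain average, but the latency problem is the same: the in-between frame cannot exist until the next real frame has already been rendered):

```python
def interpolate(frame_a, frame_b, t=0.5):
    """Synthesize an in-between frame as a weighted blend of two real ones.
    frame_b must already be rendered before the blend can happen, which is
    why frame generation adds input delay: the newest real frame is held
    back while the synthesized frame is displayed."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

mid = interpolate([[0.0, 2.0]], [[2.0, 4.0]])  # halfway between two frames
```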

      Both of these technologies affect the visuals of the game somewhat but still retain the original artist’s vision, although they do influence visual quality and bring side effects such as smearing, ghosting, and artefacts.

      The new DLSS 5 debacle is a fresh can of worms. While previous DLSS versions focused on retaining the original image and only improving performance, this version heavily influences the content of whatever is being rendered. The AI now alters the lighting, textures, geometry, etc. in the frame and applies the most glorious AI slop you could imagine, removing any sense of artistic direction and atmosphere from the game.

      NVIDIA is attempting damage control by claiming that developers have full control over what is and isn’t touched by the AI model. But the reality remains that this injects unpredictable AI-generated content into the scene, stripping artistic intent and feeling from games.

      But we need not worry: NVIDIA CEO Jensen Huang himself has told us that “we are wrong” and that this is simply amazing, so as every good citizen under a capitalist technocratic oligarchy, I believe we should just bow down and praise DLSS 5 as the godly deity it is. /s