  • 2 Posts
  • 55 Comments

Joined 3 years ago
Cake day: June 15th, 2023

  • Yeah, probably the main reason it’s getting the little bit of praise that it does is that they’re showing it off on games with fairly flat-looking skin shaders. Unfortunately, a problem with this sort of thing is that getting that “2023” image is the result of giving a whole team a huge amount of time to model one man’s face. If you’re Bethesda and you just want to get NPCs into Starfield, it would be a similar amount of work. A bit less, since the first people already gave a talk on it, but still much more work than just getting a diffuse BRDF with some subsurface scattering and calling it good. But you also need a process that can be applied to every single NPC…

    And looking at Striking Distance Studios, the company where that 2023 image is from:

    In February 2025, it was reported that most of the studio’s developers had been laid off.

    Yeah, I think it’s safe to say that the work those people put in will never be directly reused.

    Another reason the DLSS version looks a bit more realistic there is the specular highlights on the eyes, for example. They probably aren’t reflecting anything real, or else they would be there in the original. But the AI knows that specular highlights add realism and are plausible in this scene, so it puts them there. That’s something an artist could do if given a specific shot and camera angle, but in the general case they can’t really do that without causing problems.
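To make the catchlight point concrete, here’s a sketch of a Blinn-Phong-style specular term (plain Python, illustrative parameter values, not any particular engine’s shader):

```python
def specular_highlight(n_dot_h, shininess=64.0, intensity=1.0):
    # Blinn-Phong specular: a tight bright dot where the half-vector
    # between light and view directions lines up with the surface
    # normal -- the catchlight on an eye. In a renderer this only
    # shows up where real geometry reflects a real light; an upscaler
    # can paint one in from learned priors even when no light in the
    # scene justifies it.
    return intensity * max(n_dot_h, 0.0) ** shininess

on_axis = specular_highlight(0.99)   # nearly aligned: visibly bright
off_axis = specular_highlight(0.7)   # off-axis: effectively zero
```

The high exponent is what makes the highlight tiny and sharp, which is also why it’s so dependent on the exact camera angle.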





  • Sure, I could definitely see situations where it would be useful, but I’m fairly confident that no current games are doing that. First of all, it is a whole lot easier said than done to get real-world data for that type of thing. Even if you manage to find a dataset with positions of various features across various biomes and train an AI model on it, in 99% of cases it will still take a whole lot more development time and probably be a whole lot less flexible than manually setting up rulesets, blending different noise maps, having artists scatter objects in an area, etc. It will probably also have problems generating unusual terrain types, which is a problem if the game is set in a fantasy world with terrain unlike anything you would find in the real world. So then you’d need artists to come up with a whole lot of data to train the model with, when they could just be making the terrain directly. I’m sure Google DeepMind or Meta AI or some team of university researchers could come up with a way to do AI terrain generation very well, but game studios are not typically connected to those sorts of people, even if they’re technically under the same parent company, like Microsoft or Meta.

    You can get very far with conventional procedural generation techniques: hydraulic erosion, climate simulation, maybe even a model of an ecosystem. And all of those things together would probably still be much more approachable for a game studio than some sort of machine-learning landscape prediction.
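The “blending different noise maps” part can be sketched in a few lines. This is a minimal fractal value-noise heightmap in plain Python (a toy illustration, not any real engine’s generator); rulesets, erosion passes, and artist-placed objects would all run on top of something like it:

```python
import random

def fbm(x, y, octaves=4, seed=0):
    """Fractal sum of cheap hash-based value noise: several noise
    layers at different scales blended together."""
    def lattice(ix, iy, o):
        # Deterministic pseudo-random value in [0, 1) per lattice point.
        return random.Random(hash((ix, iy, o, seed))).random()

    def smooth(px, py, o):
        ix, iy = int(px), int(py)
        fx, fy = px - ix, py - iy
        # Bilinear blend of the four surrounding lattice values.
        v00, v10 = lattice(ix, iy, o), lattice(ix + 1, iy, o)
        v01, v11 = lattice(ix, iy + 1, o), lattice(ix + 1, iy + 1, o)
        top = v00 * (1 - fx) + v10 * fx
        bot = v01 * (1 - fx) + v11 * fx
        return top * (1 - fy) + bot * fy

    height, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for o in range(octaves):
        height += amp * smooth(x * freq, y * freq, o)
        norm += amp
        amp *= 0.5   # each octave contributes half as much...
        freq *= 2.0  # ...at twice the spatial detail
    return height / norm  # normalized to [0, 1]

# A small 16x16 heightmap built from four blended octaves.
heights = [[fbm(x * 0.1, y * 0.1) for x in range(16)] for y in range(16)]
```

The appeal for a studio is exactly that every knob here (octave count, amplitudes, the noise source itself) is directly tweakable, unlike a trained model.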



  • AdrianTheFrog@lemmy.world to Open Source@lemmy.ml...
    4 months ago

    Are there any alternatives that are decently fast for large files? My computer and my phone both get at least 300 Mbps from the router, and I have yet to find a local file transfer application that comes anywhere near that speed for large files (destiny, LocalSend, KDE Connect; I might have tried others, I don’t remember).










  • I don’t really stay on top of my Gmail that often, but my spam folder has basically exactly the same stuff in it that my inbox has: just a bunch of random emails from services I signed up for an account on or bought something from, none of which I particularly care about. There’s not much I can see differentiating what gets marked as spam and what doesn’t, either.



  • I know that camera hardware does not return HDR values. So something in the conversion from/in the sensor (idk exactly how CMOS sensors work) would have to be affected by the white balance for changing it in the camera software to lose significantly more information than changing it after the picture was taken. Unless the conversion from a raw image is also a factor, but raw images aren’t HDR either, so I don’t really see how that could cause a significant difference.

    If the white balance only dims channels and doesn’t brighten them, then it couldn’t possibly clip anything, and it would have the same effect as lowering the exposure in the first place (with the new white balance applied) to avoid a clipped highlight.

    I’m not a photography guy (just a computer graphics guy), so idk what the software usually does (I suspect it would avoid clipping? You could also brighten something with a gamma curve, for example, to prevent clipping…), but I can’t find anything online about sensors having hardware support for white balance adjustment.
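To make the clipping argument concrete, here’s a tiny sketch (plain Python, illustrative numbers, not any real camera pipeline) of white balance modeled as per-channel gains, plus the gamma-curve alternative:

```python
def apply_white_balance(rgb, gains, clip=1.0):
    # White balance as a per-channel multiply, the common model in raw
    # processing. Gains <= 1 can only darken, so nothing new clips;
    # gains > 1 can push a bright channel past the recorded maximum.
    return tuple(min(c * g, clip) for c, g in zip(rgb, gains))

def gamma_brighten(x, gamma=0.7):
    # Brightening with a power curve instead of a multiply: gamma < 1
    # lifts midtones but still maps 1.0 -> 1.0, so it never clips.
    return max(0.0, min(1.0, x)) ** gamma

bright = (0.9, 0.8, 0.7)
# Dimming red to cool the image: nothing clips, detail is preserved.
cooled = apply_white_balance(bright, (0.8, 1.0, 1.0))
# Boosting red to warm it: 0.9 * 1.3 = 1.17 clips to 1.0, detail lost.
warmed = apply_white_balance(bright, (1.3, 1.0, 1.0))
# Gamma lifts a midtone while the top of the range stays put.
lifted = gamma_brighten(0.5)
```

This is why a dim-only white balance behaves like lowering exposure: it’s the same per-channel scale-down, just applied unevenly across channels.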