
  • 1 Post
  • 111 Comments
Joined 3 years ago
Cake day: June 15th, 2023


  • As an amateur computer graphics person, the best way to draw accurate stars is to just pre-render them onto a cubemap. But if you really need that subpixel worth of parallax to be completely accurate for every star, there are a couple of ways I can think of off the top of my head. With any of them you’d want to make sure you only store position, size, and color, since stars are all spheres anyway. With some effort, you can be very flexible about how these are stored (maybe 4 bits of color temperature, 4 bits of size, and 3×32 bits of coordinates):

    • splat each star into the screen texture with atomics
    • some sort of tiled software rasterization thing, like in Gaussian Splatting

    Worse ideas:

    • instanced hardware rasterization
    • ray tracing

    This is not well suited to most of the usual rendering techniques, because most stars are going to be much smaller than a pixel. Ray tracing would mean you only hit each star by chance (or you artificially increase star size and then deal with tons of transparency); hardware rasterization is basically the same, and is additionally inefficient with small triangles. I guess you could live with only hitting stars by chance and throw TAA at it; there are enough stars that it doesn’t matter if you miss some. But that would react badly to parallax, which defeats the purpose of rendering every star in the first place.

    It’s much more efficient to do a manual splatting pass, where for each star you compute which pixel(s) it lands in. You can also group stars together to cull out-of-view stars more efficiently. Subpixel occlusion will be wrong, but that probably doesn’t matter.
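    A minimal CPU sketch of that manual splat, just to show the idea. Everything here (the pinhole camera model, additive blending, the `splat_stars` name) is my own illustration, not from any particular engine; on a GPU the framebuffer write would be an atomic add as mentioned above.

    ```python
    import math

    def splat_stars(stars, width, height, fov_y):
        """Additively splat point stars into a framebuffer.
        Each star is (x, y, z, brightness) in camera space, +z forward."""
        fb = [[0.0] * width for _ in range(height)]
        # Focal length in pixels from the vertical field of view.
        f = (height / 2) / math.tan(fov_y / 2)
        for x, y, z, b in stars:
            if z <= 0:  # behind the camera: cull
                continue
            px = int(width / 2 + f * x / z)
            py = int(height / 2 - f * y / z)
            if 0 <= px < width and 0 <= py < height:
                fb[py][px] += b  # atomic add on a GPU; plain add here
        return fb
    ```

    Grouping stars into chunks and frustum-culling whole chunks before this loop is the culling optimization mentioned above.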

    This is all just for the viewport, though. Presumably there are other objects in the game besides stars, which need to reflect the stars. That becomes an entirely different problem.

    The real answer, though, is that you wouldn’t try to render all of the stars even if you want parallax. Render maybe some of the closer and larger ones as actual geometry, simplify a ton of stuff in the background, render things as volumes or 2D billboards, use a cubemap for the far distance, etc.

    Edit: also, of course, this all presumes you know the position, scale, and temperature of every star
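    For what it’s worth, the 4+4+96-bit layout suggested above could be packed something like this (the field order, byte order, and function names are my own assumptions):

    ```python
    import struct

    def pack_star(temp_idx: int, size_idx: int, pos: tuple) -> bytes:
        """Pack one star: 4-bit color-temperature index and 4-bit size
        index share one byte, followed by three 32-bit floats for
        position — 13 bytes per star total."""
        assert 0 <= temp_idx < 16 and 0 <= size_idx < 16
        header = (temp_idx << 4) | size_idx
        return struct.pack("<Bfff", header, *pos)

    def unpack_star(data: bytes):
        """Inverse of pack_star."""
        header, x, y, z = struct.unpack("<Bfff", data)
        return header >> 4, header & 0xF, (x, y, z)
    ```

    The 4-bit fields would index small lookup tables of actual temperatures and radii, which is where the flexibility comes from.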

    I also like the idea of baking all of the stars into a volume in spherical coordinates, centered around the origin
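    A rough sketch of that spherical bake; the bin counts, the choice to accumulate brightness, and the function name are all arbitrary assumptions:

    ```python
    import math

    def bake_spherical_volume(stars, n_r, n_theta, n_phi, r_max):
        """Accumulate star brightness into an (r, theta, phi) grid
        centered on the origin. stars: iterable of (x, y, z, brightness)."""
        vol = [[[0.0] * n_phi for _ in range(n_theta)] for _ in range(n_r)]
        for x, y, z, b in stars:
            r = math.sqrt(x * x + y * y + z * z)
            if r == 0 or r >= r_max:
                continue
            theta = math.acos(z / r)                # polar angle, [0, pi]
            phi = math.atan2(y, x) % (2 * math.pi)  # azimuth, [0, 2*pi)
            ir = int(r / r_max * n_r)
            it = min(int(theta / math.pi * n_theta), n_theta - 1)
            ip = int(phi / (2 * math.pi) * n_phi)
            vol[ir][it][ip] += b
        return vol
    ```

    Since the camera mostly stays near the origin, the angular bins map almost directly to view directions, which is what makes this layout attractive.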


  • It’s clear that several of the people in charge of the YouTube livestream have no idea how to do it correctly. I think the difference is just effort: viewership was tiny compared to Apollo 11, as was the hype leading up to it. NASA could clearly provide much better footage if even a random youtuber (Everyday Astronaut) can beat them. So that aspect is, as you said, because as a society we don’t really care about the Artemis launch. SpaceX does put a fair amount of effort into their livestreams, and you can easily tell by watching them.

    For the recorded footage: film often has much higher dynamic range than digital cameras and usually looks a whole lot better when recording a launch up close.

    Far shots are limited by atmospheric distortion and by the physical diffraction limit for a given aperture size. Neither of those can change.
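    To put a number on the diffraction part: the Rayleigh criterion says a circular aperture of diameter D can resolve roughly 1.22·λ/D radians, no matter how good the sensor is. A quick illustrative calculation (the lens size and viewing distance here are made up):

    ```python
    def diffraction_limit(wavelength_m, aperture_m, distance_m):
        """Rayleigh criterion: smallest resolvable angular separation
        for a circular aperture, and the feature size that implies
        at a given distance."""
        theta = 1.22 * wavelength_m / aperture_m  # radians
        return theta, theta * distance_m

    # e.g. a 100 mm aperture at 550 nm (green light), rocket 5 km away:
    theta, feature = diffraction_limit(550e-9, 0.100, 5000)
    # theta ≈ 6.7 microradians, so features below ~3.4 cm blur together
    ```

    In practice atmospheric seeing is usually the harsher limit at those distances, but diffraction sets the hard floor.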

    IDK anything about the quality of the original live broadcast of Apollo 11, so I don’t have anything to compare in that regard


  • Yeah, probably the main reason it’s getting the little bit of praise that it does is that they’re showing it off on games with fairly flat-looking skin shaders. Unfortunately, a problem with this sort of thing is that getting that “2023” image required giving a whole team a huge amount of time to model one man’s face. If you’re Bethesda and you just want to get NPCs into Starfield, it would be a similar amount of work. A bit less, since the original team already gave a talk on it, but still much more work than just using a diffuse BRDF with some subsurface scattering and calling it good. And you also need a process that can be applied to every single NPC…

    And looking at Striking Distance Studios, the company where that 2023 image is from:

    In February 2025, it was reported that most of the studio’s developers had been laid off.

    Yeah, I think it’s safe to say that the work those people put in will never be directly reused.

    Another reason the DLSS version looks a bit more realistic there is the specular highlights on the eyes, for example. They probably aren’t reflecting anything real, or else they would be there in the original. But the AI knows that specular highlights add realism and are plausible in this scene, so it puts them there. That’s something an artist could do for a specific shot and camera angle, but in the general case they can’t really do that without causing problems.



  • I think with the straight/gay labels, you’re not going to stop being attracted to someone just because they tell you they’re a guy or a girl. So really there are just some appearances you find attractive and some you don’t. For most people, those line up pretty well with femininity and masculinity, with maybe a few other restrictions on top. Any label is going to be a simplification; you can’t describe with one word the whole range of people you’re attracted to.