Following on from the success of the Steam Deck, Valve is creating its very own ecosystem of products. The Steam Frame, Steam Machine, and Steam Controller are all set to launch in the new year. We’ve tried each of them and here’s what you need to know about each one.

“From the Frame to the Controller to the Machine, we’re a fairly small industrial design team here, and we really made sure it felt like a family of devices, even to the slightest detail,” Clement Gallois, a designer at Valve, tells me during a recent visit to Valve HQ. “How it feels, the buttons, how they react… everything belongs and works together kind of seamlessly.”

For more detail, make sure to check out our in-depth stories linked below:


Steam Frame: Valve’s new wireless VR headset

Steam Machine: Compact living room gaming box

Steam Controller: A controller to replace your mouse


Valve’s official video announcement.


So uh, ahem.

Yes.

Valve can indeed count to three.

  • sp3ctr4l@lemmy.dbzer0.com (OP) · 22 hours ago
    Yes, on the Frame, the VR headset, yes, it uses FEX.

    That’s an x86-to-ARM translation/emulation layer, and then also Proton, the compatibility layer for Windows games.

    Interestingly… it seems there may be, or at least could be in the future, some realtime compute sharing going on if you have both a Machine and a Frame.

    Because… you can Steam Link stream from the Machine to the Frame, or run some games off of the Frame itself, which is roughly a very fancy smartphone, in hardware terms.

    It’s via a dedicated WiFi 7 streaming dongle… and Gamers Nexus just said that’s roughly 10ms of lag, 20ms in the worst case.

    So… you could theoretically make a specific multithreading mode that takes advantage of a Machine + Frame setup.

    No clue if that actually exists or not, but it does at least seem possible.
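    To put rough numbers on that (only the 10/20 ms link figures come from the Gamers Nexus interview; everything else here is just illustrative), the frame budget gets eaten very quickly, so whatever you offload can’t sit on the critical path of the current frame:

```python
# Back-of-the-envelope frame budgets vs. link latency.
# The 10 ms / 20 ms figures are the Gamers Nexus numbers above; the rest is illustrative.

def frame_budget_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

for hz in (60, 90, 120):
    budget = frame_budget_ms(hz)
    for link_ms in (10, 20):
        print(f"{hz:>3} Hz -> {budget:5.1f} ms per frame, "
              f"{link_ms} ms round trip leaves {budget - link_ms:5.1f} ms for actual work")
```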

    • poVoq@slrpnk.net · 1 day ago
      No, they could compile Steam for native ARM and launch the x86 games via FEX from within.

      Remains to be seen which way they go, but I see no big reason why they would not do that.
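      As a sketch of that idea, and purely an assumption about how it would be wired up (the FEXInterpreter call and the game path are hypothetical, not anything Valve has confirmed): a native ARM Steam build would just exec the x86-64 title through FEX’s loader.

```python
# Hypothetical sketch: a native-ARM launcher starting an x86-64 game under FEX.
# "FEXInterpreter" is FEX's user-space loader; the game path is made up.
import subprocess

def launch_x86_game(game_binary: str, *args: str) -> subprocess.Popen:
    # With binfmt_misc registered, exec'ing the x86-64 binary directly also works;
    # calling the loader explicitly just makes the indirection visible.
    return subprocess.Popen(["FEXInterpreter", game_binary, *args])

if __name__ == "__main__":
    launch_x86_game("/path/to/some_x86_64_game").wait()
```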

      • sp3ctr4l@lemmy.dbzer0.com (OP) · 22 hours ago
        I mean, yeah, but I’m thinking of a distributed-compute type of model, like you see in scalable server-rack deployments for what we used to call supercomputers.

        If the latency is 10-ish ms, that’s easily low enough that you could, say, split off a chunk of the total game pipeline, maybe a separated-out physics thread. Run the whole game on the x86 Steam Machine, use Steam Link as the communication layer, and send x86 work to the Frame, which ‘solves’ it via the FEX emulation layer. The Frame doesn’t do any other part of rendering the game; it just accepts player inputs and receives the rendered graphics data.

        Physics runs at only 60fps, the rest of the game runs at 90 or 120 or whatever.

        The Steam Machine is the master/coordinator and the Frame is the subordinate worker: the Frame has various game processes dedicated to its compute hardware, and the Steam Machine is then potentially able to use more total compute, assuming synchronization is stable. That means an overall performance gain, more fps or higher quality settings than using a Steam Machine alone.
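        A toy version of that split, just to make the shape concrete (threads and queues standing in for two devices over a Steam-Link-like transport; none of this reflects a real Valve API):

```python
# Toy sketch of the "Machine coordinates, Frame solves physics" idea.
# Threads + queues stand in for two devices on a Steam-Link-like transport;
# none of this reflects a real Valve API.
import queue, threading, time

LINK_DELAY = 0.010          # ~10 ms one-way, per the Gamers Nexus figure
to_frame, to_machine = queue.Queue(), queue.Queue()

def frame_device():
    """'Frame' side: receives physics jobs, solves them, sends results back."""
    while True:
        step, state = to_frame.get()
        if step is None:
            return
        time.sleep(LINK_DELAY)              # pretend transit time
        to_machine.put((step, state + 1))   # stand-in for a real physics step

def machine_device(physics_hz=60, steps=6):
    """'Machine' side: keeps its own loop running while results trickle back."""
    state = 0
    for step in range(steps):
        to_frame.put((step, state))
        time.sleep(1 / physics_hz)          # render/game loop keeps running here
        step_done, state = to_machine.get() # result arrives a tick later
        print(f"physics step {step_done} applied, state={state}")
    to_frame.put((None, None))              # tell the worker to shut down

threading.Thread(target=frame_device, daemon=True).start()
machine_device()
```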


        They are already kind of doing this via what they are calling Foveated Rendering.

        Basically, the Frame eyetracks you, and it uses that data to prioritize which parts of the overall scene are rendered at what detail level.

        I.e., the edges of your vision don’t actually need the same render resolution as the center, because human eyes literally lose detail away from the center of their field of view.

        So, they already have a built-in system that showcases the Frame and the Machine rapidly exchanging fairly low-level data, as far as a game render pipeline goes.
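        In the abstract, that’s just a falloff function from the gaze point to a per-region render resolution, something like this (the breakpoints are invented, not the Frame’s actual tuning):

```python
# Foveated rendering in miniature: render scale falls off with angular
# distance from the gaze point. Breakpoints are invented.
def render_scale(eccentricity_deg: float) -> float:
    if eccentricity_deg < 10:
        return 1.0    # fovea: full resolution
    if eccentricity_deg < 30:
        return 0.5    # mid-periphery
    return 0.25       # far periphery: the eye can't resolve much here anyway

for angle in (0, 5, 15, 25, 45, 70):
    print(f"{angle:>2} deg from gaze -> {render_scale(angle):.2f}x resolution")
```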

        I’m saying: use whatever that transport layer is, and make a mode where you could potentially shunt more data through it for a distributed, two-computer version of multithreading.

        How is that really different from a game with a multiplayer client/server model?

        Like, all Source games are basically structured as multiplayer games, with the server doing the world and the client doing the player. When you play a single-player Source game, you’re basically just running both the client and the server at the same time on your one machine, without any actual networking.
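        Structurally, that single-player “listen server” setup looks roughly like this (not actual Source code, just the shape of it):

```python
# Sketch of the "single player = client + server in one process" structure.
# Not actual Source engine code, just the shape of it.

class Server:
    """Owns the authoritative world state."""
    def __init__(self):
        self.world_time = 0.0
    def tick(self, dt, commands):
        self.world_time += dt
        # ...apply player commands, run AI, physics, game rules...
        return {"world_time": self.world_time, "acked": len(commands)}

class Client:
    """Owns input sampling, prediction and rendering."""
    def __init__(self):
        self.pending = []
    def sample_input(self):
        self.pending.append({"cmd": "forward"})
        return self.pending
    def render(self, snapshot):
        print(f"render @ t={snapshot['world_time']:.2f}, "
              f"{snapshot['acked']} commands acknowledged")
        self.pending.clear()

server, client = Server(), Client()
for _ in range(3):                       # the "listen server" loop
    cmds = client.sample_input()
    snapshot = server.tick(1 / 60, cmds) # loopback instead of a network hop
    client.render(snapshot)
```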

        • boletus@sh.itjust.works · 2 hours ago
          10ms is way too slow for multithreading latency in games. Even 1ms for part of a frame is too slow. At 90Hz you only have about 11ms per frame.

          Though using your PC as a host for a game server that the Frame connects to would be possible, like a multiplayer game. It could help with offloading some CPU-heavy elements. I suspect, though, that most of the difficulty will be rendering, like on the Quest.

          • sp3ctr4l@lemmy.dbzer0.com (OP) · 1 hour ago
            Well, like the examples I tried to give: it’s fairly common in game dev to set aside a distinct physics thread, or set of threads, that runs at some fixed rate, maybe 30 or 60 fps, to avoid the once very common problem of game physics going bonkers at unexpected hardware speeds.

            It also helps with generally normalizing physics behavior in complex vs. simple scenes, and lets you set solid bounds on the number of dynamic objects possible in an area.
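            That’s the classic fixed-timestep pattern; the usual shape of it is something like this (rates are just examples):

```python
# Classic fixed-timestep loop: physics always advances in constant 1/60 s
# steps no matter how fast the render loop runs. Rates are just examples.
PHYSICS_DT = 1 / 60

def run(frames=10, render_dt=1 / 144):
    accumulator, sim_time = 0.0, 0.0
    for frame in range(frames):
        accumulator += render_dt            # wall time the last frame took
        steps = 0
        while accumulator >= PHYSICS_DT:    # catch physics up in fixed chunks
            sim_time += PHYSICS_DT
            accumulator -= PHYSICS_DT
            steps += 1
        print(f"frame {frame}: {steps} physics step(s), sim_time={sim_time:.3f}s")

run()
```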

            Or for doing things like deep simulation, where you run an approximation, a simplification, of many NPCs, not rendered, but when a player comes into view of one of them, they transition from approximately simulated to fully simulated.

            So, most games just spawn and despawn crowds of people based on more or less ‘where are you’, ‘what time of day is it’, and ‘what NPCs should be here’, and then pick blank templates out of a pool.

            But if you wanted thousands of NPCs doing a daily routine in a city or landscape, potentially reacting to other NPCs they meet… that’s more like deep sim. That’s something you can approximate on a slowed-down timeframe, and thus potentially shunt off to another set of threads/hardware that’s geared more toward regular iterations over a massive dataset than toward hyper-specific, hyper-fresh data.

            It can also be used for something like dynamic, ambient loot, wildlife, or fauna generation/simulation.
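            A toy version of that two-tier idea, just to pin down the shape (the radius and the update contents are invented):

```python
# Toy two-tier NPC simulation: far NPCs get a cheap, coarse update on a slow
# tick; NPCs near the player get promoted to a full per-frame update.
# Distances and tick rates are invented.

FULL_SIM_RADIUS = 50.0

class NPC:
    def __init__(self, name, distance_to_player):
        self.name, self.distance = name, distance_to_player

    def cheap_update(self):     # coarse routine step, run at e.g. 1 Hz
        return f"{self.name}: advance daily routine (approximate)"

    def full_update(self):      # animation, pathfinding, reactions, per frame
        return f"{self.name}: full simulation"

npcs = [NPC("baker", 12.0), NPC("guard", 48.0), NPC("farmer", 300.0)]

for npc in npcs:
    tier = npc.full_update() if npc.distance < FULL_SIM_RADIUS else npc.cheap_update()
    print(tier)
```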

            Like yeah, I agree that there are many use cases where 10ms is still too much latency, but… why not apply ‘foveated rendering’ to something like procedural terrain generation, in chunks/maps/meshes of decreasing size and increasing detail?

            If the player is mostly stationary, not moving rapidly, just slowly build out more detail around them, in a circle.

            If they are moving rapidly, prioritize pre-caching various detail levels of the chunks they are moving toward, and stop the naive, layered detail build-out described above.
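            Something like this, as a sketch of the two policies, ring-based detail when idle and directional pre-caching when moving fast (all thresholds made up):

```python
# Sketch of position/velocity-driven terrain detail: concentric detail rings
# when the player is roughly stationary, directional pre-caching when moving
# fast. All thresholds are made up.

def chunk_priority(chunk_pos, player_pos, player_vel, speed_threshold=5.0):
    dx, dy = chunk_pos[0] - player_pos[0], chunk_pos[1] - player_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    speed = (player_vel[0] ** 2 + player_vel[1] ** 2) ** 0.5

    if speed < speed_threshold:
        # Stationary: detail falls off in rings around the player.
        return 1.0 / (1.0 + dist)
    # Moving fast: weight chunks that lie ahead of the movement direction.
    ahead = (dx * player_vel[0] + dy * player_vel[1]) / (dist * speed + 1e-9)
    return max(ahead, 0.0) / (1.0 + dist)

chunks = [(10, 0), (0, 10), (-10, 0), (40, 5)]
for c in sorted(chunks, key=lambda c: -chunk_priority(c, (0, 0), (8, 0))):
    print(c, round(chunk_priority(c, (0, 0), (8, 0)), 3))
```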

        • entropicdrift@lemmy.sdf.org · 17 hours ago
          “Basically, the Frame eyetracks you, and it uses that data to prioritize which parts of the overall scene are rendered at what detail level.”

          That’s an accurate description of foveated rendering, but what the Frame has is foveated streaming. Foveated rendering needs to be incorporated in the game, typically at the engine level. Foveated streaming can happen at the system-wide level because it’s not reducing the rendering load, but instead reducing the bitrate of the streamed video when you’re streaming wireless VR over wifi from your desktop PC or Steam Machine.
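          In other words, the gaze data drives the video encoder rather than the renderer; schematically something like this (the radii and bitrate weights are invented, not Valve’s actual implementation):

```python
# Schematic of foveated streaming: the frame is rendered at full quality, and
# the *encoder* spends its bitrate budget unevenly based on where the eyes are
# looking. Region sizes and weights are invented.

def encode_quality(tile_center, gaze_point, fovea_px=200, mid_px=600):
    dx = tile_center[0] - gaze_point[0]
    dy = tile_center[1] - gaze_point[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < fovea_px:
        return 1.0      # full bitrate where the user is actually looking
    if dist < mid_px:
        return 0.5
    return 0.2          # periphery: heavy compression, barely noticeable

gaze = (960, 540)       # say, the centre of a 1920x1080 eye buffer
for tile in [(960, 540), (1200, 600), (1800, 100)]:
    print(tile, "-> bitrate weight", encode_quality(tile, gaze))
```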

          • sp3ctr4l@lemmy.dbzer0.com (OP) · 10 hours ago
            Ok, yeah, I mixed up Foveated Rendering and Streaming.

            So what I’m trying to say is that Valve has figured out a generalized Foveated Streaming layer that just works on all games, full stop.

            So you, a game dev, look at that and build around whatever they’re using in Steam Link to accomplish it: use it as a template/basis to build a sort of dual-device, separated-out async threading mode.

            If Steam Link can move data that fast, then build your game/engine from the ground up with that protocol in mind: assume you’re always going to have a second piece of hardware that you can run some subportion of game logic/engine calls on, some kind of async multithreading.

            Not just native Foveated Rendering in a VR context, but potentially any kind of game/engine-level API or mode that would let this super-fast Steam Link streaming have a game be collaboratively generated by multiple devices linked at this kind of nearby, wireless speed.


            Maybe a rough analog here is a game that is built from the ground up with full Steam Input support: it gives you more options than just using Steam Input as an over-the-top, generalized input translation layer.

            Of course, I think basically only Source games actually have full Steam Input support, and I have no idea if this Foveated Streaming thing is open source or proprietary to Valve.

            If it’s open, theoretically anyone could try to do this.

            If it’s proprietary, well, then it’d just be Valve.


            Maybe another analogy that comes to mind is a weird, little-used way of running ARMA servers.

            So, ARMA games are big, fuckass-complex, open-world milsims; ARMA is literally just the commercial, streamlined version of the much more hardcore milsims they sell to governments/militaries.

            ARMA servers can be run in a way where you have more than one server doing the actual computation. This mode and these protocols do exist in the commercial variants, but they are not well documented and almost nobody uses them, because it’s quite complex.

            Players join the main dedicated server, but the dedicated server can also shunt off tasks, like simulating the movement patterns of hundreds or thousands of NPCs, to other, synchronized servers.

            So the server you as the player join, the dedi, is basically just coordinating traffic and world state between players and NPCs, with the NPCs essentially being simulated on their own servers, which run in a mode optimized for just simulating them.
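            I don’t know ARMA’s actual protocol for this, but the shape of the idea, one coordinator that owns players and world state while helper processes chew on AI groups, is roughly this (all names invented):

```python
# Rough shape of "one dedicated server coordinates, helper servers simulate AI".
# All names are invented; this isn't ARMA's actual networking.
import itertools

class HelperServer:
    """Simulates the AI groups assigned to it and reports summaries back."""
    def __init__(self, name):
        self.name, self.groups = name, []
    def simulate(self):
        return {g: f"simulated by {self.name}" for g in self.groups}

class DedicatedServer:
    """Owns players and the world state; only merges AI results."""
    def __init__(self, helpers):
        self.helpers = helpers
    def assign(self, ai_groups):
        for group, helper in zip(ai_groups, itertools.cycle(self.helpers)):
            helper.groups.append(group)
    def tick(self):
        world = {}
        for helper in self.helpers:
            world.update(helper.simulate())   # merge AI state into the world
        return world

helpers = [HelperServer("ai-node-1"), HelperServer("ai-node-2")]
dedi = DedicatedServer(helpers)
dedi.assign(["convoy_A", "patrol_B", "civilians_C"])
print(dedi.tick())
```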

            This is kinda-sorta analogous to what I’m trying to describe with the Frame and Machine. It’s just that now you as a player, externally networked or not, have a two-hardware combo that is cumulatively producing your whole game experience, which should theoretically be possible if you can implement, at a low level, whatever protocol Steam Link is using to stream data both ways so fast, as they do with Foveated Streaming.

            Right?

            Because that Foveated Streaming has to be hooking into actual hardware inputs from the Frame, the eye tracking; it then bounces that to the Steam Machine, does the calculations, and throws the result back to the Frame, super fast.

            So it has some low-level hardware input access, and obviously the Frame alters what it displays based on what the Machine tells it, so we have two-way, super-low-latency, local wireless communication going on here.

            From the Gamers Nexus interview: basically, this is only possible because the Frame and Machine are talking via a dedicated WiFi 7 dongle that is solely used by them.


            Anyway, no clue if this would actually make practical sense in terms of taking the time to develop it, or what the actual gains would be… I’m just trying to brainstorm things that might potentially be neat/possible.