• 0 Posts
  • 160 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • That’s just pointing out that upgrades carry a large price premium, not that the base model is sold at a loss.

    Which is a super common strategy in pre-builts, especially in systems that can’t even in theory take third-party upgrades. Commonly a mobile platform will charge a hundred-dollar premium for maybe 20 dollars’ worth of UFS storage. At some points PC vendors have even done DIMM SPD lockouts to force customers to first-party parts so they can charge a significant multiple of market rate.

    I doubt anything in Apple’s lineup is sold at a loss. They might tolerate slimmer margins on entry, but I just don’t think they go negative.



  • I think that was overstated. Sure, there were some “fun” projects done for the novelty or the publicity.

    However, supercomputer clusters require a higher-performance interconnect than the PS3 could offer. At that time it would have been DDR InfiniBand (about 20 Gbps) or 10G Myrinet.

    Sure, gigabit Ethernet was prevalent, but generally at places that would also have little tolerance for something as “weird” as the Cell processor.

    OtherOS was squashed out of fear of the larger jailbreak surface.



  • Well, there’s only so much in gaming that can reasonably be done server-side.

    Sure, the server could identify that a player shouldn’t be visible and not transmit that location to the client, addressing seeing through walls, in theory (a rough sketch of that idea is at the end of this comment).

    But once a player is legitimately visible, an aimbot can act. If you are crawling in a ghillie suit in the grass, but the other player has a client that skips rendering grass and replaces the ghillie suit model with a suit made of traffic cones…

    Now, intrusive anti-cheat isn’t worth it, but it is an unavoidable reality that some of the integrity is left to the client to preserve.

    The closest you get would be streamed gameplay, where even the rendering is server-side. Also not worth it. But even then I could see machine-vision cheats and faked inputs being used to get an unfair edge.
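
    To make the wall-hack half of that concrete, here is a minimal sketch of server-side visibility filtering, assuming a hypothetical 2D game state and a deliberately crude line-of-sight test (every name here is invented for illustration, not any real engine’s API):

```python
# Hypothetical sketch: the server only sends positions of players the viewer
# could plausibly see, so a wallhack client has nothing hidden to reveal.
# The line-of-sight test is deliberately simplified (2D grid of wall cells).

from dataclasses import dataclass


@dataclass
class Player:
    pid: int
    x: float
    y: float


def line_of_sight(a: Player, b: Player, walls: set[tuple[int, int]]) -> bool:
    """Crude LOS check: sample points along the segment between a and b
    and fail if any sample lands inside a wall cell."""
    steps = 50
    for i in range(steps + 1):
        t = i / steps
        px = a.x + (b.x - a.x) * t
        py = a.y + (b.y - a.y) * t
        if (int(px), int(py)) in walls:
            return False
    return True


def visible_state_for(viewer: Player, players: list[Player],
                      walls: set[tuple[int, int]]) -> list[dict]:
    """Build the per-client update: only players with line of sight are
    included. Anything the client never receives, it cannot render or aim at."""
    return [
        {"pid": p.pid, "x": p.x, "y": p.y}
        for p in players
        if p.pid != viewer.pid and line_of_sight(viewer, p, walls)
    ]


# Example tick: player 2 is behind a wall, so player 1's update omits them.
walls = {(5, y) for y in range(10)}
players = [Player(1, 2.0, 2.0), Player(2, 8.0, 2.0), Player(3, 3.0, 4.0)]
print(visible_state_for(players[0], players, walls))  # only pid 3 shows up
```

    The limit is exactly the one described above: the moment line of sight is legitimate, the client holds the positions and an aimbot can use them.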






  • Pretty spot on, and notably Valve actually seemed to remember the lesson.

    Their first go at it was “make a viable platform and the developers/publishers will make the effort to come over, and hardware partners will step up with offerings because of Valve’s brand strength and fear of the Microsoft Store screwing everything up”. That didn’t work, and the Microsoft Store also didn’t pan out to be the threat Valve and others feared, though Microsoft has been kind of screwing up the platform, particularly for games, as it chases subscription revenue instead of transactional revenue.

    Valve learned they needed to work harder to bring the platform to the Windows games, hence the heavy investment in Proton. They learned they had to take the hardware platform into their own hands, because the OEMs aren’t committed until they see proof it can work for them. And they learned that the best way to package those improved efforts was with a “hook” that had mass-market appeal: enter the Steam Deck, recognizing the popularity of the Switch form factor and bringing it to the PC market at a time when no one else was bothering.

    So now they have a non-Android, non-Windows ecosystem that covers handheld, console/desk, and VR with a compelling library of thousands and thousands of games…


    This is more about thinking in terms of material cost rather than relative value. If you save money on the passthrough and incur a few costs above the Quest 3, but nothing dramatic, then I’m just saying the pricing needs to be in the ballpark of the Quest 3. Better value comes from making smarter choices that may not have a cost impact (e.g. using a mainstream high-end SoC instead of a niche SoC, putting the battery at the back instead of making the headset front-heavy).

    Of course, they may be hampered by different business needs. Meta being able to risk more money than Valve can might drive a higher price point, but that would be unfortunate.


    The SoC may be better, but I don’t know that it would be more expensive. Meta went with a more niche SoC and Valve selected a more mainstream, newer SoC: better specs, but also larger volumes, so cost-wise I think Valve should be fine. Comfort certainly seems like it should be better, but I don’t know that I see extra cost as the factor there versus just making better decisions.

    The wireless dongle certainly can be a point in its favor. I’m just thinking that, on balance, there are some things that should add to the BOM price and some that should save on it, and it should roughly land in the ballpark of the Quest 3 when all is said and done, not at 2x the cost.


  • Well, even with your observation, it could well be losing share to Mac and Linux. Windows users are more likely to jump ship, and Mac and Linux users tend to stick with their platform, mainly because it isn’t actively working to piss them off. Even if zero of the departing Windows users land on Mac or Linux, the share could still shift.

    The upside of “just a machine to run a browser” is that it’s easier than ever to live with the Linux desktop, since that nagging application or two that keeps you on Windows has likely moved to being browser-hosted anyway. The downside, of course, is that it’s much more likely that the app extracts a monthly fee from you instead of letting you “just buy it”.

    Currently for work I’m all Linux, precisely because work was forced to buy Office 365 anyway, and the web versions work almost as well as the desktop versions for my purposes (I did have to boot Windows once because I had to work on a presentation and its weird-ass “master slide” needed to be edited, which for whatever reason is not allowed on the web). VS Code natively supports Linux (well, “natively”; it’s a browser app disguised as a desktop app), but I would generally prefer Kate anyway (except work is now tracking our GitHub Copilot usage, so I have to let Copilot throw suggestions at me to discard in VS Code or else get punished for failing to meet stupid objectives).


  • “Agentic” is the buzzword to distinguish “the LLM will tell you how to do it” from “the LLM will just execute the commands it thinks are right”.

    Particularly if a process is GUI-driven, agentic is seen as the more theoretically useful approach, since an LLM “how-to” would still be tedious to walk through yourself.

    Given how often an LLM mis-predicts and doesn’t do what I want, I’m nowhere near the point where I’d trust “agentic” approaches. Hypothetically, if it could be constrained to a domain where it can’t do anything that can’t trivially be undone, maybe; but given, for example, a recent VS Code issue where the “jail” placed around agentic operations turned out to be ineffective, I’m not thinking too much of such claimed mitigations.
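
    To illustrate the “can’t do anything that can’t trivially be undone” idea, here is a minimal hypothetical sketch of an agent loop that only executes allowlisted, reversible actions and records an undo step for each one (ask_llm() is a stand-in stub and the action names are invented; this isn’t any particular product’s API):

```python
# Hypothetical sketch of an "agentic" loop constrained to reversible actions.
# ask_llm() is a stand-in for whatever model call would really be made.
from pathlib import Path


def ask_llm(goal: str) -> list[dict]:
    """Stub: pretend the model proposed these steps for the goal."""
    return [
        {"action": "create_file", "path": "notes/plan.txt", "content": "draft"},
        {"action": "delete_file", "path": "important.db"},  # should be refused
    ]


def run_agent(goal: str) -> None:
    undo_stack = []
    for step in ask_llm(goal):
        action = step["action"]
        if action == "create_file":
            # Reversible: the undo is simply deleting the file we created.
            path = Path(step["path"])
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(step["content"])
            undo_stack.append(lambda p=path: p.unlink())
            print(f"created {path}")
        else:
            # Anything not on the allowlist of reversible actions is refused.
            print(f"refused: {action} is not trivially undoable")
    # If anything looks wrong afterwards, roll back in reverse order:
    # for undo in reversed(undo_stack): undo()


run_agent("set up a project plan")
```

    The hard part, as that VS Code issue showed, is making the “refused” branch actually inescapable; a leaky jail gives you all the risk with none of the reassurance.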


  • My career is supporting business Linux users, and to be honest I can see why people might be reluctant to take on Linux users.

    “Hey, we implemented a standard partition scheme that allocates almost all our space to /usr and /var; your installer using /opt doesn’t give us room to work with” versus “Hey, your software went into /usr/local, but clearly the Linux filesystem standard is for such software to go into /opt”. The good news is that Linux is flexible, and sometimes you can point out “you can bind mount /opt to whatever you want” (a sample fstab entry for that is at the end of this comment), but then some of them will counter with “that sounds like too much of a hack, change it the way we want”.

    Now, this example by itself is simple enough to handle: make that facet configurable. But rinse and repeat for an insane number of possible choices. Another group at my company supports Linux, but only as a whole virtual machine provided by the company; the user doesn’t get to pick the distribution or even access bash on the thing, because they hate the concept of trying to support Linux users.

    Extra challenge: supporting an open source project with the Linux community. “I rewrote your database backend to force all reads to be aligned at 16k boundaries, because I made a RAID of 4k disks and figured 16k alignment would work really well with my storage setup, but I ended up cramming up to 16k of garbage into some results, and I’m going to complain about the data corruption, and you won’t know about my modification until we screen-share and you try to trace it and see some seeks that don’t make sense.”
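
    For what it’s worth, the bind-mount workaround mentioned above is about one line of configuration; a hypothetical /etc/fstab entry (the /var/opt-storage path is made up for the example) could look like:

```
# Hypothetical fstab entry: back /opt with space from the big /var filesystem
/var/opt-storage  /opt  none  bind  0  0
```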





  • I think a key difference is that Firefox is an eternally evolving codebase that has to do new stuff frequently. It may have been painful, but it’s worth biting the bullet for the sake of the large volume of ongoing changes.

    For sudo/coreutils, I feel like those projects are more ‘settled’ and unlikely to need a lot of ongoing work, so the risk/benefit analysis cuts a different way.


  • It’s more like saying “why tear down that house and try to build one just like it in the same spot?”

    So the conversation goes:

    “when it was first built, it had asbestos and lead paint and all sorts of things we wouldn’t do today”

    “but all that was already fixed 20 years ago, there’s nothing about its construction that’s really known to be problematic anymore”

    “But maybe one day they’ll decide copper plumbing is bad for you, and boy it’ll be great that it was rebuilt with polybutylene plumbing!”

    Then after the house is rebuilt, it turns out that polybutylene actually was a problem, and the copper was just fine.