

Would be interesting to see the stats for revenue by game: price times volume. If someone charges $300 for a game that no one bought, then it shouldn't count, hypothetically.


While true, the issue is that Debian's release cadence means they will always be "behind" kernel- and Wine-wise.
Also, they are more purist and less likely to facilitate proprietary bits. Last time I tried Wine, a lot of apps didn't work because they had done no work to enable non-free fonts. So they may have the same general packaging strategy, but the vintage of content and the scope are distinctly different from more aggressive distributions.


What tax prep software do you use? I thought they had all gone to just webapps at this point …


This all presumes that OpenAI can get there, and further that it is exclusively in a position to get there.
Most experts I've seen don't see a logical connection between LLMs and AGI, and OpenAI has all their eggs in that basket.
To the extent LLMs are useful, OpenAI arguably isn't even the best at it. Anthropic tends to make them more useful than OpenAI, and now Google's models are outperforming OpenAI's on the relatively pointless benchmarks that used to be OpenAI's bragging point. They aren't the best, the most useful, or the cheapest. They were first, but that first-mover advantage hardly matters once you get passed.
Maybe it would be different if they were demonstrating advanced robotics control, but other companies are mostly showing that while OpenAI remains "just a chatbot", with the more useful usage of their services going through third parties that tend to be LLM agnostic, and increasingly I see people select non-OpenAI models as their preference.


Fun story: my car had a recall for the brake light coming on randomly. After they replaced the part, the brake light wouldn't come on at all. Then they made it so the brake light would only sometimes come on. I said screw it and finally fixed it myself. The pedal pushed down on two different things: one to actually operate the brakes, and a separate little button for the electronic brake indication, which drives the lights and tells the cruise control to disengage (the cruise control had also stayed active even when hitting the brake pedal).
Anyway, they had screwed up seating the electronic button, and I had to position it correctly in its little bracket so that it gets pressed when the brake pedal barely moves, even though it takes a smidge of actual travel before the real braking starts.


Yeah, but in relatively small volumes and mostly as a 'gimmick'.
The Cell processors were 'neat' but enough of a PITA as to largely not be worth it, combined with an overall package that wasn't really intended to be managed headless in a datacenter, and sub-par networking that sufficed for internet gaming but not as a cluster interconnect.
IBM did have higher-end Cell processors, at predictably IBM-level pricing, in more appropriate packaging with proper management, but it was pretty much a commercial flop since, again, the Cell processor just wasn't worth the trouble to program for.


Unlikely.
Businesses generally aren’t that stoked about anything other than laptops or servers.
To the extent they have desktop grade equipment, it’s either:
On the server side, the Steam Machine isn't that attractive since it's designed neither to be slapped in a closet and ignored nor slotted into a datacenter.
Putting all this aside, businesses love simplicity in their procurement. They aren't big on adding a vendor for a specific niche when they can use an existing vendor, even if in theory they could shave a few dollars in cost. The logistical burden of adding the Steam Machine would likely offset any imagined savings, especially if they had to own re-imaging and licensing, when today's vendor preloads give them product keys embedded in the firmware.
Maybe you could worry a bit more about the consumer market, where people micro-manage costs and are more willing to invest their own time, but even then the market for non-laptop home systems whose owners don't think they need nVidia but still want something better than an integrated GPU is so small that it shouldn't be a worry either.


Consoles are sold at a loss, and they recover it with games because the platform is closed.
Sometimes, but evidently not currently. Sources seem to indicate that only Microsoft says it is selling at a loss, which seems odd since its bill of materials looks like it should be pretty comparable to the PS5's…
I'll agree with the guess of around $800, but like you say, with the supply pressure on RAM and storage as well as the tariff situation all over the place, it's hard to say.


I think it's a response to the sentiment that Sony somehow got bit by selling the PS3 at a loss because it triggered some huge supercomputing purchases of the system that Sony supposedly wouldn't have liked, and that if Valve got too close to that, suddenly a lot of businesses would tank the economics by buying machines in bulk and never buying any games.
Sony loved the exposure and used it as marketing fodder that their game consoles were “supercomputer” class. Just like they talked up folding@home on them…


Yeah, but as adults we start to just declare we are going to suck it up more.
Wait, being irritated by tags in shirts is an autism thing? I just thought it was a pretty common kid thing…


But the two were all smiles throughout, with Trump even siding with the soon-to-be first Muslim mayor of New York over one of his GOP allies, Rep. Elise Stefanik, who’d called Mamdani a “jihadist.”
“She’s out there campaigning and you say things sometimes in a campaign,” Trump said of Stefanik, who’s running for governor of New York. “I met with a man who’s a very rational person. I met with a man who wants to see New York be great again.”
Well that’s not what I expected…


Keep in mind how the critical affordability issue landed in the news as we recovered from COVID and from the supply chain impacts of the Ukraine war. During his first term, inflation was pretty much the same as it had been since 1990. Then during Biden's term there was 7%, then a further 6.5% on top of that, then another 3.4% on top of that, and then 2.9% on top of that. So there's a correlation: things became even more rapidly unaffordable, and in such cases the president inevitably gets the blame whether it makes sense or not.
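Those year-over-year figures compound, which is why the cumulative effect feels so much larger than any single year. A quick sketch using the percentages cited above (the rates themselves are just the figures from this comment, not an authoritative series):

```python
# Compound the year-over-year inflation figures quoted above
# (7%, 6.5%, 3.4%, 2.9%) into one cumulative price increase.
rates = [0.07, 0.065, 0.034, 0.029]

cumulative = 1.0
for r in rates:
    cumulative *= 1 + r  # each year's inflation applies on top of the last

print(f"Cumulative increase: {(cumulative - 1) * 100:.1f}%")  # ~21.2%
```

So prices roughly 21% higher over four years, versus the ~2% per year people had been used to.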
His first term was pretty incompetent and corrupt, but nowhere near as maliciously and successfully corrupt as this go-around. On the matter of deaths, while the USA was by the data among the worst, almost in the 10 worst nations for per-capita deaths, the subjective coverage was "globally lots of people are dying", so in the perception that "no one has it good", it wasn't Trump's fault specifically.
Generally speaking, in these circumstances people are just voting against the state of things rather than for high-minded ideals. Trump lost because people hated things under COVID. Harris lost because the economic reaction to the recovery was all messed up, and so a change was demanded.
I share the shock that people actually went for it, but I’m not surprised that this seemingly nonsensical situation could happen.


Yes, just some people figuring out that Grok was steered toward ass-kissing Musk no matter what, and exploited that for funny output. So the takeaways are:


help explain the relationships in a complicated codebase succinctly
It will offer an explanation, one that sounds consistent, but it's a crap shoot whether it accurately describes the code, and there's no easy way of knowing if the description is good or bad without reviewing the code yourself.
I do try to use the code review feature, though that can declare a bug based on bad assumptions fairly often as well. It's been wrong more times than it has caught something for me.


No, just complete. Whatever the dude does may have nothing to do with what you needed it to do, but it will be "done".


Also assuming it became prolific enough to appear in output, would that mean it is “correct”?


I would assume that a screen reader will pronounce it properly, and if it doesn't, that reader needs an update. I still think it's pointless and kind of annoying to try to resurrect that character from the past, but at least screen readers should in principle be able to pronounce it.


Note that this outage by itself, based on their chart, was kicking out errors over a span of about 8 hours. This one outage alone would have almost entirely blown their downtime allowance under a 99.9% availability criterion.
If one big provider actually delivered 99.9999%, that would be about 30 seconds of total outages over a typical year, not even long enough for users to generally be sure there was an 'outage'. That wouldn't be bad at all.
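The downtime numbers above fall out of simple arithmetic on the availability target. A minimal sketch (assuming a 365.25-day year; the function name is my own, not from any SLA spec):

```python
# Seconds of downtime allowed per year at a given availability target.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def downtime_allowance(availability: float) -> float:
    """Downtime budget in seconds per year for the given availability."""
    return SECONDS_PER_YEAR * (1 - availability)

print(f"99.9%    -> {downtime_allowance(0.999) / 3600:.2f} hours/year")
print(f"99.9999% -> {downtime_allowance(0.999999):.1f} seconds/year")
```

Three nines works out to roughly 8.8 hours a year, which is why a single 8-hour incident essentially consumes the whole budget, while six nines leaves only about 31 seconds.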


It's pretty much a vibe coding issue. What you describe I can recall being advocated forever: the project manager's dream that if you model and spec things out enough and perfectly capture the world in your test cases, then you are golden. Except the world has never been so convenient, and you bank on the programming being reasonably workable by people to compensate.
The problem is people who think they can replace understanding with vibe coding. If you can only vibe code, you will end up with problems that you cannot fix and the LLM can't either. If you can fix the problems, then you are not inclined to accept overly long chunks of LLM output anyway, because they generate ugly, hard-to-maintain code that tends to violate all sorts of programming best practices.