Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 14 Posts
  • 725 Comments
Joined 2 years ago
Cake day: October 4th, 2023



  • Milsim games involve heavy ray tracing

    I guess it depends on what genre subset you’re thinking of.

    I play a lot of milsims — looks like I have over 100 games tagged “War” in my Steam library. Virtually none of those are graphically intensive. I assume that you’re thinking of recent infantry-oriented first-person-shooter stuff.

    I can only think of three that would remotely be graphically intensive in my library: ArmA III, DCS, and maybe IL-2 Sturmovik: Battle for Stalingrad.

    Rule the Waves 3 is a 2D Windows application.

    Fleet Command and the early Close Combat titles date to the '90s. Even the newer Close Combat titles are graphically minimal.

    688(i) Hunter/Killer is from 1997.

    A number of them are 2D hex-based wargames. I haven’t played any of Gary Grigsby’s stuff, but that guy is an icon, and all his stuff is 2D.

    If you go to Matrix Games, which sells a lot of more hardcore wargames, a substantial chunk of their inventory is pretty old, and a lot is 2D.



  • Yes. For a single change. Like having an editor with 2 minute save lag, pushing commit using program running on cassette tapes or playing chess over snail-mail. It’s 2026 for Pete’s sake, and we won’t tolerate this behavior!

    Now of course, in some Perfect World, GitHub could have a local runner with all the bells and whistles. Or maybe something that would allow me to quickly check for progress upon the push or even something like a “scratch commit”, i.e. a way that I could testbed different runs without polluting history of both Git and Action runs.

    For the love of all that is holy, don’t let GitHub Actions manage your logic. Keep your scripts under your own damn control and just make the Actions call them!

    I don’t use GitHub Actions and am not familiar with it, but if you’re using it for continuous integration or build stuff, I’d think that it’s probably a good idea to have that decoupled from GitHub anyway, unless you want to be unable to do development without an Internet connection and access to GitHub.

    I mean, I’d wager that someone out there has already built some kind of system to do this for git projects. If you need some kind of isolated, reproducible environment, maybe Podman or something similar would do, with some small framework to run it.
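
    For what it’s worth, here’s a minimal sketch of the kind of wrapper I have in mind, assuming your CI logic already lives in a script inside the repo (the ./ci/build.sh path and the Rust container image are just placeholders) and that Podman is installed; the same script is what a thin GitHub Actions workflow would call:

        #!/usr/bin/env python3
        # Sketch: run the repo's own CI script inside a throwaway Podman container,
        # so the same logic runs locally and from a thin GitHub Actions wrapper.
        # Placeholders: ./ci/build.sh and the docker.io Rust image.
        import subprocess
        from pathlib import Path

        repo = Path(__file__).resolve().parent

        subprocess.run(
            [
                "podman", "run", "--rm",
                "-v", f"{repo}:/src:Z",           # mount the working tree
                "-w", "/src",                     # run from the repo root
                "docker.io/library/rust:latest",  # whatever toolchain image fits
                "./ci/build.sh",                  # the actual CI logic lives in the repo
            ],
            check=True,
        )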

    like macOS builds that would be quite hard to get otherwise

    Does Rust not do cross-compilation?

    searches

    It looks like it can.

    https://rust-lang.github.io/rustup/cross-compilation.html
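
    Roughly, with rustup and cargo on the PATH, a local cross-build looks something like the sketch below. The target triples are just examples, and note that rustup only fetches the target’s precompiled standard library; a linker (and, for Apple targets, the macOS SDK) still has to come from somewhere.

        #!/usr/bin/env python3
        # Sketch: drive rustup/cargo to build for several targets from one machine.
        # `rustup target add` only installs the target's standard library; a
        # suitable linker/SDK for each target still has to be provided separately.
        import subprocess

        TARGETS = [
            "x86_64-unknown-linux-gnu",
            "x86_64-pc-windows-gnu",   # needs a mingw-w64 linker on a Linux host
            "aarch64-apple-darwin",    # needs a macOS-capable linker and SDK
        ]

        for target in TARGETS:
            subprocess.run(["rustup", "target", "add", target], check=True)
            subprocess.run(["cargo", "build", "--release", "--target", target], check=True)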

    I guess maybe MacOS CI might be a pain to do locally on a non-MacOS machine. You can’t just freely redistribute MacOS.

    goes looking

    Maybe this?

    https://www.darlinghq.org/

    Darling is a translation layer that lets you run macOS software on Linux

    That sounds a lot like Wine

    And it is! Wine lets you run Windows software on Linux, and Darling does the same for macOS software.

    As long as that’s sufficient, I’d think that you could maybe run MacOS CI in Darling in Podman? Podman can run on Linux, MacOS, Windows, and BSD, and if you can run Darling in Podman, I’d think that you’d be able to run MacOS stuff on whatever.


  • I’m all for running models locally, as long as one can handle the hardware cost of not sharing hardware with other users, for privacy reasons and so forth, but laptops aren’t a fantastic hardware platform for heavy parallel computation.

    • Limited ability to dissipate heat.

    • Many current laptops have a limited ability to be upgraded, especially on memory. Memory is currently a major limiting factor on model size, and right now, laptops are likely to be memory-constrained due to shortages; because most use soldered memory, they can’t be upgraded a couple of years down the line when memory prices are lower.

    • Limited ability to use power on battery.

    In general, models have been getting larger. I think that it is very likely that for almost any area we can think of, we can get a better result by producing a larger model. There are tasks that don’t absolutely need a large model, but odds are that one could do better with a larger model.

    Another issue is that the hardware situation is rapidly changing, and hardware that performs significantly better may well be out before long.

    So unless you really, really need to run your computation on a laptop, I’d be inclined to run it on another box. I’ve commented on this before: I use a Framework Desktop to do generative AI stuff remotely from my laptop when I want to do so. I need very little bandwidth for the tasks I do, and anywhere I have a laptop and a cell link, it’s available. If I absolutely had to have a high-bandwidth link to it, or use it without Internet access, I’d haul the box along with me. Sure, it needs wall power (or at least a power station), but you aren’t going to be doing much heavy parallel computation on a laptop without plugging it into a wall anyway.
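
    To give an idea of how thin that link can be: assuming the box at home exposes an OpenAI-compatible endpoint (llama.cpp’s llama-server and Ollama both offer one), the laptop side is just a small HTTP request. The hostname, port, and model name below are placeholders.

        #!/usr/bin/env python3
        # Sketch: query a model running on a box at home from a laptop over a
        # thin link. Assumes an OpenAI-compatible API on the box; the host,
        # port, and model name are placeholders.
        import json
        import urllib.request

        BOX = "http://my-desktop.example:8080"   # reachable over VPN/reverse proxy
        payload = {
            "model": "whatever-the-box-is-serving",
            "messages": [{"role": "user", "content": "Summarize this paragraph: ..."}],
        }

        req = urllib.request.Request(
            f"{BOX}/v1/chat/completions",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])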

    Even with non-laptop hardware, unless you really, really want to do a bunch of parallel computation in the near future and are willing to pay the cost to do it now, you may be better off waiting until prices come down, especially since a lot of hardware costs have shot up.

    EDIT: I’m also not at all convinced that a lot of the things that one thinks might need to be done on-laptop actually need to be done on-laptop. For example, let’s say that one likes Microsoft’s Recall feature of Copilot. I am pretty sure that someone could put together a bit of software to just do the image recognition and tagging on a home desktop when one plugs one’s laptop in at night to charge — log the screenshots at runtime, but then do the number crunching later. Maybe also do fancy compression then to bring the size down further. Yeah, okay, that way the search index doesn’t get updated until overnight, but we’ve had non-realtime file indexes for a long time, and they worked fine. My crontab on my Linux box updates the locate database nightly to this day.
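
    As a sketch of that “log now, crunch later” shape (everything here is hypothetical: the screenshot directory, the index location, and the caption() stand-in for whatever image model the desktop actually runs), a nightly cron job could look something like this:

        #!/usr/bin/env python3
        # Sketch of a nightly "crunch later" job, run from cron (e.g. "0 3 * * *"):
        # caption the day's screenshots on the desktop and store the text in a
        # small SQLite index. caption() is a stand-in for the real image model.
        import sqlite3
        from pathlib import Path

        SCREENSHOT_DIR = Path.home() / "screenshots"       # hypothetical capture location
        DB = Path.home() / "screenshot-index.sqlite"

        def caption(image: Path) -> str:
            # Stand-in: send the image to the model on the desktop, return its text.
            return f"(caption for {image.name} goes here)"

        con = sqlite3.connect(DB)
        con.execute("CREATE TABLE IF NOT EXISTS captions (path TEXT PRIMARY KEY, text TEXT)")

        for shot in SCREENSHOT_DIR.glob("*.png"):
            seen = con.execute("SELECT 1 FROM captions WHERE path = ?", (str(shot),)).fetchone()
            if not seen:
                con.execute("INSERT INTO captions VALUES (?, ?)", (str(shot), caption(shot)))

        con.commit()
        con.close()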

    EDIT2: Not to mention that if you have a parallel compute box on the network, it can be used by phones or whatever too. And you’ll probably get better results on that kind of image recognition with the much larger model that can run on a box like that.

    I mean, you want to always be solving a user problem. Do people want captioned screenshots of what they’ve been doing, to search their usage history? Maybe. Do they need it immediately, and are they willing to make the associated cost, battery life, and caption-quality tradeoffs for that immediacy? I’m a lot more skeptical about that.

    EDIT3: And I do get that, if you want to provide remote access to a parallel compute box, self-hosting is hard today. But that still seems like a problem that Microsoft is in a prime position to work on. Make it plug-and-play to associate a parallel compute box with a laptop. Plug it in over USB-C like an eGPU. Have a service to set up a reverse proxy for the parallel compute box, or provide the option to use some other service. Hell, provide the option to do cloud compute on it.

    Steam does something like this with Steam Link for local network use, leveraging a “big” box for parallel compute so that small, portable devices can use it for games. Does Microsoft?

    searches

    Yeah.

    https://www.xbox.com/en-US/consoles/remote-play

    Play remotely from your Xbox console

    Play games installed on your console, including titles in the Xbox Game Pass library, on LG Smart TVs, Samsung Smart TVs, Amazon Fire TV devices, and Meta Quest headsets, as well as other browser supported devices like PCs, smart phones, and tablets.

    Yeah. They’re literally already selling parallel compute hardware that you put on your home network and then use on portable devices. I mean, c’mon, guys.


  • There was some similar project that the UK was going to do, run an HVDC submarine line down from the UK to Africa.

    searches

    https://en.wikipedia.org/wiki/Xlinks_Morocco–UK_Power_Project

    The Xlinks Morocco-UK Power Project is a proposal to create 11.5 GW of renewable generation, 22.5 GWh of battery storage and a 3.6 GW high-voltage direct current interconnector to carry solar and wind-generated electricity from Morocco to the United Kingdom.[1][2][3][4] Morocco has been hailed as a potential key power generator for Europe as the continent looks to reduce reliance on fossil fuels.[5]

    If built, the 4,000 km (2,500 miles) cable would be the world’s longest undersea power cable, and would supply up to 8% of the UK’s electricity consumption.[6][7][8] The project was projected to be operational within a decade.[9][10] The proposal was rejected by the UK government in June 2025.


  • I think that one thing to keep in mind is that while the DRAM price increases are very large and have a very large impact, I’ve also seen much smaller price increases on various other components. I think that to some extent, people are hyper-aware of price increases because of the very large changes on DRAM. It’s important to keep perspective on how large each of these actually is, and not to mentally merge “very large price increase on DRAM” with every other shift.

    This (rumor) is 6% to 10%. If it’s real, it’s meaningful, but not overwhelming from an end consumer standpoint. Rotational drives also saw a single-digit price increase, IIRC — more than inflation, but not something staggering, and I remember seeing people getting really upset in a thread about that.




  • I bet a lot of people didn’t have fire insurance.

    EDIT:

    Many wildfire victims didn’t have insurance coverage at all.

    EDIT2:

    https://www.latimes.com/business/story/2025-01-12/california-homeowners-are-getting-cancelled-by-their-insurers-and-the-reasons-are-dubious

    Last year, Francis Bischetti said he learned that the annual cost of the homeowners policy he buys from Farmers Insurance for his Pacific Palisades home was going to soar from $4,500 to $18,000 — an amount he could not possibly afford.

    Neither could he get onto the California FAIR Plan, which provides fewer benefits, because he said he would have to cut down 10 trees around his roof line to lower the fire risk — something else the 55-year-old personal assistant found too costly to manage.

    So he decided he would do what’s called “going bare” — not buying any coverage on his home in the community’s El Medio neighborhood. He figured if he watered his property year round, that might be protection enough given its location south of Sunset Boulevard.

    So the insurer (accurately) predicts that the rate of fire across the entire area has become extremely high, and updates prices to reflect that. They’re doing what they probably should: predicting risk and passing that information through as price information.

    The state will insure at a lower rate, but requires a lot of changes to the property to reduce fire risk before it will go anywhere near it. That’s reasonable (well, maybe not the specific numbers involved, but the general idea of offering lower prices to a homeowner who takes actions to mitigate that extreme fire risk).

    But that mitigation itself isn’t free, and the homeowner is living so close to their financial limits that they can’t afford the mitigation. So they gamble: maybe a fire won’t happen. A fire does happen, and now they can only really sell the property for the land value.

    So, what went wrong there?

    Well, I’d give two suggestions.

    • The insurer assessed risk correctly — its analysis was “this place is facing extreme risk of fire” and it passed that information on as price information…but it didn’t do so as early as it could have. If the homeowner had known years earlier that they’d be facing $18k/year fire insurance, maybe they wouldn’t have moved into the area in the first place, on the grounds that it’d be just too expensive to live there. People sign insurance contracts on a shorter-term basis than they decide where to live. The incentive this creates is for insurers to analyze risk on a one-year basis. The price information gets passed on to homeowners, but it may not give them enough time to act. We could restructure the insurance market by doing something like capping annual percentage rates of increase once someone has obtained a given rate. That’d push insurers toward more conservative, longer-term risk assessment: what will the fire risk in the area be five or ten years from now, not just over the next year? Do that and people will tend to pay more for fire insurance on average, because insurers will have to take into account that it’s harder to predict five or ten years out. But it also gives homeowners a longer time window to make decisions.

    • I think that in general, people should maintain a larger financial buffer. “I need to have a number of trees on my property cut down in order to avoid a very large insurance rate hike” is the kind of unexpected expense that’s probably worth paying. But if people don’t establish and maintain liquid assets for that kind of situation, then that option goes away.

      https://homeguide.com/costs/tree-removal-cost

      How much does tree removal cost?

      $400 – $1,200 average cost

      Taking the upper end there as his worst case, the most that removing those ten trees would have cost was $12,000. Spending $12k to avoid an $18k annual premium is an investment that pays for itself within the first two-thirds of a year. That is an awfully good move, financially. If you maintain a 6-months-of-income emergency fund, then that should probably cover it. But…if you can’t find the $12k, then it ain’t happening.
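
      Just to spell out the arithmetic with the article’s figures (treating the full $18k Farmers quote as the avoided annual cost, which overstates the saving a bit, since whatever the FAIR Plan premium would have been isn’t given):

          # Worst-case removal cost vs. the quoted premium, per the figures above.
          trees, cost_per_tree = 10, 1_200
          removal_cost = trees * cost_per_tree                 # $12,000
          avoided_annual_cost = 18_000                         # quoted Farmers premium
          payback_years = removal_cost / avoided_annual_cost   # 0.67, about 8 months
          print(f"${removal_cost:,} up front, recouped in roughly {payback_years:.2f} years")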

      I would very much like Americans to maintain a larger financial buffer than they do. Here, it would have paid off in a big way for the person involved: had they done it, they’d have had an insurance payout or maybe even saved their house. But…okay, lots of people have recommended maintaining a buffer, and it certainly isn’t something that always happens. Say that doesn’t happen here. What other options are there? Maybe the person here could have taken out $12k in debt against the house and spread out paying it back. If the California state insurance program’s assessment of the risk reduction from removing the trees is correct — and unless the lower price just reflects a state subsidy, it should be — it might make sense for the state insurance program to say “Okay, I’ll offer an all-in-one package, with financing. You take out $N in debt against your equity, you agree to a series of mitigations that the loan funds, like removing trees on the property, and we provide the near-term funds for it.” That’s not free, but it’s also not one big $12k bill, either. Maybe that’s something that the homeowner in question could have done. It would basically let the state get fixes for high fire risk in place more rapidly.

      The homeowner in question presumably — unless he’s violating his mortgage contract — has paid off his house, because his mortgage will very probably require him to maintain fire insurance as long as he has an outstanding mortgage, to mitigate the lender’s risk. So he should have equity to secure that loan.

      Okay, but say what he actually did was to violate his mortgage contract and run with both a mortgage and no fire insurance. What then? How do you avoid that scenario? Well, one option might be for the state to require that mortgages come bundled with fire insurance, with the mortgage lender being the one to obtain a fire insurance contract — at least on the percentage of the property that they own — which forces the price information about fire insurance to be baked into the mortgage price. That’ll make mortgages in the state more expensive; it’ll also mean that either the mortgage lender or the insurer will have to do long-term risk assessment to price in fire risk, if you’re talking about mortgages that run on the order of decades. But it makes things a lot more straightforward for the homebuyer: one up-front number, attached to the purchase price, that should represent long-term risk.


  • I have never used Arch. And it may not be worthwhile for OP. But I am pretty confident that I could get that thing working again.

    Boot into a rescue live distro from USB, mount the Arch root somewhere, bind-mount /sys, /proc, and /dev from the host onto the Arch root, then chroot to a bash on the Arch root, and you’re basically in the installed Arch environment: you should be able to do package management, have DKMS work, etc.
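
    Spelled out as a script rather than shell, with the root partition and mount point as placeholders (this has to run as root from the live environment), the procedure is roughly:

        #!/usr/bin/env python3
        # Sketch of the rescue procedure from a live USB environment.
        # /dev/sdXn and /mnt/arch are placeholders; run as root.
        import os
        import subprocess

        root_dev, mnt = "/dev/sdXn", "/mnt/arch"

        os.makedirs(mnt, exist_ok=True)
        subprocess.run(["mount", root_dev, mnt], check=True)
        for fs in ("/proc", "/sys", "/dev"):
            subprocess.run(["mount", "--bind", fs, mnt + fs], check=True)

        # Enter the installed system; from here, pacman, mkinitcpio, DKMS, etc.
        # operate on the Arch install rather than on the live environment.
        os.chroot(mnt)
        os.chdir("/")
        os.execvp("bash", ["bash"])

    If memory serves, the arch-chroot helper on the Arch live ISO does the bind mounts for you, so in practice it can be as little as a mount plus one command.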