Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 20 Posts
  • 1.02K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • He could probably run an NFS server that isn’t a closed box, and have that just use the Synology box as storage. That’d give him whatever options Linux and/or the NFS server he chooses to run provide for fair write prioritization or a larger cache (say he has bursty load and blows through the cache on the Synology NAS; a Linux NFS server with more write cache available could potentially slurp up writes quickly and then hand them off to the NAS more slowly).

    Honestly, though, I think that a preferable option, if one doesn’t want to mess with global client VM options (which wouldn’t be my first choice, though it sounds like OP is okay with it), is just to crank up the timeout options on the NFS clients, as I mention in my other comment. That works if he just doesn’t want timeout errors to percolate up and doesn’t mind the NAS taking a while to finish whatever it’s doing in some situations. It’s possible that he tried that, but I didn’t see it in his post.

    NFSv4 has leases, and — I haven’t tested it, but it’s plausible to me from a protocol standpoint — it might be possible to set things up such that as long as a lease can be renewed, outstanding file operations don’t time out, even if they’re taking a long time. The Synology NAS might be able to avoid timing out as long as it’s reachable, even if it’s doing a lot of writing. That’d still let you know if your NFS server wedged or you lost connectivity to it, because your leases would go away within a bounded amount of time, but it wouldn’t put a deadline on how long other operations take to complete. No guarantees; it’s just something that I might go look into if I were hitting this myself.


  • That’s a global VM setting, which is also going to affect your other filesystems mounted by that Linux system, which may or may not be a concern.

    If that is an issue, you might also consider — I haven’t tested these, but I’d expect them to work:

    • Passing the sync mount option on the client for the NFS mount. That disables writeback caching for that filesystem, which may impact performance more than you want.

    • Increasing the timeo= or retrans= NFS mount options on the client. These will keep the client from timing out and deciding that the NFS server is taking excessively long (though an operation may still take longer to complete if the NFS server is slow to respond).
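    A sketch of what those client-side options might look like in /etc/fstab (the hostname, export path, and values here are hypothetical; check nfs(5) and mount(8) for your system’s defaults):

```
# Hypothetical /etc/fstab entries; "nas" and "/volume1/share" are placeholders.

# Option 1: synchronous writes (no client writeback caching; slower,
# but the client never buffers a burst it then struggles to flush):
nas:/volume1/share  /mnt/nas  nfs  sync  0  0

# Option 2: a longer timeout (timeo= is in tenths of a second) and more
# retransmits before the client gives up on a slow server:
nas:/volume1/share  /mnt/nas  nfs  timeo=1200,retrans=5  0  0
```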




  • I think that kisses having magic powers is just something of a general theme for stories at the time and place that the Brothers Grimm were collecting folklore, not something gender-specific.

    https://en.wikipedia.org/wiki/Sleeping_Beauty

    “Sleeping Beauty” (French: La Belle au bois dormant, or The Beauty Sleeping in the Wood;[1][a] German: Dornröschen, or Little Briar Rose, Italian: La Bella Addormentata), also titled in English as The Sleeping Beauty in the Woods, is a fairy tale about a princess cursed by an evil fairy to sleep for a hundred years before being awakened by a handsome prince.

    The version collected and printed by the Brothers Grimm was one orally transmitted from the Perrault version,[10] while including its own attributes like the thorny rose hedge and the curse.[11]

    There, it’s a prince’s kiss that breaks a curse.

    https://en.wikipedia.org/wiki/The_Frog_Prince

    “The Frog Prince; or, Iron Henry” (German: Der Froschkönig oder der eiserne Heinrich, literally “The Frog King or the Iron Henry”) is a German fairy tale collected by the Brothers Grimm and published in 1812 in Grimm’s Fairy Tales (KHM 1).

    There, it’s a princess’ kiss.

    EDIT: Though I suppose one could take issue with the disproportionate-to-population level of royalty involved in doing all this kissing.

    EDIT2: You know, oddly enough, I’m racking my brain and I can’t think of present-day legends and stories where kisses do magical or supernatural things. There are some characters I can think of where a kiss might have some incidental effect — I’m pretty sure that I vaguely remember there being some Marvel Comics X-Men story where Rogue kisses her boyfriend and puts him in a coma, as an incidental effect of skin-on-skin contact. There are some kiss-adjacent things, like vampire stories where a kiss segues into a bite on the neck. But magical kissing seems to be out-of-vogue today.



  • Oh, this is neat.

    I do kind of wish that there was more of a summary of how it works on the page from a user standpoint. For example, the page links to niri, which the author previously used. That describes the basic niri paradigm right up top:

    Windows are arranged in columns on an infinite strip going to the right. Opening a new window never causes existing windows to resize.

    Every monitor has its own separate window strip. Windows can never “overflow” onto an adjacent monitor.

    Workspaces are dynamic and arranged vertically. Every monitor has an independent set of workspaces, and there’s always one empty workspace present all the way down.

    The workspace arrangement is preserved across disconnecting and connecting monitors where it makes sense. When a monitor disconnects, its workspaces will move to another monitor, but upon reconnection they will move back to the original monitor.

    I mean, you can very quickly skim that and get a rough idea of the way niri would work if you invested the time to download it and get it set up and use it.

    It does say that reka uses river, and maybe that implies certain conventions or functionality, but I haven’t used any river-based window managers, so it doesn’t give me a lot of information.


  • Game streaming services are never going to catch on because the capital needed to build out the infrastructure is ridiculous.

    I don’t know about “never”, but I’ve made similar arguments on here predicated on the cost of building out the bandwidth. I don’t think that we’re likely to get to the point any time soon where computers living in datacenters are a general-purpose replacement for non-mobile gaming, just because of the cost of building out the bandwidth from datacenter to monitor. Any benefit from having a remote GPU just doesn’t compare well with the cost of effectively running a monitor-to-computer cable from every concurrently-used computer to the nearest datacenter.

    But…I can think of specific cases where they’re competitive.

    First, where power is your relevant constraint. If you’re using something like a cell phone or other battery-powered device, it’s a way to deal with power limitations. I mean, if you’re using even something like a laptop without wall power, you probably don’t have more than 100 Wh of battery, absent USB-C and an external power station or something, due to airline restrictions on laptop battery size. If you want to be able to play a game for, say, 3 hours, then your power budget (not just for the GPU, but for everything) is something like 30 W. You’re not going to beat that limit unless the restrictions on battery size go away (which…maybe they will, as I understand that there are some more-fire-safe battery chemistries out there).

    And cell phone battery limits are typically even tighter, like 20 Wh. That means that for three hours of gaming, the size constraints on the phone put your power budget at maybe 6 or 7 watts.

    If you want power-intensive rendering on those platforms, remote rendering is your only real option.
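    To put rough numbers on the above (the 100 Wh and 20 Wh figures are the ones mentioned; everything is back-of-the-envelope):

```python
# Average sustained draw that fully drains a battery over a play session.
def power_budget_w(battery_wh: float, hours: float) -> float:
    return battery_wh / hours

laptop_w = power_budget_w(100, 3)  # ~33 W for the entire machine
phone_w = power_budget_w(20, 3)    # ~6.7 W
```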

    Second, there are (and could be more) video game genres where you need dynamically-generated images, but where latency isn’t really a constraint. Like, a first-person shooter has some real latency constraints. You need to get a frame back in a tightly bounded amount of time, and you have constraints on how many frames per second you need. But if you were dynamically-rendering images for, I don’t know, an otherwise-text-based adventure game, then the acceptable time required to get a new frame illustrating a given scene might expand to seconds. That drastically slashes the bandwidth required.
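    As a toy comparison (the resolution and bytes-per-pixel are my own assumptions, and this ignores compression entirely, so only the ratio matters):

```python
# Raw, uncompressed video bandwidth in megabits per second.
def stream_mbps(width: int, height: int, bytes_per_px: int, fps: float) -> float:
    return width * height * bytes_per_px * fps * 8 / 1e6

shooter = stream_mbps(1920, 1080, 3, 60)       # 60 FPS, tight latency bound
adventure = stream_mbps(1920, 1080, 3, 1 / 5)  # one scene image per 5 seconds
# shooter needs 300x the raw bandwidth of adventure
```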

    What I don’t think is going to happen in the near future is “gaming PC/non-portable video game consoles get moved to the datacenter”.



  • I don’t know what the situation is for commercial games — I don’t know if there’s a marketplace like that — but I do remember someone setting up some repository for free/Creative Commons assets a while back.

    goes looking

    https://opengameart.org/

    It’s not highly-structured in the sense that someone can upload, say, a model in Format X and someone else can upload a patch against that model or something like that with improvements and changes, though. Like, it’s not quite a “GitHub of assets”.

    I haven’t looked at it over time, but I also don’t think that we’ve had an explosion in inter-compatible assets there. Like, it’s not like a community forms around a particular collection of chibi-style sprite artwork at a particular resolution, and then lots of libre games use those assets, the way RPGMaker or something has collections of compatible commercial assets.

    I’m sure that there must be some sort of commercial asset marketplace out there, probably a number, though I don’t know if any span all game asset types or if they permit easily republishing modifications. I know that I’ve occasionally stumbled across a website or two that have individuals sell 3D models.


  • My first question is “why is that the case?”

    Like, is FAIR being (rationally) chosen because people simply cannot afford the private plans on offer, and private plans don’t provide a minimal-enough level of coverage? If so, maybe the problem is actually that we need more availability of housing, that people are financially-stretched too far.

    Or are people irrationally getting too little fire insurance, and FAIR just provides an opportunity to do that? Then you’d think that we should improve the information available about fire insurance plans.

    Or is FAIR providing a better cost-for-value, in which case one would want to look at a breakdown of why private plans would cost more — like, is the market not competitive?

    My own gut guess is that the most-likely largest culprit is the first, because I am comfortable saying that California has a very real undersupply of housing, which makes housing highly-unaffordable in California, and causes people to be under greater financial pressure. Like, we’d like to have more housing, which would reduce housing prices, which would permit people to spend less on housing, which would permit people to be less-financially-stretched, which would let people, among other things, spend more on insurance for that housing. I don’t know that that’s the dominant factor, but I’m pretty sure that it is a factor.

    searches for an affordability metric

    https://www.affordabilityindex.org/rankings/states/

    On this metric, California ranks #43 out of 50 states plus DC on housing affordability, measuring what housing costs relative to income. That’s not the bottom of the bin, so it’s likely that one can’t chalk it up only to that, but it isn’t great, and I’d bet that it is a substantial factor.

    takes another look at the metric

    I’d also guess that it’s pretty good odds that the ratio being computed (income to price) is using pre-tax income, and California is exceptionally high in absolute cost of housing among the states, second only to Hawaii. Because we use a progressive income tax system, each additional dollar of income does less to make housing affordable as income rises; instead, some of it effectively goes towards subsidizing standard of living in other states that have lower median income. So you’d expect the affordability issues in California to be more severe than the raw ratio suggests: California’s higher income has less real-world effect than the lower housing prices in other states do, relative to the ratio that the metric uses.
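    As a toy illustration of that last point (every number here is invented, including the effective tax rates):

```python
# Two hypothetical states with identical pre-tax income-to-price ratios.
def affordability(income: float, price: float) -> float:
    return income / price

high_cost = affordability(120_000, 600_000)  # high income, high prices
low_cost = affordability(60_000, 300_000)    # half the income, half the prices
# The pre-tax metric sees them as equally affordable (both 0.2), but if a
# progressive tax takes 30% of the higher income and only 20% of the lower
# one, the after-tax ratios diverge:
high_cost_net = affordability(120_000 * 0.70, 600_000)  # 0.14
low_cost_net = affordability(60_000 * 0.80, 300_000)    # 0.16
```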

    EDIT: This affordability metric ranks California at #47 on housing affordability out of 50 states plus DC.

    https://www.realtor.com/research/state-report-cards-2025/

    It’s also based on median (I assume pre-tax) income relative to house price.




  • I think that you have two factors here. GDC isn’t specific to PC gaming, and additionally, a lot of titles will see both PC and console releases.

    For a game that is intended to see only a PC release, my guess is that that might affect the system requirements of the game.

    For games that see console releases, it manifests as questions like “will fewer people have consoles?”, because current-gen consoles are very unlikely to change spec, just price. “Is the PlayStation 6 going to be postponed?” is a big deal if you were going to release a game for that hardware.





  • As it currently exists on other platforms, Gaming Copilot lets you ask guide-like questions about the game you’re currently playing. Microsoft’s official site offers an example question like “Can you remind me what materials I need to craft a sword in Minecraft?”

    I haven’t used consoles for a few generations, but historically, switching between a game and a Web browser on a console wasn’t all that great, and text entry wasn’t all that great. I dunno if things have improved, but it was definitely a pain in the neck to refer to a website in-game historically.

    On Linux with Wayland, I swap between fullscreen desktops when playing games, and often have a Web browser with information relevant to the game on another desktop. If it helps enable some approximation of a workflow like that for console players, that doesn’t sound unreasonable.

    There are other objections I’d have, like not really wanting someone logging what my voice sounds like or giving Microsoft even more data on me to profile with via my searches. But it sounds to me like the basic functionality has a point.



  • What makes this worse is that git servers are the most pathologically vulnerable to the onslaught of doom from modern internet scrapers because remember, they click on every link on every page.

    The especially disappointing thing is that, for the specific case that Xe was running into, a better-written scraper could just recognize that this is a public git repository and just git clone the thing and get all the useful code without the overhead. Like, it’s not even “this scraper is scraping data that I don’t want it to have”, but “this scraper is too dumb to just scrape the thing efficiently and is blowing both the scraper’s resources and the server’s resources downloading innumerable redundant copies of the data”.

    It’s probably just as well, since the protection is relevant for other websites, and he probably wouldn’t have done it if he hadn’t been getting his git repo hammered, but…

    EDIT: Plus, I bet that the scraper was requesting a ton of files at once from the server, since he said that it was unusable. Like, you have a zillion servers to parallelize requests over. You could write a scraper that requests one file at a time per server, which is common courtesy, and you’d still be bandwidth-constrained if you’re schlorping up the whole Internet. Xe probably wouldn’t have even noticed.
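    A sketch of that “one request per host at a time” courtesy, as a hypothetical helper (no real HTTP here; it just orders URLs so fetches rotate across hosts, one per host per round):

```python
from collections import defaultdict, deque
from urllib.parse import urlparse

def polite_order(urls):
    """Order URLs so each round of fetches hits every host at most once."""
    by_host = defaultdict(deque)
    for url in urls:
        by_host[urlparse(url).netloc].append(url)
    order = []
    while any(by_host.values()):
        for queue in by_host.values():  # one URL per host per round
            if queue:
                order.append(queue.popleft())
    return order

# polite_order(["http://a/1", "http://a/2", "http://b/1"]) fetches a/1,
# then b/1, then a/2, rather than hitting host "a" twice back-to-back.
```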


  • https://en.wikipedia.org/wiki/National_Helium_Reserve

    The National Helium Reserve, also known as the Federal Helium Reserve, was a strategic reserve of the United States, which once held over 1 billion cubic meters (about 170,000,000 kg)[a] of helium gas.

    The Bureau of Land Management (BLM) transferred the reserve to the General Services Administration (GSA) as surplus property, but a 2022 auction[10] failed to finalize a sale.[11] On June 22, 2023, the GSA announced a new auction of the facilities and remaining helium.[12] The auction of the last helium assets was due to take place in November, 2023.[13] Though the last of the Cliffside reserve was to be sold by November 2023, more natural gas was discovered at the site than was previously known, and the Bureau of Land Management extended the auction to January 25, 2024 to allow for increased bids.[14] In 2024 the remaining reserve was sold to the highest bidder, Messer Group.[15]

    Arguably not the best timing on that.