Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 20 Posts
  • 1.09K Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • Even if one agreed with him, he’s not actually doing, in this post, what he proposed there, which is linking to the source:

    Linking to the source should be the primary way of attribution.

    Like, setting aside the whole question of whether stripping out the artist’s name is reasonable, what we’re actually getting is the comic with no artist name or source link. @JohnnyEnzyme@piefed.social had to dig up the original by doing a reverse image search and linked to it himself.



  • tal@lemmy.today to Comic Strips@lemmy.world · Rorschach Test (edited, 6 hours ago)

    It looks like it’s a free image hosting service, and given that, plus @AntiBullyRanger@ani.social saying that they couldn’t reach it directly, my guess is that a number of people upload content there that violates someone’s requirements, and the host winds up blacklisted by folks of a censorious nature.

    EDIT: @over_clox@lemmy.world and @AntiBullyRanger@ani.social: A Lemmy instance can be set up to proxy images for remote sites. This has some privacy benefits (someone can’t harvest IP addresses of Lemmy users just by submitting images and waiting to see which IP addresses load them) and also the incidental benefit of bypassing restrictions like this, as long as your Lemmy home instance is accessible on the network that is blocking the image host. The home instance I use, lemmy.today, does this, and I’m sure that there are others. If blocks like this are common where you are, you might consider setting up a second account on a home instance that proxies images to work around them.

    https://lemmy.today/post/50406412 is this post on lemmy.today, for example.

    The link that my browser actually loads is https://lemmy.today/api/v3/image_proxy?url=https%3A%2F%2Fi.ibb.co%2FG3CVVyq2%2Fgghhhh.jpg
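
    Just to illustrate the shape of that proxied link, here’s a minimal Python sketch; it’s not Lemmy’s actual implementation, just a reconstruction of the endpoint and percent-encoding visible in the URL above:

    import urllib.parse

    # Build a proxied image URL with the same shape as the lemmy.today
    # link above: the remote URL is percent-encoded (including ":" and
    # "/") and passed as the "url" parameter to /api/v3/image_proxy.
    def proxy_url(home_instance, image_url):
        encoded = urllib.parse.quote(image_url, safe="")
        return f"https://{home_instance}/api/v3/image_proxy?url={encoded}"

    print(proxy_url("lemmy.today", "https://i.ibb.co/G3CVVyq2/gghhhh.jpg"))
    # https://lemmy.today/api/v3/image_proxy?url=https%3A%2F%2Fi.ibb.co%2FG3CVVyq2%2Fgghhhh.jpg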

    The downside is that your Lemmy home instance has to spend the extra bandwidth and storage space to serve the images, so it requires an admin able and willing to expend the server resources on it.


  • I assume so. Here’s a video of someone floating a boat (apparently in air) in it, and then sinking it by pouring cups of sulfur hexafluoride over it:

    https://www.youtube.com/watch?v=ee2NaYRnRGo

    If it avoids diffusing into air to the degree that you can scoop it up and pour it, I’d imagine that it’d pour out of one’s lungs the same way.

    But if you just want to get most of it out of your lungs (say, you’ve been breathing it and don’t want to asphyxiate), I imagine that exhaling as fully as you can, inhaling fresh air, and repeating that a few times would do a pretty good job, the way the Mythbusters video above did with the helium.
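
    To put rough numbers on that, here’s a back-of-the-envelope Python sketch; the lung volumes are textbook-ish figures that vary from person to person, and it assumes the gas mixes freely with incoming air rather than pooling:

    # Each full exhale leaves only the residual volume behind; a fresh
    # inhale then dilutes what remains. Assumed figures (illustrative):
    # ~1.2 L residual volume, ~6 L total lung capacity.
    residual_l = 1.2
    total_l = 6.0

    fraction = 1.0  # fraction of the original gas still in the lungs
    for breath in range(1, 4):
        fraction *= residual_l / total_l
        print(f"after breath {breath}: {fraction:.1%} of the gas remains")
    # after breath 1: 20.0%; breath 2: 4.0%; breath 3: 0.8%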



  • I’d guess that most industrial users of helium don’t consume it and could theoretically recover it from whatever process it’s involved in rather than just releasing it.

    EDIT: Hard drives are an exception, as apparently some ship helium-filled; there, the helium actually is consumed during manufacture.

    EDIT2: I’d also point out that in the long run, we probably do have to be more conservative with our helium supply. We get it from pockets in the earth, and it’s actually not all that common; it just happens that we go to a lot of effort to extract natural gas, which sometimes comes up alongside helium, so we get that supply as a byproduct. But because helium isn’t reactive, it doesn’t bond to anything; it stays in gas form, and when we let it go, it heads to near the top of our atmosphere and eventually gets lost to the solar wind. Many users who today just release it (because why not, the natural gas people will be providing more, and it’s cheaper that way) will probably need to capture what they’re using if we want helium to continue to be available.
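
    For a rough sense of why helium in particular leaks away, here’s a back-of-the-envelope Python sketch. It uses the common rule of thumb that a gas bleeds off over geologic time once its RMS thermal speed exceeds about a sixth of escape velocity; the ~1000 K exosphere temperature is an illustrative assumption, and real escape is a mix of mechanisms, including solar-wind stripping:

    import math

    K_B = 1.380649e-23   # Boltzmann constant, J/K
    AMU = 1.66054e-27    # atomic mass unit, kg
    T = 1000.0           # rough exosphere temperature, K (assumption)
    V_ESCAPE = 11_200.0  # Earth's escape velocity, m/s

    for name, mass_amu in (("helium", 4.0), ("nitrogen (N2)", 28.0)):
        v_rms = math.sqrt(3 * K_B * T / (mass_amu * AMU))
        escapes = v_rms > V_ESCAPE / 6
        print(f"{name}: v_rms ~ {v_rms:.0f} m/s, escapes: {escapes}")
    # helium: ~2500 m/s (above the ~1870 m/s threshold); N2: ~940 m/s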


  • tal@lemmy.today to Comic Strips@lemmy.world · Navy warship (1 day ago)

    https://en.wikipedia.org/wiki/Haze_gray_and_underway

    Haze gray is a paint color scheme used by USN warships to make the ships harder to see clearly.[1][2][3] The gray color reduces the contrast of the ships with the horizon, and reduces the vertical patterns in the ship’s appearance.[4] It is the color of USN combatant and auxiliary surface ships, in contrast to the dark gray or black color of submarines, the bright colors of ceremonial vessels and aircraft, or the white of hospital ships and some U.S. Coast Guard cutters.

    Note that Twonks is British, and the Brits typically use a slightly different gray with a more greenish cast. (Historically, IIRC, they varied paint color based on theater of operations, but I think that today “warship gray” acts more as a uniform indicating that something is a warship than as a way to reduce visibility; at actual combat ranges in 2026, visual detection probably isn’t going to matter much.)

    goes looking for their colors

    Apparently “weatherwork grey”.

    https://www.britmodeller.com/forums/index.php?/topic/235079311-modern-royal-navy-warship-grey/


    I doubt that there’s actually a substantial impact on battery cell production; there might be on rack-mountable batteries containing those cells. But setting that aside:

    Panasonic plans to expand lithium-ion cell

    Non-rechargeable AAA batteries are typically alkaline, and rechargeables are typically NiMH, not lithium-ion.

    EDIT: Looking at camelcamelcamel price history for a handful of rack-mount lithium-ion batteries on Amazon, prices are either unchanged or up very slightly. Could be Panasonic looking to get into the news, but it’s not clear to me that there’s a shortage of even rack-mount lithium-ion batteries.


  • I use “mono-9” in all my terminals, including for emacs. On my Debian trixie system, that maps to DejaVu Sans Mono in the fonts-dejavu-mono package.

    $ cat ~/.config/foot/foot.ini
    [main]
    font=mono-9
    $ fc-match mono-9
    DejaVuSansMono.ttf: "DejaVu Sans Mono" "Book"
    $ fc-list|grep DejaVuSansMono.ttf
    /usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf: DejaVu Sans Mono:style=Book
    $ dpkg -S /usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf
    fonts-dejavu-mono: /usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf
    $
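
    If you want the same lookup from a script, here’s a small Python sketch that just shells out to the fc-match tool shown above (it assumes the fontconfig CLI utilities are installed):

    import subprocess

    # Resolve a fontconfig pattern to a concrete font, exactly as the
    # interactive fc-match call above does.
    def resolve_font(pattern):
        result = subprocess.run(["fc-match", pattern],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    print(resolve_font("mono-9"))
    # DejaVuSansMono.ttf: "DejaVu Sans Mono" "Book"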
    

    https://en.wikipedia.org/wiki/DejaVu_fonts

    The DejaVu fonts are a superfamily of fonts designed for broad coverage of the Unicode Universal Character Set. The fonts are derived from Bitstream Vera (sans-serif) and Bitstream Charter (serif), two fonts released by Bitstream under a free license that allowed derivative works based upon them; the Vera and Charter families were limited mainly to the characters in the Basic Latin and Latin-1 Supplement portions of Unicode, roughly equivalent to ISO/IEC 8859-15, and Bitstream’s licensing terms allowed the fonts to be expanded upon without explicit authorization.

    The full project incorporates the Bitstream Vera license, an extended MIT License, which restricts naming of modified distributions and prohibits individual sale of the typefaces, although they may be embedded within a larger commercial software package (terms also found in the later Open Font License); to the extent that the DejaVu fonts’ changes can be separated from the original Bitstream Vera and Charter fonts, these changes have been deeded to the public domain.[1]



    There are some memory-latency benefits to putting memory on the same chip as the CPU, but to date, that’s largely been handled by adding cache memory to the CPU, and later multiple tiers of it, rather than by eliminating discrete memory.

    The first personal computer I used had 4kB of main memory.

    My current desktop has a CPU with 1MB of L1 cache, 16MB of L2 cache, 128MB of L3 cache, and then the system as a whole has 128GB of discrete main memory.

    Most of the time, the cache just does the right thing, and for software that is highly performance-sensitive, one might use a tool like Valgrind’s cachegrind to profile and optimize the critical bits of the software to minimize cache misses.
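
    As a toy illustration of how much the cache hierarchy matters, here’s a small Python sketch comparing streaming through memory against jumping around in it; the array size and timings are illustrative rather than tied to any particular CPU:

    import time
    import numpy as np

    # Sum the same ~256 MB of data twice: once in memory order, once in
    # a random order. The arithmetic is identical; the difference is
    # cache misses. Sized to spill even a large (128 MB) L3 cache.
    rng = np.random.default_rng(0)
    data = rng.random(32_000_000)  # 32M float64s, ~256 MB
    orders = {
        "sequential": np.arange(data.size),
        "random": rng.permutation(data.size),
    }
    for name, idx in orders.items():
        start = time.perf_counter()
        total = data[idx].sum()    # gather the elements in this order
        elapsed = time.perf_counter() - start
        print(f"{name}: {elapsed:.3f}s (sum={total:.1f})")
    # On most machines the random order is several times slower, purely
    # from cache behavior.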

    I could believe that one might, say, provide on-package memory that the OS is more aware of, giving it more control over the tiered storage, maybe restructuring the present system. But I’m more dubious that we’ll say “there’s no reason to have a tier of expandable, volatile storage off-CPU at all on desktops”.

    EDIT: That argument is mostly a technical one, but there’s another from a business standpoint. I expect PC builders have a pretty substantial business reason not to want to move to SoCs. Right now, PC builders can, to some degree, use price discrimination to convert consumer surplus to producer surplus: a consumer will typically pay disproportionately more for a computer with more memory, for example, when they purchase from a given vendor. If the system is instead sized at the CPU vendor, then the CPU vendor is going to do the same thing, probably more effectively, since there’s less competition in the CPU market, and it’ll be the PC builder watching money head over to the CPU vendor; they’ll pay a premium for high-end SoCs.

    In Apple’s case, that’s not a factor, because Apple has vertically-integrated production: they make their own CPUs, so Apple’s PC-builder side isn’t worried about Apple’s CPU side extracting money from it. But Dell or HP and the like don’t manufacture their own CPUs, and thus have a business incentive to maintain a modular system. The exception would be if the PC market as a whole transitioned to a small number of vertically-integrated businesses that look like Apple, with one or two giant PC makers who basically own their supply chain, but I haven’t heard of anything like that happening.