Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 17 Posts
  • 907 Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • My guess — without trying to dig up statistics — is that the single component most-likely to fail in an old PC is gonna be rotational hard drives. Virtually all of my rotational drives have eventually died, aside from a few that were just so small and taking up space where I could mount other things that I no longer bothered using them.

    I’ve seen fans die (not necessarily completely wedge up, but have the bearings go and become increasingly-obnoxious in sound).

    And those are basically the only mechanical components in a computer.

    Behind that, there’s input devices with keyswitches wearing out, but unless you’re using a laptop, replacing the input device is just unplugging the old one and plugging in a new one.

    I’m not gonna say that motherboards don’t fail, but I can’t immediately think of something that would die. Decades back, I remember that there was a spate of bad capacitors that made their way to a bunch of motherboards and would eventually fail, but I haven’t seen anything like that recently.

    searches

    Looks like it was 1999–2007:

    https://en.wikipedia.org/wiki/Capacitor_plague

    The capacitor plague was a problem related to a higher-than-expected failure rate of non-solid aluminium electrolytic capacitors between 1999 and 2007, especially those from some Taiwanese manufacturers,[1][2] due to faulty electrolyte composition that caused corrosion accompanied by gas generation; this often resulted in rupturing of the case of the capacitor from the build-up of pressure.

    High failure rates occurred in many well-known brands of electronics, and were particularly evident in motherboards, video cards, and power supplies of personal computers.

    A 2003 article in The Independent claimed that the cause of the faulty capacitors was due to a mis-copied formula. In 2001, a scientist working in the Rubycon Corporation in Japan stole a mis-copied formula for capacitors’ electrolytes. He then took the faulty formula to the Luminous Town Electric company in China, where he had previously been employed. In the same year, the scientist’s staff left China, stealing again the mis-copied formula and moving to Taiwan, where they created their own company, producing capacitors and propagating even more of this faulty formula of capacitor electrolytes.[3]

    Those would probably be from the DDR/DDR2 era, though.

    I do think that it’s probably possible that some motherboard components might age out. Like, people may want to use newer versions of radio stuff, like WiFi or Bluetooth. You can maybe do that via USB, but the on-motherboard stuff might become more of a liability than the CPU or something.

    I don’t think that I’ve ever personally had other computer components just up and fail other than the 13th and 14th gen Intel CPUs that internally destroyed themselves. It’s always been non-solid-state stuff, things with moving parts, that fail for me. I mean, I’ve damaged solid-state components myself via things that I’ve done, but it’s always damage that I incurred.

    thinks

    Oh, CMOS batteries eventually fail, but they’re usually — not always — mounted on motherboards with holders that permit replacement. I’ve had to replace those.

    I did have a headphones amplifier that was attached to my computer where some solder joints got a bad connection and I had to open it and resolder it, but I don’t know if I’d call that a “computer component” just because it was plugged into a computer.

    thinks more

    I did have the power supply used for a fluorescent backlight in a laptop display start to fail once. But, honestly, my experience has been that unless you actively go in and damage something, most solid state parts will just keep on trucking.


  • I also kind of think that the strongest argument for console gaming is competitive multiplayer, not single player.

    The fact that the consoles are closed and locked down inherently provides resistance to cheating and such, where the open PC world tries to (poorly) replicate a closed environment via kernel anti-cheat stuff. The console world having (well, more-or-less) one option when it comes to hardware means that everyone playing against each other has a fairly-level playing field — same input hardware, and people don’t get an edge from having fancier rendering hardware.

    For single-player gaming, those console strengths become weaknesses — for single-player games, it’s preferable for the player to be able to do things like freely mod games, upgrade hardware to get fancier graphics, provide a lot of options as to what input stuff to use, etc. It doesn’t hurt anyone else for me to have the game running however I want, so I should be able to do so. On the PC, a player gets to enjoy all that.

    If I were a console vendor and I were worried about the PC as a competing platform, I’d think that I’d try to emphasize my competitive multiplayer games, not single-player games.


  • The real problem with this sort of thing is that there’s no legal way to avoid it. If you’re operating a motor vehicle on public roads, you need to have a plate visible. You can’t obscure it.

    The laws requiring that visibility were made in an era when it wasn’t possible for someone like Flock to enable anyone who can aim a camera at a road to mass-log, aggregate, and data-mine the movement data it captures.

    The only real technical solution would be to back out the laws requiring license plates to be visible (and it wouldn’t be perfect, since Flock will still look for identifying oddities on a vehicle and try to log that too, like collision damage). But if you do that, then you lose an important tool for dealing with motor vehicle theft and finding vehicles involved in crimes.

    And there aren’t restrictions on selling or doing whatever companies want with the data. Or with data that they get from facial recognition/gait data in the future, or that sort of thing.

    My own personal preference would be for ALPRs to be generally illegal, outside of maybe some areas where logging is normally done by the government, like at border crossings. That’d be hard to enforce — someone could always run a rogue ALPR and it’d be hard to find — but it’d probably keep the scale down and avoid the mass deployment that makes the surveillance omnipresent.

    And I think that it’s worth remembering that even if you are comfortable with, say, Flock’s policy on dealing with data, there’s no guarantee that they aren’t compromised — a lot of very sensitive databases have been compromised in the past.

    In the past, technical limitations permitted a certain level of privacy in society. It just wasn’t technically possible to build mass surveillance at scale, so it didn’t happen. But…as those technical barriers that some of us just took for granted go away, I think it’s worth asking whether we want to engineer in legislative barriers, to ensure that a certain amount of privacy is provided to members of society.


  • Yeah, honestly, if it becomes enough of an issue, maybe eBay and similar should create separate sections for machines with memory and those without. I mean, there are reasons people would want to get a system without memory too, especially if one’s looking for other parts, but I do totally get that it’s super-obnoxious if there isn’t a way to filter those out and one is looking for one with memory.

    checks

    It doesn’t look like eBay has a “0 GB” memory category, annoyingly enough, but they do have a “Not specified” category with a ton of listings. That’s not exactly the same thing, since filtering out “Not specified” will also exclude listings that simply don’t state how much memory they have, but I’d guess that’d get you most of the way there, and I do see people clearly listing machines with no memory in that category.

    EDIT: Honestly, the rate of mis-classified listings there by users is pretty bad, even aside from eBay not providing a “0 GB” category. I was very surprised to see that there were a bunch of 512 GB listings. Looks like that’s essentially all people selling machines with 512 GB SSDs and choosing the wrong option.


  • I don’t know of a pre-wrapped utility to do that, but assuming that this is a Linux system, here’s a simple bash script that’d do it.

    #!/bin/bash
    
    # Set this.  Path to a directory that will retain a history of a list
    # of your files.  You probably don't actually want this in /tmp, or
    # it'll be wiped on reboot.
    
    file_list_location=/tmp/storage-history
    
    # Set this.  Path to the location with the files that you want to monitor.
    
    path_to_monitor=path-to-monitor
    
    # If the file list location doesn't yet exist, create it.
    if [[ ! -d "$file_list_location" ]]; then
        mkdir -p "$file_list_location"
        git -C "$file_list_location" init -b master
    fi
    
    # In case someone's checked out things at a different time; the error on a
    # brand-new repo (where master doesn't exist until the first commit) is harmless.
    git -C "$file_list_location" checkout master 2>/dev/null || true
    find "$path_to_monitor" | sort > "$file_list_location/files.txt"
    git -C "$file_list_location" add files.txt
    # Does nothing if the list hasn't changed since the last run.
    git -C "$file_list_location" commit -m "Updated file list for $(date)"
    

    That’ll drop a text file at /tmp/storage-history/files.txt with a list of the files at that location, and create a git repo at /tmp/storage-history that will contain a history of that file.

    When your drive array kerplodes or something, your files.txt file will probably become empty if the mount goes away, but you’ll have a git repository containing a full history of your list of files, so you can go back to a list of the files there as they existed at any historical date.

    Run that script nightly out of your crontab or something ($ crontab -e to edit your crontab).
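    For example, a crontab line for a nightly 3 AM run might look like this (the script path is just a placeholder for wherever you save it):

```shell
# Edit with `crontab -e`.  Fields: minute hour day-of-month month day-of-week.
# Run the file-list snapshot script nightly at 03:00.
0 3 * * * /home/you/bin/file-list-snapshot.sh
```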

    As the script says, you need to choose a file_list_location (not /tmp, since that’ll be wiped on reboot), and set path_to_monitor to wherever the tree of files is that you want to keep track of (like, /mnt/file_array or whatever).

    You could save a bit of space by adding a line at the end to remove the current files.txt after generating the commit, if you want. The next run will just regenerate files.txt anyway, and you can use git to regenerate a copy of the file for any historical day you want. If you’re not familiar with git: $ git log to find the hashref for a given day, then $ git checkout <hashref> to move to where things were on that day.

    EDIT: Moved the git checkout up.



  • There is existing DDR4 in existing machines that can be scavenged and that would otherwise probably just be thrown out. I understand that secondhand memory was an industry even before the surge; I remember reading a recent article about a California company that would strip servers of old DIMMs and sell them, mostly to China. The CEO, interviewed for the piece, said that sales had surged recently.

    searches

    I don’t think that these guys are them; I think this is a different California company doing basically the same thing, but it illustrates the point:

    https://www.ramexchange.net/

    1GB–128 GB modules (DDR2 / DDR3 / DDR4 / DDR5)

    At Ram Exchange, we supply new, used, and refurbished RAM for a wide range of applications. Whether you’re upgrading a personal computer, laptop, data center, or need on-board ICs for custom projects, our team is here to help.

    We also provide IT Asset Disposition (ITAD) services, dedicated to helping businesses securely and responsibly manage their end-of-life IT assets. We offer a comprehensive suite of tailored services—including certified data destruction, secure electronics recycling, remarketing, and asset redeployment—transforming IT disposal into a seamless process that maximizes value and environmental responsibility.

    Large-Scale Purchasing Power

    We buy excess memory in bulk from around the globe, including from publicly traded companies and Fortune 500 enterprises. With our extensive purchasing capabilities, no quantity is too large for us to handle.

    I mean, I’ve thrown out old DIMMs. Wasn’t worth my time hassling with trying to resell them. But if they’re worth enough due to price increases, it’ll increase the number of companies who are willing to go to the effort to recoup some of the value of the DIMMs. Companies can buy them, re-certify them, and sell them.

    Obviously, that’s not an unlimited supply, but the window in which it’s of increased interest is probably only something like three years, so it doesn’t have to last forever (or even fully offset the shortage to make sense to do, just partially-mitigate it).



  • I’m not particularly enamored of publicly-owned utilities, but here’s one data point in their favor — Santa Clara runs its own power utility, and its rates are considerably lower than PG&E’s.

    https://en.wikipedia.org/wiki/Silicon_Valley_Power

    Silicon Valley Power (SVP) is a not-for-profit municipal electric utility owned and operated by the City of Santa Clara, California, United States. SVP provides electricity service to approximately 55,116 residential and business customers, including large corporations such as Intel, Applied Materials, Owens Corning and NVIDIA. SVP also owns and maintains a dark fiber network named SVP Fiber Enterprise.

    searches

    Well, this is SVP’s site, so not really an objective source, but I think it makes the point, and I’ve read about it elsewhere.

    https://www.siliconvalleypower.com/residents/rates-and-fees

    SVP D-1 average residential rate is $0.182/kWh.

    PG&E E-1 average residential rate is $0.422/kWh.

    $.18/kWh isn’t amazing by US standards, but it’s much closer to typical US rates than California as a whole is.



  • Is this worth the effort?

    In terms of electricity cost?

    I wouldn’t do it myself.

    If you want to know whether it’s going to save money, you want to see how much power it uses — you can use a wattmeter, or look up the maximum amount on the device ratings to get an upper end. Look up how much you’re paying per kWh in electricity. Price the hardware. Put a price on your labor. Then you can get an estimate.

    My guess, without having any of those numbers, is that it probably isn’t.



  • tal@lemmy.today to Programming@programming.dev: Keyboard latency

    Another thing to note about gaming keyboards is that they often advertise “n-key rollover” (the ability to have n simultaneous keys pressed at once; for many key combinations, typical keyboards will only let you press two keys at once, excluding modifier keys). Although not generally tested here, I tried a “Razer DeathStalker Expert Gaming Keyboard” that advertises “Anti-ghosting capability for up to 10 simultaneous key presses”. The Razer gaming keyboard did not have this capability in any useful manner; many combinations of three keys didn’t work. Their advertising claim could, I suppose, technically be true in that 3 in some cases could be “up to 10”, but like gaming keyboards claiming to have lower latency due to 1000 Hz polling, the claim is highly misleading at best.

    That being said, the real issue was keyboards that used matrix encoders, where all keys are represented in a matrix, addressed by one line going high on the X axis and one line going high on the Y axis. I understand that this is cheaper, probably because it requires running fewer traces from the keys to the controller than running one per key. It looks something like:

         X1    X2    X3
    Y1   “Q”   “W”   “E”
    Y2   “R”   “T”   “Y”
    Y3   “U”   “I”   “O”

    That’s just a 3x3 matrix, as an example. So if I press “Q” on my keyboard, the X1 and Y1 line will go high. If I keep it pressed and then additionally press the “W” key, the Y1 line, which is already high, will stay high. The X2 line will then also go high. The controller can detect the keypress, since a new line has gone high.

    If I keep both keys pressed and then additionally press the “R” key, then the X1 line is already high due to the “Q” key being down, and will stay high. The “Y2” line will go high. The controller can detect the keypress.

    However, if I then press the “T” key, it can’t be detected. Pressing it would normally send the X2 line and Y2 line high, but both are already high due to existing keys being pressed.

    In practice, keyboard manufacturers try to lay out their matrix to try to minimize these collisions, but there’s only so much they can do with a matrix encoder. They’ll also normally run independent lines for modifier keys.

    A controller using a matrix encoding can always detect at least two keys being simultaneously pressed, but may not be able to detect a third.

    Matrix encoders aren’t really an issue when typing, but some games do require you to press more than two non-modifier keys at once. For example, it’s common to use the “WASD” keys for movement, and moving diagonally requires holding two of those. If someone is playing a game that requires pressing another key or two at the same time, those might collide.

    As I recall, USB keyboards — at least with the standard HID boot protocol — send the full state of the keyboard in each report rather than per-key events, and that report format only has room for six simultaneously-pressed non-modifier keys. That’s why many USB keyboards top out at six-key rollover, and why you’ll see some companies selling gaming keyboards with a PS/2 option: that protocol sends per-key make/break events, so there’s no such cap. (It’s also why, for those of us that have used PS/2 keyboards and have experienced this, it’s possible to get a key on a PS/2 keyboard “stuck” down until it’s pressed again if the OS, for whatever reason, misses a key-up event.) A USB keyboard can work around the limit with a custom report descriptor, but in general, one doesn’t really need full n-key rollover for playing games, just the ability to detect keys up to that six-key limit without collisions, which mostly means avoiding a matrix encoder. We only have ten fingers, and I don’t think that there are any games that require even something like six keys to be down at once.

    Obviously, in the case the author hit with the Razer keyboard, it wasn’t able to do that. I’m not sure what they’re doing (unless they’re simply completely fabricating their feature claim, which I assume that they wouldn’t). They might be using a larger matrix and sparsely-populating it, though I’m guessing there.


  • You can definitely feel 100 ms in input response time. That’s about what an analog modem’s latency would be. I can tell you, that’s very much noticeable on a telnet or ssh connection when you’re typing (though to be fair, what matters there is really round-trip time, so one should probably double that).

    On that note, if someone hasn’t run into it, mosh uses UDP and adaptive local echo to shave down perceived latency for terminal connections, and might be worth looking into if you often do remote work in a terminal over a WAN. It uses ssh to bootstrap auth, if you’re concerned about a less-widely-used program doing its own network authentication (which I remember I was). I find that it makes things more pleasant, and I also like some of its other features, like auto-reconnecting after an arbitrary length of time: one can just close a laptop and reopen it a week later and the terminals still function. Tmux and GNU screen can do something similar — and in fact, I think that mosh and tmux pair well — but they don’t do quite the same thing, as they (a) require manual re-establishment of the connection and are (b) aimed at letting one reconnect from different clients. Mosh also displays a notice in the terminal during temporary network unavailability until it’s re-established communication, so the user isn’t left staring at the screen wondering whether the software on the remote machine is being unresponsive or whether it’s a network issue.


  • tal@lemmy.today to Programming@programming.dev: Keyboard latency

    That’s…actually a substantial amount more latency than I’d expected. Not exactly the same thing, but for perspective, while I haven’t played multiplayer competitive FPSes for many years, back when I did, the limit of what I could really “feel” when it came to network latency was around 10 milliseconds. The latency the keyboards are adding, if it’s as high as measured, is a really substantial amount of delay to be adding if you’re talking video games.

    considers

    Note that depending upon the keyswitch mechanism, the controller does need to debounce the thing to avoid duplicate keypresses. I’ve used a keyboard before with a controller that didn’t adequately debounce, and it was extremely obnoxious — occasionally would get duplicate keypresses, and I had to filter it out at the level of my computer.

    However, if you look at gamepad button latency, they also need to worry about bounce, and their latency is much lower:

    https://gamepadla.com/

    You can get gamepads with sub-2-millisecond latency on USB.

    EDIT: Note that one thing that I learned from following !ergomechkeyboards@lemmy.world is that there are some semi-standardized open-source firmwares for (fancy, expensive) microcontroller-based keyboards; I believe that QMK is popular. I don’t know how the latency on those microcontroller-based keyboards compares, but assuming that there aren’t any fundamental constraints imposed by the other hardware on the keyboard, it might be possible to shave some time off by tweaking the firmware.

    I believe that at least some keyswitch mechanisms become more prone to bouncing over time, but if so, it might be possible for a microcontroller to detect bounces and tune the wait time to the mechanism on a given keyboard to adapt to mechanism wear.