I’m the administrator of kbin.life, a general purpose/tech orientated kbin instance.

  • 0 Posts
  • 116 Comments
Joined 2 years ago
Cake day: June 29th, 2023




  • We do run .deb/.rpm files from random websites though.

    In general, Linux sites offering deb/rpm/etc. files will usually publish hashes for the genuine versions so you can verify your download. That doesn’t help, of course, if the actual author is malicious.
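
    As a minimal sketch of what that verification amounts to (the file contents and digest here are made up; a real site would publish the expected SHA-256 next to the download, often GPG-signed as well):

    ```python
    import hashlib

    # Hypothetical "published" digest, standing in for the hash a site lists
    # alongside its .deb/.rpm download.
    expected = hashlib.sha256(b"package contents").hexdigest()

    def verify(data: bytes, expected_hex: str) -> bool:
        """Check that a downloaded file matches its published SHA-256 digest."""
        return hashlib.sha256(data).hexdigest() == expected_hex

    print(verify(b"package contents", expected))   # genuine download -> True
    print(verify(b"tampered contents", expected))  # modified file -> False
    ```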

    And you mentioned Flatpak too. AppImage is quite popular as well, and as far as I know that doesn’t have any built-in sandboxing at all.

    Even with sandboxing, these apps generally need access to save and load files in the host environment. Where are those permissions defined? Could a malicious actor, for example, grant their malicious AppImage/Flatpak more access? Genuine questions; I’ve never looked into how these work.
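
    For what it’s worth, in Flatpak’s case those permissions are declared by the packager at build time, in the manifest’s finish-args (the app ID below is hypothetical; the option names are real Flatpak ones). Users can inspect them with `flatpak info --show-permissions` and adjust them with `flatpak override`. So yes, a packager can ship an app that simply requests broad access, such as `--filesystem=host`:

    ```json
    {
      "app-id": "org.example.NotMalware",
      "finish-args": [
        "--share=network",
        "--socket=x11",
        "--filesystem=host"
      ]
    }
    ```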



  • r00ty@kbin.life to Linux@lemmy.world · Linux Antivirus? · 11 days ago

    I think there are a few aspects to this whole subject.

    First of all, for a long time people have thought Linux was not a target for malware. I would say it has been a target, and has been for decades. I recall that in the late 90s a Linux server at work was attacked: script kiddies in Brazil installed a rootkit, an IRC trojan and an attack kit. The nearest defensible claim is that desktop users aren’t usually a target, which is mostly true. But with the share of desktop installs hitting a recent high, we should expect that to change.

    Second, I think most Windows antivirus products (including the built-in one) do some actively useful things. Most of those aren’t relevant on Linux (we generally don’t run setup.exe from random websites). However! Here’s where things get interesting: the rise of Flatpak and other containerised applications. These, I would say, are very similar to setup.exe, and it would be trivial to embed malware in such a file. A Linux virus scanner could be checking these. We’ve also seen direct attacks on distro repositories lately, and I don’t expect that to slow down. We are most certainly a target now.

    Third, the other reason most Linux users don’t use virus scanners is that they’re usually technical people who would (usually) recognise something was wrong and investigate and spot the malware. I would say two things are changing here. Simpler-to-install distros are bringing less technical people to Linux, and the number of processes running on a desktop machine doing effectively nothing is way higher than it used to be, so even technical people can be caught off guard. Also, a well-made rootkit can hide all of these clues.

    So I would say there’s a really good space for a well-made virus scanner/antivirus now. It is probably the right time for it.
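
    To illustrate the kind of check such a scanner could run on downloaded files, here’s a toy signature-based scan in Python, matching files against known-bad SHA-256 digests. The "signature database" here is made up, and real scanners (ClamAV, for instance) use far richer signatures than whole-file hashes, but this is the same idea in miniature:

    ```python
    import hashlib
    from pathlib import Path

    # Hypothetical signature database: SHA-256 digests of known-bad files.
    KNOWN_BAD = {
        hashlib.sha256(b"malicious payload").hexdigest(),
    }

    def scan(root: Path) -> list[Path]:
        """Return files under `root` whose digest matches a known-bad signature."""
        hits = []
        for f in root.rglob("*"):
            if f.is_file():
                digest = hashlib.sha256(f.read_bytes()).hexdigest()
                if digest in KNOWN_BAD:
                    hits.append(f)
        return hits
    ```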




  • I think my question on all this would be whether this would ultimately cause problems in terms of data integrity.

    Currently, most amplifiers for digital signals capture the information in the light, strip off the modulation to get to the raw data, and then re-modulate it using a new emitter.

    The advantage of doing this over just amplifying the original light signal is the same reason switches/routers are store-and-forward (or at least decode to binary and re-modulate): when you decode the data from the modulated signal and then reproduce it, you remove any noise that was present and emit a clean signal again.

    If you just amplify light (or electrical) signals “as-is”, you generally add noise each time, reducing the SNR a small amount. After enough repetitions the signal becomes unrecoverable.

    So I guess my question is: does this process have the same ultimate limit on how many times you can re-transmit the signal without degradation?
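
    As a toy illustration of the difference (nothing like real fibre physics; just additive Gaussian noise per span), compare repeatedly amplifying an analog level against regenerating the bits at every hop:

    ```python
    import random

    random.seed(0)

    def transmit(bits, hops, noise=0.08, regenerate=False):
        """Send 0/1 symbol levels through `hops` spans of additive noise."""
        levels = [float(b) for b in bits]
        for _ in range(hops):
            # each span adds noise; the amplifier restores power but keeps the noise
            levels = [v + random.gauss(0, noise) for v in levels]
            if regenerate:
                # store-and-forward: decide each bit, re-emit a clean symbol
                levels = [1.0 if v > 0.5 else 0.0 for v in levels]
        return levels

    bits = [0, 1, 1, 0, 1] * 20

    def errors(out):
        return sum((v > 0.5) != bool(b) for v, b in zip(out, bits))

    print(errors(transmit(bits, hops=40)))                   # analog: noise accumulates
    print(errors(transmit(bits, hops=40, regenerate=True)))  # regenerated: stays clean
    ```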


    Pretty sure this was made clear in the article, but I’ll outline the little I know on the subject as a complete layman.

    Currently we can use quantum effects to create single runs of fibre that cannot be intercepted undetected. That is, if the data is intercepted by any known means, the receiver will be able to detect it.

    The shortcoming of this method is that when you need to amplify the signal, that’s generally a “store and forward” operation, which would also break this system’s detection. You could, I guess, perform the same operation wherever the signal is amplified, but each amplifier is then another point where monitoring could happen. If you want one trusted sender, one trusted receiver and nothing in between, that’s a problem.

    What this article is saying is that they have found a way to amplify the signal without ever “reading” it, so the data still shows as “unseen” (for want of a better word). This should allow “secure” (I guess?) fibre runs of greater distances in the future.

    Now the article does go into some detail about how and why this works. But as for the basic question of why it’s a good and useful thing, that’s pretty much all you need to know.










    Yep, same. Well, I actually remember first finding the best ways to copy a game on tape error-free. Some without protection you could just load and save back to tape for a digital reproduction (which also allowed tape-to-disk conversion). Those with non-destructive copy protection could kinda be copied too, if you knew a little Z80 ASM. For others, you needed to copy tape-to-tape and hope the quality turned out OK.

    But yes, then bringing your box of copied disks (Amiga in my case) into school and swapping with your friends was the way to go.