  • Laser@feddit.orgtoSelfhosted@lemmy.worldHow to selfhost with a VPN

    > Client data absolutely is encrypted in TLS. You might be thinking of a few fields sent in the clear, like SNI, but generally, it’s all encrypted.

    I never said it isn’t, but it’s done using symmetric crypto, not public key (asymmetric) crypto.

    > Asymmetric crypto is used to encrypt a symmetric key, which is used for encrypting everything else (for the performance reasons you mentioned).

    Not anymore; that was only true for RSA key exchange, which is deprecated for TLS 1.2 (“Clients MUST NOT offer and servers MUST NOT select RSA cipher suites”) and was removed entirely in TLS 1.3. All current suites use ephemeral Diffie-Hellman, usually over elliptic curves, for key agreement (also called key exchange, though I find that term somewhat misleading).
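    The division of labor can be sketched with textbook finite-field Diffie-Hellman: the asymmetric part only agrees on a secret, and a symmetric key is derived from it for the actual data. This is a toy sketch with made-up tiny parameters, not TLS’s real groups or wire format (TLS 1.3 uses curves like X25519):

    ```python
    # Toy sketch of ephemeral Diffie-Hellman key agreement (stdlib only).
    # p and g are textbook toy values, NOT secure parameters.
    import hashlib
    import secrets

    p, g = 23, 5

    # Each side picks a fresh (ephemeral) private exponent per connection.
    a = secrets.randbelow(p - 2) + 1  # client
    b = secrets.randbelow(p - 2) + 1  # server

    A = pow(g, a, p)  # client's public share, sent over the wire
    B = pow(g, b, p)  # server's public share, sent over the wire

    # Both sides arrive at the same shared secret without it ever being
    # transmitted; the symmetric key for bulk data is derived from it.
    client_secret = pow(B, a, p)
    server_secret = pow(A, b, p)
    assert client_secret == server_secret

    sym_key = hashlib.sha256(str(client_secret).encode()).digest()
    ```

    Ephemeral here means `a` and `b` are thrown away after the handshake, which is what gives forward secrecy.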

    > As long as that key was transferred securely and uses a good mode like CBC, an attacker ain’t messing with what’s in there.

    First, CBC isn’t a good mode, for multiple reasons: one is performance on the encrypting side (encryption can’t be parallelized), but the other is exactly the property you’re talking about: it is in fact malleable, and as such insecure without authentication (though you can add a CMAC, as long as you use a different key). See https://pdf-insecurity.org/encryption/cbc-malleability.html for one example where this exact property is exploited (“Any document format using CBC for encryption is potentially vulnerable to CBC gadgets if a known plaintext is a given, and no integrity protection is applied to the ciphertext.”)
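    The malleability is easy to show: CBC decryption computes P₁ = D(C₁) ⊕ IV, so flipping a bit in the IV flips the same bit in the first plaintext block. Below is a stdlib-only toy sketch; the 4-round SHA-256 Feistel network is a made-up stand-in for a real block cipher like AES, but the CBC layer and the attack are the real thing:

    ```python
    # Toy demo of CBC malleability (stdlib only). A 4-round Feistel network
    # built from SHA-256 stands in for a real block cipher.
    import hashlib

    BLOCK = 16

    def _xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def _f(key, half, rnd):
        return hashlib.sha256(key + bytes([rnd]) + half).digest()[:BLOCK // 2]

    def encrypt_block(key, block):
        l, r = block[:8], block[8:]
        for i in range(4):
            l, r = r, _xor(l, _f(key, r, i))
        return l + r

    def decrypt_block(key, block):
        l, r = block[:8], block[8:]
        for i in reversed(range(4)):
            l, r = _xor(r, _f(key, l, i)), l
        return l + r

    def cbc_encrypt(key, iv, pt):  # pt length must be a multiple of BLOCK
        out, prev = [], iv
        for i in range(0, len(pt), BLOCK):
            prev = encrypt_block(key, _xor(pt[i:i + BLOCK], prev))
            out.append(prev)
        return b"".join(out)

    def cbc_decrypt(key, iv, ct):
        out, prev = [], iv
        for i in range(0, len(ct), BLOCK):
            out.append(_xor(decrypt_block(key, ct[i:i + BLOCK]), prev))
            prev = ct[i:i + BLOCK]
        return b"".join(out)

    key, iv = b"shared secret", bytes(BLOCK)
    ct = cbc_encrypt(key, iv, b"PAY BOB   $0100.")

    # Without a MAC, an attacker who guesses the plaintext can flip
    # arbitrary bits in block 1 via the IV -- no key required:
    delta = _xor(b"PAY BOB   $0100.", b"PAY BOB   $9100.")
    forged = cbc_decrypt(key, _xor(iv, delta), ct)
    assert forged == b"PAY BOB   $9100."
    ```

    The same trick works on later blocks by flipping bits in C₍ᵢ₋₁₎, at the cost of garbling block i-1, which is exactly the “CBC gadget” the linked paper exploits.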

    As I wrote in my comment, I was being a bit pedantic: what was stated was that encryption protects authenticity, and I explained that, while TLS as a whole protects all these aspects of data security, its encryption doesn’t cover authenticity.

    Anyhow, the point is rather moot, because I’m pretty sure they won’t get a certificate for the IP anyway.


  • > Public key crypto, properly implemented, does prevent MITM attacks.

    It does, but modern public key crypto doesn’t encrypt any client data (RSA key exchange was, to my knowledge, the only scheme that did). It’s also only used to verify certificates, while the topic was payload data (i.e. the site you want to view), which asymmetric crypto doesn’t touch for performance reasons.

    My post was not about “does TLS prevent undetected data manipulation” (it does), but rather whether it’s the encryption that is responsible for it (it’s not, unless you count AES-GCM’s built-in authentication under that umbrella term).



  • Laser@feddit.orgtoSelfhosted@lemmy.worldHow to selfhost with a VPN

    > Let’s Encrypt are rolling out IP-based certs, you may wanna follow its development. I’m not sure if it could be used for your forwarded VPN port, but it’d be nice anyhow

    It shouldn’t be possible, because you’re not actually the owner of the IP address. If any user could get a cert for it, they could impersonate any other.

    > I believe encryption helps prevent tampering the data between the server and user too. It should prevent for example, someone MITM the connection and injecting malicious content that tells the user to download malware

    No, encryption only protects the confidentiality of data. You need message authentication codes or authenticated encryption to make sure the message hasn’t been tampered with in transit. Stream ciphers in particular, like ChaCha (but also AES in counter mode), are susceptible to malleability attacks, which are super simple yet very dangerous.

    Edit: this post is a bit pedantic because any scheme that is relevant for LE certificates covers authenticity protection. But it’s not the encryption part of those schemes that is responsible.
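    The stream-cipher malleability mentioned above really is this simple: ciphertext is plaintext XOR keystream, so XORing a difference into the ciphertext XORs the same difference into the decrypted plaintext. A stdlib-only sketch (the SHA-256 keystream and the message are made up for illustration; the attack works identically on ChaCha or AES-CTR):

    ```python
    # Minimal stream-cipher malleability demo (stdlib only).
    import hashlib

    def stream_xor(key, nonce, data):
        ks, ctr = b"", 0
        while len(ks) < len(data):
            ks += hashlib.sha256(key + nonce + bytes([ctr])).digest()
            ctr += 1
        return bytes(a ^ b for a, b in zip(data, ks))

    key, nonce = b"k", b"n"
    ct = stream_xor(key, nonce, b"transfer $0001 to mallory")

    # The attacker guesses the plaintext layout and XORs in a difference
    # at bytes 9..13 -- no key needed, and the result decrypts cleanly:
    delta = bytes(a ^ b for a, b in zip(b"$0001", b"$9999"))
    evil = ct[:9] + bytes(a ^ b for a, b in zip(ct[9:14], delta)) + ct[14:]
    assert stream_xor(key, nonce, evil) == b"transfer $9999 to mallory"
    ```

    Unlike the CBC case, there’s no garbled block at all; without a MAC the forgery is undetectable.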


  • Good luck on the journey! What I meant is that over time, you’ll realize that what you did was probably not the most elegant way to do something; at least that’s my experience with my config. I started with a flake with an explicit config for each machine (basically multiple nixosConfigurations) and then turned it into a lib with functions that turn a set of hosts from JSON into an attribute set (kind of a simple inventory). My latest effort, still ongoing (cough), is splitting my NixOS modules off into a separate flake using flake-parts.

    I do understand you meant having the stuff that you need working; I just wanted to hint that the language is very powerful, and as such most configurations have room for improvement, as in learning to do things more efficiently or to do things that weren’t possible before.





  • While you might have a point somewhere, I’m not sure it applies in this particular case.

    PulseAudio was, or maybe still is (I actually don’t know), being developed, but you don’t just change a system’s architecture.

    > creating a new project is easy, and even getting that project into distros can be easier than evolving older projects.

    I think this downplays the achievements of PipeWire. Not only is it backwards-compatible, contrary to what you write later; but if such a project were easy, why aren’t more people and companies doing it?

    In my opinion, PipeWire took Linux systems from last place in multimedia to maybe even first. Remember capturing the screen or a window before? In fact, PipeWire was only extended to audio because the design proved itself so well, so it actually did evolve; just not from audio to better audio, but from video to video and audio. Saying that starting such a project is easy might be technically correct, but it doesn’t make a point.




  • > Privileged ports can be used by processes that are running without root permissions.

    I guess you mean unprivileged ports?

    > So if the sshd process would crash or stop for some other reason, any malicious user process could pretend to be the real ssh server without privilege escalation.

    Not really, except on the very first connection: impersonating the server requires access to the root-owned and otherwise inaccessible SSH host key. Without it, clients get the message a lot of people have probably seen after reinstalling a system (something like “SOMEONE MIGHT BE DOING SOMETHING VERY NASTY!”).
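    The mechanism can be sketched in a few lines (stdlib only; the key blobs and hostname are placeholders, not real keys). OpenSSH fingerprints are the unpadded base64 of the SHA-256 of the public key blob, and the client compares the presented key against what it stored in known_hosts on first use:

    ```python
    # Sketch of trust-on-first-use host key checking, as an SSH client does it.
    import base64
    import hashlib

    def fingerprint(pubkey_blob: bytes) -> str:
        digest = hashlib.sha256(pubkey_blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # First connection: the client records the host key's fingerprint.
    real_key = b"placeholder blob of the real, root-owned host key"
    known_hosts = {"example.org": fingerprint(real_key)}

    # A malicious process without root can't read the real host key, so it
    # must present its own -- which no longer matches the stored entry:
    impostor_key = b"placeholder blob of an attacker-generated key"
    if known_hosts["example.org"] != fingerprint(impostor_key):
        print("WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!")
    ```

    So the window for impersonation is only the very first connection (or a user who blindly deletes the known_hosts entry after the warning).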



  • The discussion was implicitly about the changes brought by Vulkan and DXVK, which enabled playing Windows Direct3D (this part is important) 11, and later also 9, games without a performance penalty. You could previously play Windows Direct3D 9 titles using Gallium Nine if you had an AMD card, though that was a bit iffy.

    > WoW mostly.

    That’s OpenGL, so not affected.

    > Some StarCraft.

    Not 3D even.

    > Minecraft.

    Neither Windows nor Direct3D, but Java with OpenGL.

    True, if all the games you played were OpenGL-accelerated, these changes didn’t matter. But about 95% of games on the market weren’t.


  • The thing is, back then, for the stuff to work on Debian, you needed to

    • compile your own newer kernel
    • compile the new Mesa that depended on that kernel

    and with how frequent updates were, this was something you’d probably do multiple times per month. At that point, why bother with Debian when you need to compile all the packages yourself? Remember, that was a gaming machine… so why bother with Debian and spend hours each month when, with Arch, it was just a `pacman -Syu` followed by a reboot and you could try out all that fancy new stuff?


  • Yeah, but the post I replied to said “since 1998”. That is prior to bookworm.

    Personally, I don’t care for it too much. Every time I try it (which is rare), something annoys me: “DO NOT EDIT THIS FILE” markers everywhere, deviations from upstream that render the official documentation less valuable. With Arch (which I don’t use anymore), you can be pretty sure that what’s on your machine is what’s currently released by upstream, both in version and in the software itself. Remember cdrkit? xscreensaver? The weak OpenSSH keys? Sure, these most notable examples are from long ago, but there were just so many issues over the course of my “career” that the distribution is somewhat burned for me. Also because all of this could have been easily avoided.

    Anyhow, use what you want, but it’s for sure not my favorite distro.



  • > Agreed. But he’s also an abrasive know-it-all. A modicum of social skills and respect goes a long way towards making others accept your pet projects.

    This isn’t what I get when reading bug reports he’s active in. Yeah, sometimes he asks whether something can’t be done another way, but he also seems very open to new ideas. I rather think this opinion of him is very selective: there are cases where he comes off as smug, but I never got the impression that’s the majority of cases.

    > I wasn’t talking about the protocol, I was talking about the implementation: PulseAudio is a crashy, unstable POS. I can’t count the number of hours this turd made me waste, until PipeWire came along.

    PipeWire for audio couldn’t exist nowadays without PulseAudio though, in fact it was originally created as “PulseAudio for Video”; Pulse exposed a lot of bugs in the lower levels of the Linux audio stack. And I do agree that PipeWire is better than PulseAudio. But it’s important to see it in the context of the time it was created in, and Linux audio back then was certainly different. OSS was actually something a significant amount of people used…