• 4 Posts
  • 479 Comments
Joined 1 year ago
Cake day: December 13th, 2024


  • That doesn’t sound right unless you’re running the tape faster than usual. Even high-quality reel-to-reel tape typically has a frequency response that tops out around 20 kHz at standard speeds. There’s also no reason to record frequencies much higher than that unless you’re trying to capture ultrasound. A 96 kHz sampling rate can record sound up to 48 kHz, and since even the best human hearing tops out around 20 kHz, there’s no reason to use it for music. You’d only use a sampling rate that high to record something not meant for human ears, like stress fractures, electrical noise, or bird song.
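    The arithmetic here is just the Nyquist limit: a sampler can capture frequencies up to half its sampling rate. A quick illustrative sketch (the function name is mine):

```python
# Nyquist limit: the highest frequency a sampling rate can capture is fs / 2.
def nyquist_hz(sample_rate_hz: float) -> float:
    return sample_rate_hz / 2

for fs in (44_100, 48_000, 96_000):
    print(f"{fs} Hz sampling -> up to {nyquist_hz(fs) / 1000:g} kHz")
# 96,000 Hz sampling captures up to 48 kHz, well beyond human hearing.
```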



  • hperrin@lemmy.ca to memes@lemmy.world · "Why doesn't it feel as classy?" (+3, edited 3 days ago)

    Analog audio not being sampled doesn’t really matter. It’s like film: it can’t have infinite “resolution”. The size of the granules on the tape and the speed the tape moves determine how good the audio can sound. Grain size is roughly equivalent to bit depth (amplitude resolution), and tape speed is roughly equivalent to sampling rate. To get audio reproduction as true to life as 32-bit 96 kHz PCM, you’d need wildly expensive tape and equipment. I’m not even sure it’s physically possible.

    When you say it includes “more data” by definition, think about what that data is. There’s signal, the stuff you want to record, and there’s noise, the stuff that gets in that you didn’t want. The higher the precision of a digital recording, the higher its signal-to-noise ratio, and unlike analog tape, there’s no real practical ceiling beyond the limits of your recording hardware. Record with high enough precision and you can capture incredibly quiet or incredibly loud sounds, far beyond the range of the best audio tape. Same with frequencies: the faster your sampling rate, the higher the frequencies you can record, and unlike tape, the medium won’t shred itself to pieces if you push it really high.
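    The precision-to-SNR relationship has a standard closed form for PCM: the theoretical quantization SNR of an N-bit recording of a full-scale sine is about 6.02·N + 1.76 dB. A small sketch (function name is mine):

```python
import math

# Theoretical quantization SNR for an N-bit PCM recording of a
# full-scale sine wave: SNR ~= 6.02 * N + 1.76 dB.
def pcm_snr_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits) + 1.76  # 20*log10(2) ~= 6.02 dB per bit

print(f"16-bit: {pcm_snr_db(16):.1f} dB")  # CD audio lands near 98 dB
print(f"24-bit: {pcm_snr_db(24):.1f} dB")
```

    This is where the roughly 98 dB figure for CD audio comes from; each extra bit buys about 6 dB of signal-to-noise.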

    Things sound “better” when you introduce noise because people like analog recordings. Not actual analog recordings, mind you, just the appearance of analog recordings. It has nothing to do with audio quality, it’s just vibes. It gives good vibes.


  • hperrin@lemmy.ca to memes@lemmy.world · "Why doesn't it feel as classy?" (+8, edited 2 days ago)

    The debate is basically bogus. Very few analog audio formats can reproduce a signal more accurately than a CD, and even then only because CDs use a 44.1 kHz sampling rate and 16-bit encoding. No analog audio format can rival a 32-bit 96 kHz PCM recording, and that isn’t even the best digital recording available. CD audio settled on 44.1 kHz and 16-bit because that’s nearly perfect for the range and sensitivity of human hearing. You’d only need something better to record ultrasound or extremely low-amplitude sound.

    Fun fact: if you add some hiss and pops and a little compression to CD audio before playing it, some people (me included) will say it sounds better.
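    Simulating that degradation digitally is straightforward: mix low-level noise into every sample plus occasional spikes. A purely illustrative sketch (function name and parameters are made up, not any real plugin's API):

```python
import random

# Degrade clean PCM samples (floats in [-1, 1]) with tape-style hiss and
# vinyl-style pops. Illustrative only; the levels are arbitrary.
def add_vintage_noise(samples, hiss_level=0.005, pop_chance=0.0005, seed=0):
    rng = random.Random(seed)  # seeded so the result is reproducible
    out = []
    for s in samples:
        s += rng.uniform(-hiss_level, hiss_level)      # constant hiss floor
        if rng.random() < pop_chance:                  # rare loud pop
            s += rng.choice((-1, 1)) * rng.uniform(0.2, 0.6)
        out.append(max(-1.0, min(1.0, s)))             # clip back into range
    return out

noisy = add_vintage_noise([0.0] * 1000)
```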


  • hperrin@lemmy.ca to memes@lemmy.world · "Why doesn't it feel as classy?" (+7, edited 3 days ago)

    A VHS physically can’t outperform CD audio; the tape would have to move faster than VHS equipment is designed for. Hi-Fi VHS audio can come close to CD’s frequency range, but its signal-to-noise ratio is still only about 70 dB (versus CD’s 98 dB), and there’s always loss when writing to and reading from analog tape. A CD isn’t read destructively, so any signal up to 22 kHz is reproduced exactly the same way every time.

    Hi-Fi VHS audio is nearly as good as CD audio (it’s the best consumer analog audio format, in fact), but it’s not as good. The simple fact is that a comparably specified digital PCM recording will always beat an analog recording. You can read about the Nyquist–Shannon sampling theorem for an actual proof, but in short, CD audio is near-perfect for almost every human’s hearing range (most people can’t hear above 20 kHz).
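    A rough sketch of what the theorem guarantees: a tone below the Nyquist limit survives sampling intact. Here a 997 Hz sine is sampled at CD rate and its frequency recovered from zero crossings (the setup and constants are mine, just for illustration):

```python
import math

FS = 44_100   # CD sampling rate, Hz
FREQ = 997    # test tone, safely below the 22,050 Hz Nyquist limit

# One second of the sampled sine wave.
samples = [math.sin(2 * math.pi * FREQ * n / FS) for n in range(FS)]

# A sine crosses zero twice per cycle, so crossings / 2 estimates its
# frequency; a sign change between neighbors marks each crossing.
crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
estimate = crossings / 2
print(f"recovered ~{estimate:.0f} Hz from a {FREQ} Hz tone")
```

    The estimate lands within a hertz of the original tone; nothing below half the sampling rate is lost.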


  • Feel free to try. Here’s the library I use: https://nymph.io/

    It’s open source, and all the docs and code are available at that link and on GitHub. I always ask it to make a note entity, which is just incredibly simple. Basically the same thing as the ToDo example.

    The reason I use this library (besides that I wrote it, so I know it really well) is that it isn’t widely known and there aren’t many example projects on GitHub, so the LLM has to actually read and understand the docs and code to use it properly. For something like React, there are a million examples online, so for basic things the LLM isn’t really understanding anything; it’s just producing something similar to its training data. That’s not how real high-level programming works, so making it follow an API it isn’t already trained on is a good way to test whether it’s anywhere near the abilities of an actual entry-level SWE.

    I just tested it again and it made 9 mistakes. I had to explain each mistake and what it should be before it finally gave me code that would work. It’s not good code, but it would at least work. It would make a mistake, I would tell it how to fix it, then it would make a new mistake. And keep in mind, this was for a very simple entity definition.


  • I played around with it a lot yesterday, giving it documentation and asking it to write some code based on the API documentation. Just like every single other LLM I’ve ever tried, it just bungled the entire thing. It made up a bunch of functions and syntax that just doesn’t exist. After I told it the code was wrong and gave it the right way to do it, it told me that I got it wrong and converted it back to the incorrect syntax. LLMs are interesting toys, but shouldn’t be used for real work.



    It’s integrated graphics, so it can use up to half of system RAM. I have 96 GB of system RAM, so 48 GB of VRAM. I bought it last year, before the insane price hikes, when it was within reach for normal people like me.

    I’ve tried it and it works. I can load huge models, even bigger than 48 GB, though those run really slowly, like one token per second. The ones that fit in 48 GB are pretty decent: around 6 tokens per second for the big models, if I’m remembering correctly. Something like an 8B-parameter model would obviously be way faster, but I don’t really have a use for models that small.
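    The back-of-envelope rule for whether a model fits: weight memory is roughly parameters times bytes per parameter, ignoring KV cache and runtime overhead (so real usage runs higher). An illustrative sketch, with my own function name and example sizes:

```python
# Rough weight-memory estimate: parameters * bytes per parameter.
# Ignores KV cache and runtime overhead, so real usage is higher.
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * 1e9 * (bits_per_param / 8) / 1e9

VRAM_GB = 48  # half of 96 GB system RAM available to the integrated GPU
for params, bits in ((70, 4), (70, 16), (8, 8)):
    gb = weights_gb(params, bits)
    fits = "fits" if gb < VRAM_GB else "does not fit"
    print(f"{params}B @ {bits}-bit: ~{gb:.0f} GB -> {fits} in {VRAM_GB} GB")
```

    A 70B model quantized to 4 bits (~35 GB) fits with room to spare, while the same model at 16-bit (~140 GB) would have to spill into slower memory, which matches the one-token-per-second behavior above.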



  • I mean, I get that, but why is Proton offering one? What value do I get from Proton’s LLM that I wouldn’t get from any other company’s LLM? It’s not privacy, because it’s not end to end encrypted. It’s not features, because it’s just a fine tuned version of the free Mistral model (from what I can tell). It’s not integration (thank goodness), because they don’t have access to your data to integrate it with (according to their privacy policy).

    I kind of just hate the idea that every tech company is offering an LLM service now. Proton is an email and VPN company. Those things make sense. The calendar and drive stuff too. They have actual selling points that differentiate them from other offerings. But investing engineering time and talent into yet another LLM, especially one that’s worse than the competition, just seems like a waste to me. And especially since it’s not something that fits into their other product offerings.

    It truly seems like they just wanted to have something AI related so they wouldn’t be “left behind” in case the hype wasn’t a bubble. I don’t like it when companies do that. It makes me think they don’t really have a clear direction.

    Edit: it looks like they use several models, not just one:

    Lumo is powered by open-source large language models (LLMs) which have been optimized by Proton to give you the best answer based on the model most capable of dealing with your request. The models we’re using currently are Nemo, OpenHands 32B, OLMO 2 32B, GPT-OSS 120B, Qwen, Ernie 4.5 VL 28B, Apertus, and Kimi K2.

    - https://proton.me/support/lumo-privacy

    I have a laptop with 48GB of VRAM (a Framework with integrated Radeon graphics) that can run all of those models locally, so Proton offers even less value for someone in my position.




  • I meant I run it.

    But to answer your question, it uses subaddressing really well. When you give your email to a company, you add a label to the address just for that company, then all of their emails go in that label. You can easily toggle things like notifications, mark as read, and show in aggbox (our version of the inbox, since there isn’t really an inbox when everything is sorted already). Then if that company leaks your email, you can block that label.

    You can also set up screening labels that are meant for real people, then any new senders get screened to make sure they’re human before you get their mail.
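    Plus-style subaddressing (the common convention, though the actual separator varies by service) splits the local part of the address at the first "+". A minimal parsing sketch, with a function name of my own:

```python
# Parse a subaddressed email like "user+acme@example.com" into the base
# mailbox, the per-company label used for filtering, and the domain.
# "+" is the common separator; some services use a different character.
def split_subaddress(address: str):
    local, _, domain = address.partition("@")
    base, _, label = local.partition("+")
    return base, (label or None), domain

assert split_subaddress("me+shop@example.com") == ("me", "shop", "example.com")
assert split_subaddress("me@example.com") == ("me", None, "example.com")
```

    Everything after the "+" is ignored for delivery but available for sorting, which is what makes per-company labels and leak blocking possible.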