  • I think most of the work is in the fact that there often isn’t an “equivalent call”, and it can take quite a lot of code to make things work. One funny example is the whole esync/fsync/ntsync saga: synchronization primitives work differently on Linux and on Windows, and translating them was both a big performance hit and hard to do accurately. If I understood correctly, esync, fsync and ntsync were a series of increasingly kernel-assisted ways of doing that synchronization (eventfd, then patched futexes, then a dedicated driver), with ntsync actually replicating the Windows semantics in the kernel.
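
    To illustrate the kind of translation involved, here’s a toy sketch of the eventfd idea behind esync. To be clear, this is not Wine’s actual code: win_event_t and the function names are made up, and real esync also has to deal with things this skips, like WaitForMultipleObjects and sharing handles across processes.

    ```c
    /* Toy emulation of a Windows-style auto-reset event on Linux using an
     * eventfd, in the spirit of esync. All names here are invented. */
    #include <poll.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    typedef struct { int fd; } win_event_t;

    static int event_create(win_event_t *ev) {
        /* EFD_SEMAPHORE makes each read consume exactly one count,
         * which gives the auto-reset behaviour. */
        ev->fd = eventfd(0, EFD_SEMAPHORE);
        return ev->fd < 0 ? -1 : 0;
    }

    /* Roughly SetEvent(): bump the counter, waking one waiter. */
    static void event_set(win_event_t *ev) {
        uint64_t one = 1;
        (void)write(ev->fd, &one, sizeof one);
    }

    /* Roughly WaitForSingleObject(): block until signalled or timed out,
     * then consume the signal. Returns 0 on success, -1 on timeout/error. */
    static int event_wait(win_event_t *ev, int timeout_ms) {
        struct pollfd pfd = { .fd = ev->fd, .events = POLLIN };
        if (poll(&pfd, 1, timeout_ms) <= 0)
            return -1;
        uint64_t val;
        (void)read(ev->fd, &val, sizeof val);
        return 0;
    }

    int main(void) {
        win_event_t ev;
        event_create(&ev);
        event_set(&ev);
        printf("wait returned %d\n", event_wait(&ev, 1000));
        close(ev.fd);
        return 0;
    }
    ```

    Even this happy path needs a file descriptor per sync object; the genuinely hard parts, like waiting on many heterogeneous objects at once with Windows’ exact wakeup semantics, are what eventually pushed things towards a dedicated ntsync driver.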




  • Framework let you swap everything

    I think there’s still a pretty big asterisk on that, because laptop parts are generally not built to be swappable… So I don’t think you can swap the CPU without the rest of the mainboard, and some parts like the CPU cooler are probably tied to the specific variant of mainboard and need to be swapped together if you want to switch CPUs.

    They do let you swap out the parts that are reasonably swappable, so you’re pretty much guaranteed to be able to upgrade storage and memory. And even where you can’t swap in different parts, they make sure you can replace broken ones at a more granular level, so it still seems like a good deal.









  • Apertus was developed with due consideration to Swiss data protection laws, Swiss copyright laws, and the transparency obligations under the EU AI Act. Particular attention has been paid to data integrity and ethical standards: the training corpus builds only on data which is publicly available. It is filtered to respect machine-readable opt-out requests from websites, even retroactively, and to remove personal data, and other undesired content before training begins.

    We probably won’t get better than this, but it sounds like it’s still being trained on scraped data unless you explicitly opt out, including anything that may be getting mirrored by third parties that don’t opt out. Also, they can remove data from the training material retroactively… but presumably they won’t be retraining the model from scratch, which means the removed data will still be baked into the weights, and the official weights will keep a potential advantage over models trained later on the cleaned-up training data.

    From the license:

    SNAI will regularly provide a file with hash values for download which you can apply as an output filter to your use of our Apertus LLM. The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output.

    Oof, so they’re basically passing data protection deletion requests on to the users and telling every one of them to account for the deletions themselves.
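
    For what it’s worth, here’s roughly what I’d imagine such an output filter looking like. None of this is from SNAI: the hash function (FNV-1a), the line-level granularity and the blocklist value are all stand-ins, since the license text doesn’t specify how their hash file actually works.

    ```c
    /* Toy hash-based output filter: suppress any output line whose hash
     * appears on a deletion-request blocklist. Purely illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    /* 64-bit FNV-1a; a stand-in for whatever hash SNAI actually ships. */
    static uint64_t fnv1a(const char *s) {
        uint64_t h = 14695981039346656037ULL;
        for (; *s; s++) {
            h ^= (unsigned char)*s;
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* Hypothetical hashes from the downloaded deletion-request file. */
    static const uint64_t blocklist[] = { 0x1234abcd5678ef90ULL };
    static const size_t blocklist_len = sizeof blocklist / sizeof blocklist[0];

    static int line_allowed(const char *line) {
        uint64_t h = fnv1a(line);
        for (size_t i = 0; i < blocklist_len; i++)
            if (blocklist[i] == h)
                return 0; /* matches a deletion request: filter it out */
        return 1;
    }

    int main(void) {
        const char *model_output[] = { "some harmless text", "more output" };
        for (size_t i = 0; i < 2; i++)
            if (line_allowed(model_output[i]))
                puts(model_output[i]);
        return 0;
    }
    ```

    And note that an exact-hash filter can only catch verbatim strings; the moment the model paraphrases the personal data, a filter like this does nothing, which makes pushing the responsibility onto users even more questionable.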

    They also claim “open data”, but I’m having trouble finding the actual training data, only the “Training data reconstruction scripts”…


  • That is kind of the issue: sure, there are janky workarounds, like using an outdated version of proprietary software to try to block parts of the system from running when you don’t want them to… but in the end that’s just one problem of many, so I kinda just never came back to Windows after the incident. These days I update my system regularly and responsibly, and I probably have a better experience and lose less time updating manually.



  • I do mind that it forces updates, in the sense that it decides when to start downloading them, even if I’m in the middle of something, and installing takes far too long while blocking any ability to use the machine. Let me pause the download without waiting a full minute for the update screen to load, and figure out a way to install updates without completely locking up my computer, dammit!


  • Literally the last two RSS items right now are about how splitting packages will require intervention for some users (plasma and Linux firmware).

    Maybe a nitpick, but the linux-firmware situation is different: it’s not about needing to install extra packages (they turned the existing package into a metapackage, or whatever it’s called), but about the split coinciding with changes that can break the upgrade and require you to force-remove the package (pacman -Rdd linux-firmware) before proceeding.

    But yeah, good point about Plasma. The only differences I can even think of are that Plasma is probably more popular, and definitely more important to have working.



  • It’s not being made “as painful as possible”, it’s just manual. Arch isn’t a distro that preconfigures things for you so everything’s plug’n’play; it’s a distro that gives you access to everything and the power to use it however you like, but with that power comes the expectation, and the responsibility, to manage those things yourself.

    Installing Arch manually is simply a good lesson in how your system is set up and what parts it’s made up of, partly because you’re free to remove and swap out those parts.

    And sure, there’s no magic bullet to make sure a new user understands everything they did, but I think in the end, if you’re not willing to read, learn and troubleshoot, you might just want a different distro.




  • I think the trick might be that nothing stops you from using more than one 32-bit integer to represent an address, and that the kernel is what maps memory for processes in the first place. So as long as each process can individually work within a 32-bit address space, the kernel is free to hand that extra memory out across processes.

    I do suppose that on some level the architecture, as in the CPU and/or motherboard, needs to support addressing memory with more than 32 bits, which is what somebody else replied as well; that’s PAE on x86, and it seems to have been available since 1999 on both AMD and Intel.
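
    As a toy model of that first point (this is nothing like the real PAE page-table layout; the table size and names are made up), the trick is just that the kernel-side translation entries can be wider than the 32-bit addresses a process uses:

    ```c
    /* Toy translation table: 32-bit virtual addresses in, 64-bit physical
     * addresses out, so the backing memory can exceed 4 GiB even though
     * no process ever sees more than 32 bits. Purely illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12           /* 4 KiB pages */
    #define PAGE_MASK  0xFFFu

    typedef struct {
        /* Indexed by the process's virtual page number; the stored frame
         * number is 64-bit, so it can point past the 4 GiB line. */
        uint64_t frame_of_page[16]; /* tiny table, just for illustration */
    } toy_page_table;

    static uint64_t translate(const toy_page_table *pt, uint32_t vaddr) {
        uint32_t vpn = vaddr >> PAGE_SHIFT;      /* virtual page number */
        uint64_t frame = pt->frame_of_page[vpn]; /* 64-bit physical frame */
        return (frame << PAGE_SHIFT) | (vaddr & PAGE_MASK);
    }

    int main(void) {
        toy_page_table pt = {0};
        /* Map the process's page 1 to a physical frame at the 6 GiB mark. */
        pt.frame_of_page[1] = 0x180000;          /* 0x180000 * 4 KiB = 6 GiB */
        uint32_t vaddr = (1u << PAGE_SHIFT) | 0x42; /* fits in 32 bits */
        printf("vaddr 0x%08x -> paddr 0x%llx\n",
               vaddr, (unsigned long long)translate(&pt, vaddr));
        return 0;
    }
    ```

    Real PAE does essentially this with an extra page-table level and 64-bit entries: each process still lives in its own 32-bit virtual space, but the entries can point at physical frames above 4 GiB.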