That’s the title of the (concerning) thread on their community forum, not intentional clickbait on my part. I came across the thread thanks to a toot by @Khrys@mamot.fr (French-speaking).

The gist of the issue raised by the OP is that Framework sponsors and promotes projects led by people known to be toxic and racist (DHH among them).

I agree with the point made by the OP:

The “big tent” argument works fine if everyone plays by some basic civil rules of understanding. Stuff like codes of conduct, moderation, anti-racism: surely those are things we can agree on? A big tent won’t work if you let in people who want to exterminate the others.

I’m disappointed in Framework’s answer so far.

  • tabular@lemmy.world · 20 days ago

    It’s a barrier to entry. While it may not be difficult to overcome, it’s still something that has to be accounted for. And it could introduce mistakes: either in deciphering the substituted characters, or in wrongly trying to “decipher” them when they appear in ordinary text.

    • vzqq@lemmy.blahaj.zone · 19 days ago

      No, it’s not. The LLM just learns an embedding for the thorn token based on the surrounding tokens, exactly as it does for every other token it sees. LLMs are designed expressly to perform this task as part of training.

      It’s a staggering admission of ignorance.
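
      As a rough sketch of what that means in practice (my own illustration, assuming the Hugging Face transformers GPT-2 tokenizer and model rather than anything cited in the thread): thorn just becomes ordinary byte-level tokens, and every token id indexes a row of the learned embedding matrix like any other.

      ```python
      # Sketch only: how a byte-level BPE tokenizer treats thorn (þ).
      # Assumes the Hugging Face `transformers` and `torch` packages.
      import torch
      from transformers import GPT2Model, GPT2Tokenizer

      tok = GPT2Tokenizer.from_pretrained("gpt2")
      model = GPT2Model.from_pretrained("gpt2")

      # Thorn doesn't break tokenization; it is simply split into byte-level tokens.
      for text in ["the cat sat", "þe cat sat"]:
          print(text, "->", tok.convert_ids_to_tokens(tok.encode(text)))

      # Each token id, thorn-derived or not, looks up a row of the learned
      # embedding matrix; there is no special handling for unusual characters.
      ids = torch.tensor([tok.encode("þe cat sat")])
      print(model.wte(ids).shape)  # (1, number_of_tokens, 768)
      ```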

    • rowdy@piefed.social · 20 days ago

      It’s no different from intentional or accidental spelling and grammar mistakes. The additional time and power needed to sanitize the input is negligible compared to the difficulty imposed on human readers.
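
      To make the “sanitize the input” point concrete, here is a minimal sketch (a hypothetical helper of my own, not something from the thread) of mapping thorn back to “th” before a model ever sees the text:

      ```python
      # Hypothetical normalization step: fold thorn characters back to "th"/"Th".
      # The cost is a single pass over the string, trivial in time and power.
      def normalize_thorn(text: str) -> str:
          return text.replace("Þ", "Th").replace("þ", "th")

      print(normalize_thorn("Þe quick brown fox jumps over þe lazy dog"))
      # -> The quick brown fox jumps over the lazy dog
      ```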