Say no to ableism.

  • ExistentialNightmare@lemmygrad.ml · 5 points · 6 hours ago

    For what it’s worth, you are in the right here. AI is not inherently bad; it is capitalism that drives its use for exploitation.

    Under socialism, it would be utilised for the benefit of society, and if AI can exist, it will inevitably exist, so anyone who talks of eradicating AI is promoting silly utopian ideas. I’d prefer for it to be strictly limited to minimising annoying, monotonous labour and service industries, including, of course, assisting the disabled.

    However, I will say you’ll have an easier time convincing others of this if you argue your point without insults, as they are less likely to listen if you do. I’m actually curious, if you don’t mind disclosing: what do you use AI for in regards to helping you with your disability?

    • MeetMeAtTheMovies [they/them]@hexbear.net · 3 points · 5 hours ago

      if AI can exist, it will inevitably exist

      Unrelated to the question of ableism, this is precisely the logic pushed by tech companies in general: that their decisions were inevitable and therefore there is no point in questioning them.

      Look at how modern LLMs work. They’re trained in large data centers owned by private companies, using giant corpora of data that were largely obtained without the permission or knowledge of the people who created them. Then, to use them, the weights are loaded into an amount of memory that’s out of reach for most consumer desktops, and users must call into the LLM using an API. Working memory of a conversation doesn’t persist between messages or tool calls, so the entire history must be loaded into the model’s context window on every call. In other words, all the “learning” for these models must take place up front in training; outside of what fits in the context, the model doesn’t actually adjust to learn new things about the world. There are workarounds for this, of course, to simulate the experience of interacting with something that can learn, but they have their limitations and aren’t reliable yet. I could go on. Running probabilistic processes on deterministic hardware is another area where we may see more work soon.
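      The stateless request pattern described above can be sketched in a few lines. This is a minimal illustration, not any particular provider’s API: `call_llm` is a hypothetical stub standing in for the real network call, but the shape is the same — the client keeps the history and resends all of it on every turn, because the model itself retains nothing between calls.

```python
# Sketch of a stateless chat loop. `call_llm` is a hypothetical stand-in
# for a provider API; the key point is that the model only ever "knows"
# what is inside the `messages` list passed to this one call.

def call_llm(messages: list[dict]) -> str:
    """Stub: a real implementation would POST `messages` to an API."""
    return f"(reply based on {len(messages)} messages of context)"

history: list[dict] = []

def send(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_llm(history)  # the ENTIRE history is resent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

send("Hello")
send("What did I just say?")  # only answerable because turn 1 is resent
```

      Note that nothing persists inside `call_llm` between the two calls; dropping the first messages from `history` would make the model “forget” them entirely, which is why long conversations hit context-window limits.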

      Every single step of that description had alternatives that would be more likely to be chosen outside of a capitalist system. They could be more eco-friendly. They could be more efficient. They could be more powerful and learn from your interactions in a way that persists. A lot of these changes would have delayed the exposure of LLMs to the general public and kept them in academia longer, but that would have been okay, because we wouldn’t have the profit motive at the center of this, inflating a giant bubble that’s poised to pop and flatten the economy. The bottom line is that this stuff was pushed out and hyped up well before it was ready, and well before it could be scaled up ethically and with the working class in mind. None of this was inevitable.

      • ExistentialNightmare@lemmygrad.ml · 3 points · 3 hours ago

        That was a great read, genuinely a hell of a comment, but you misunderstand mine. I did not mean inevitable in that way; I just meant that the general existence of it as a technology was inevitable. We agree, as far as I can tell.