• FEIN@lemmy.world
    2 hours ago

    That would be preferable. If ML optimization is open-sourced and progresses significantly, that would be good for the little guy.

  • TrippinMallard@lemmy.ml
    2 hours ago

    OpenAI/Anthropic is incentivized to prevent this.

    They are also big enough, and unregulated enough, that they could use their power and industry relationships to drive up the price of local AI ownership (RAM, GPUs, etc.).

    • orc girly@lemmy.ml
      2 hours ago

      I’m not sure how much they can actually prevent us from just running FOSS Chinese alternatives locally, though.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
        2 hours ago

        Exactly, and a lot of big companies in the US are already heavily reliant on Chinese models. For example, Airbnb uses Qwen because they can self-host and customize it, Cursor built their latest Composer model on top of Kimi, and so on. There are far more companies using these tools than making them, so while open models hurt companies that want to sell them as a service, they’re lowering costs for everyone else.

      • TrippinMallard@lemmy.ml
        2 hours ago

        Not for everyone. They’re aiming to drive up hardware ownership costs so that fewer people can afford local AI.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
      4 hours ago

      Do elaborate. The tech industry has gone through many cycles of shifting from mainframe to personal computer over the years. As new tech appears, it initially requires a huge amount of computing power to run. But over time people figure out how to optimize it, hardware matures, and it becomes possible to run this stuff locally. I don’t see why this tech should be any different.