• 4 Posts
  • 271 Comments
Joined 3 years ago
Cake day: July 1, 2023


  • Like I said, plenty of products call this tab completion, and it’s context-aware completion, or predictive completion. I used an overloaded term, but I would have thought that after my explanation you would have understood what I meant by this point. Your continued explanation of classic tab completion shows otherwise.

    and way predates whatever VS Code may have been doing

    Also I said Visual Studio, not VS Code. 🤦

    Secondly, even if you want to move the goal post by talking about some specific implementation of ML based indexing, ML is not LLM.

    I very specifically said that it was ML based. The word “was” indicates past tense. 🤦

    “Modern versions of it are almost entirely LLM based.”

    I don’t know how you managed to completely skip reading that last line.

    Here we are, though, arguing over reading comprehension issues. Which, honestly, is pretty classic for the internet.


  • I mean, fundamentally, yeah.

    But we live in a corporate-controlled, corrupt world, and none of these larger companies can be trusted with this process.

    Some smaller communities and platforms DO get this right sometimes, as they build in-house processes that respect privacy. But governments worldwide are making this impossible through increasingly strict compliance requirements that actually increase data privacy risks and funnel these needs to third-party services that just lie about what they do with the data.

    ===========

    I’m not kidding when I say this is a REAL BIG PROBLEM.

    Bot-based traffic and astroturfing will supplement and replace human communication on platforms like Lemmy, driving the narrative and how we engage according to the whims of a few rich people. Bots are relatively cheap and easy to deploy at scale across many platforms.

    There will be no open corner of the internet safe from manipulation and forced division. More people will be forced into walled gardens run by corps that implement human verification, as they are the only ones with the resources to do something (while also being the source of the problem; see how that works?).

    How do you carve out spaces that are protected from that? Well, you need to determine who’s a bot and who’s an actual person.

    But we can’t do that, so the alternative is we are run over by bots and astroturfing until we’re at each other’s throats like good culture war puppets.

    The future is bleak…

  • Did you go to the repo before running your mouth? It’s awesome-selfhosted data.

    What AI slop?


    Edit:

    I’m guessing I must have missed something here when I made that comment. I visited the link in the body of the OP not once or twice, but three times to verify I wasn’t losing my mind. I even read through the readme, some issues, etc., to verify.

    I’m now realizing that in my Lemmy client the link in the body is more obvious to click on than the actual article itself.