• FosterMolasses@leminal.space
    6 hours ago

    “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.”

    See, I never understood this. Mine could never even follow simple instructions lol

    Like I say “Give me a list of types of X, but exclude Y”

    "Understood!

    #1 - Y

    (I know you said to exclude this one but it’s a popular option among-)"

    lmfaoooo

    • Phoenixz@lemmy.ca
      3 hours ago

      I’ve experimented with chatbots to see how well they can develop small bits and pieces of code, and every friggin time, the first thing I have to say is “shut up, keep to yourself, I want short, to-the-point replies” because the complimenting is so “who’s a good boy!!!” annoying.

      People don’t talk like these chatbots do, and the training data that was stolen from humanity definitely doesn’t contain that. That “behavior” is included by the providers to try and make sure that people get as hooked as possible.

      Gotta make back those billions in investment on a dead-end technology somehow

    • very_well_lost@lemmy.world
      4 hours ago

      That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:

      - You have a conversation with a model.

      - Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.

      - You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
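      The flow above can be sketched in a few lines of Python. This is a toy illustration, not any vendor’s actual implementation: real systems use embedding similarity search over a vector database, while here a naive keyword-overlap score stands in for retrieval, and every name is made up.

```python
# Toy sketch of the "illusion of memory" pattern: summarize, store,
# then retrieve snippets to prepend to a new prompt. All names are
# hypothetical; keyword overlap stands in for embedding search.

def summarize(conversation: str, max_words: int = 12) -> str:
    """Stand-in for the LLM summarization step (detail gets lost here)."""
    return " ".join(conversation.split()[:max_words])

class ConversationStore:
    def __init__(self) -> None:
        self.summaries: list[str] = []

    def save(self, conversation: str) -> None:
        self.summaries.append(summarize(conversation))

    def retrieve(self, prompt: str, top_k: int = 2) -> list[str]:
        """Rank stored summaries by naive keyword overlap with the prompt."""
        prompt_words = set(prompt.lower().split())
        ranked = sorted(
            self.summaries,
            key=lambda s: len(prompt_words & set(s.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]

def build_prompt(store: ConversationStore, user_prompt: str) -> str:
    """Prepend retrieved snippets so the stateless model *appears* to remember."""
    context = "\n".join(store.retrieve(user_prompt))
    return f"Relevant past conversations:\n{context}\n\nUser: {user_prompt}"

store = ConversationStore()
store.save("User asked about growing tomatoes in containers on a balcony")
store.save("User wanted help debugging a Python script for CSV parsing")
print(build_prompt(store, "my tomatoes have yellow leaves"))
```

      The model itself stays frozen between conversations; only the prompt it receives changes, which is why details dropped during summarization are gone for good.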