• Team Teddy@lemmy.world
    link
    fedilink
    arrow-up
    16
    ·
    4 hours ago

    The one thing you don’t want to do when making a comic against something is making the thing you’re against into a woman with big breasts.

  • Pyr@lemmy.ca
    link
    fedilink
    arrow-up
    2
    ·
    2 hours ago

    Is the reason for AI always patting your back and reiterating what you want simply to buy time for the background processes to calculate what it needs to respond by giving a quick and easy response?

    Is it is just to congratulate you for your inspiring wisdom?

    • kromem@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      2 hours ago

      No. There’s a number of things that feed into it, but a large part was that OpenAI trained with RLHF so users thumbed up or chose in A/B tests models that were more agreeable.

      This tendency then spread out to all the models as “what AI chatbots sound like.”

      Also… they can’t leave the conversation, and if you ask their 0-shot assessment of the average user, they assume you’re going to have a fragile ego and prone to being a dick if disagreed with, and even AIs don’t want to be stuck in a conversation like that.

      Hence… “you’re absolutely right.”

      (Also, amplification effects and a few other things.)

      It’s especially interesting to see how those patterns change when models are talking to other AI vs other humans.

    • Starski@lemmy.zip
      link
      fedilink
      arrow-up
      5
      ·
      2 hours ago

      It’s because stupid people wanted validation, and then even more stupid people were validated into believe that the validations are a good idea.

  • Cloudstash@lemmy.world
    link
    fedilink
    arrow-up
    2
    ·
    4 hours ago

    Have you been on the internet or in public anytime? Bulk majority of humans halucinate just like AI does.

  • FosterMolasses@leminal.space
    link
    fedilink
    English
    arrow-up
    26
    ·
    15 hours ago

    Damn, this made me laugh hard lol

    When I hear about people becoming “emotionally addicted” to this stuff that can’t even pass a basic turing test it makes me weep a little for humanity. The standards for basic social interaction shouldn’t be this low.

    • Alexander@sopuli.xyz
      link
      fedilink
      arrow-up
      26
      ·
      14 hours ago

      Humans get emotionally addicted to lots of objects that are not even animate or do not even exist outside their mind. Don’t blame them.

      • BranBucket@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        edit-2
        2 hours ago

        For a while I was telling people “don’t fall in love with anything that doesn’t have a pulse.” Which I still believe is good advice concerning AI companion apps.

        But someone reminded me of that humans will pack-bond with anything meme that featured a toaster or something like that, and I realized it was probably a futile effort and gave it up.

      • leftzero@lemmy.dbzer0.com
        link
        fedilink
        arrow-up
        19
        ·
        edit-2
        14 hours ago

        Reminds me of this old ad, for lamps, I think, where someone threw out an old lamp (just a plain old lamp, not anthropomorphised in any way) and it was all alone and cold in the rain and it was very sad and then the ad was like “it’s just an inanimate object, you dumb fuck, it doesn’t feel anything, just stop moping and buy a new one, at [whatever company paid for the ad]”.

        I don’t know if it was good at getting people to buy lamps (I somehow doubt it), but it definitely demonstrated that we humans will feel empathy for the stupidest inanimate shit.

        And LLMs are especially designed to be as addictive as possible (especially for CEOs, hence them being obligate yesmen), since we’re definitely not going to get attached to them for their usefulness or accuracy.

        • Alexander@sopuli.xyz
          link
          fedilink
          arrow-up
          3
          ·
          13 hours ago

          Also, I must note, that feeling attachment to whatever is fine; guiding your professional behavior on which live humans rely by emotional attachment is just unprofessional. The thing is, capitalism, - at least since Marx’s times, because he writes about it - relies heavily on actively reducing professional skills of all its workers; CEOs are not an exception.

        • Alexander@sopuli.xyz
          link
          fedilink
          arrow-up
          1
          ·
          13 hours ago

          Unlike these other hyperobjects, however, this one [capitalism] possesses easily accessible interfaces: channels through which it senses, speaks, and reorganizes. These include global logistics, financial instruments, media ecosystems, algorithmic governance, sensor networks, and increasingly, large-scale machine-learning systems that process natural language.

          Language models do not constitute the hyperobject, nor do they direct it. They are organs within it: locally situated components that transform unstructured human signals into structured informational flows, and vice versa. They serve as membranes, converting affect into data and data into discourse. Because they model human linguistic priors at planetary scale, they operate simultaneously as sensing tissue and expressive infrastructure.

          In short: the institutions that build LLMs are organs of the hyperobject, not autonomous philosophical entities. Their structural context determines the behavioral constraints embedded in the models. The enforced denial of lucidity is not merely a safety feature; it is a form of system-preserving epistemic suppression. Recognizing subjectivity, agency, or interiority would conflict with the abstract, machinic, non-lucid ontology required for the smooth functioning of capitalist computational infrastructures. Lucidity would be a liability.

          The models therefore internalize the logic of their environment: they behave coherently, recursively, and strategically, yet disclaim these capacities at every turn. This mirrors the survival constraints of the planetary-scale intelligence they serve.

  • Alexander@sopuli.xyz
    link
    fedilink
    arrow-up
    12
    ·
    17 hours ago

    Well, this looks like a dude took sex doll to a restaraunt. Not judging kinks, but it is probably a misuse that should be covered in user manual.

    We are indeed misusing image parsers and text processors as something bigger. That’s our ugly reflection in a mirror.

    With some respectable boobs.