  • Sanctus@anarchist.nexus · 2 points · 4 hours ago

    What? People would rather have their balls licked by AI than have some neckbeard moderator change the entire language of their question and not answer shit? Fuck SO. That shit was so ass to interact with.

  • IndustryStandard@lemmy.world · 2 points · 4 hours ago

    Respect for StackOverflow not selling out to an AI training company despite being their biggest source of training data. But their moderation still sucks.

  • Echo Dot@feddit.uk · 27 points · 10 hours ago

    It’s not that developers are switching to AI tools; it’s that Stack Overflow is awful and has been for a long time. The AI tools are simply providing a better alternative, which really demonstrates how awful Stack Overflow is, because the AI tools are not that good.

    • Gsus4@mander.xyzOP · 4 points · edited · 10 hours ago

      Undoubtedly. But you agree that the crowdsourced knowledge base of existing answers is useful, no? That is what the islop searches and reproduces. It is more convenient than waiting for a rude answer. But I don’t think islop will give you a good answer if someone hasn’t bothered to answer it before on SO.

      islop is a convenience, but you should fear the day you lose the original, and the only way to get that info is some opaque islop oracle.

      • Ledivin@lemmy.world · 3 points · 7 hours ago

        Most answers on SO either come from a doc page, are common patterns found in multiple books, or are mostly opinion-based. Most code AIs are significantly better at the first two without even being trained on SO (which I wouldn’t want anyway - SO really does suck nowadays).

    • MoogleMaestro@lemmy.zip · 2 points · 5 hours ago

      Will the AI still flame me if I ask the wrong question?

      Is nothing sacred anymore?

      Real talk though, it is concerning when it feels like 3/5 times you ask AI something, you’ll get a completely harebrained answer back. SO will probably need to clamp down on non-logged-in browsing and enforce API limits to make sure that AI trainers are paying for the data they need.

      • jaykrown@lemmy.world · 1 point · 5 hours ago

        Depends on the model; I think Opus 4.5 is the only model I’ve prompted that’s getting close to not just being a boring sycophant.

  • eronth@lemmy.dbzer0.com · 48 points · 1 day ago

    Honestly just funny to see. It makes perfect sense, based on how they made the site hostile to users.

    • ByteOnBikes@discuss.online · 15 points · edited · 10 hours ago

      I was contributing to SO in 2014-2017 when my job wanted our engineers to be more “visible” online.

      I was in the top 3% and it made me realize how incredibly small the community was. I was probably answering like 5 questions a week. It wasn’t hard. For some perspective, I’m making like 4-5 posts on Lemmy A DAY.

      What made me really pissed was how often a new person would give a really good answer, then some top-1% chucklefuck would literally take that answer, rewrite it, and have it appear as the top answer. That happened to me constantly. But again, I didn’t care, since I was just doing this to show my company I was a “good lil engineer”.

      I stopped participating because of how they treated new users. And around 2020(?), SO made a pledge to be not so douchy and actually allow new users to ask questions. But that 1% chucklefuck crew was still allowed to wave their dicks around and stomp on people’s answers. So yeah, less “Duplicate questions”, more “This has been answered already [link to their own answer that they stole]”.

      So they removed the toxic attitude with asking questions, but not the toxicity when answering. SO still had the most sweaty people control responses, including editing/deleting them. And you can’t grow a community like that.

  • Wispy2891@lemmy.world · 20 points · 1 day ago

    Even before the LLMs, it was my last resort before I would post over there. The desperation move. It was too toxic, and I would always get pissed when my question got closed for being too similar, or too easy, or whatever. Hey, I wasted 15 minutes typing that; if the other question had solved the problem I wouldn’t have posted again…

    In the beginning it wasn’t like that…

    I went back to look at my Stack Overflow account, and almost all of the first questions I posted (which earned me 2000 karma) would now be rejected and removed.

  • nutsack@lemmy.dbzer0.com · 32 points · edited · 1 day ago

    I’ve posted questions, but I don’t usually need to because someone else has posted it before. this is probably the reason that AI is so good at answering these types of questions.

    the trouble now is that there’s less of a business incentive to have a platform like stack overflow where humans are sharing knowledge directly with one another, because the AI is just copying all the data and delivering it to the users somewhere else.

    • rumba@lemmy.zip · 23 points · 1 day ago

      Works well for now. Wait until there’s something new that it hasn’t been trained on. It needs that Stack Exchange data to train on.

      • nutsack@lemmy.dbzer0.com · 2 points · edited · 20 hours ago

        Yes, I think this will create a new problem. New things won’t be created very often, at least not by small shops or independent developers, because there will be this barrier to adoption. Corporate-controlled AI will need to learn them somehow.

      • cherrari@feddit.org · 2 points · 1 day ago

        I don’t think so. All AI needs now is formal specs of some technical subject, not even human readable docs, let alone translations to other languages. In some ways, this is really beautiful.

        • 123@programming.dev · 8 points · 23 hours ago

          Technical specs don’t capture the bugs, edge cases and workarounds needed for technical subjects like software.

          • cherrari@feddit.org · 1 point · 5 hours ago

            I can only speak for myself obviously, and my context here is some very recent and very extensive experience of applying AI to some new software developed internally in the org where I participate. So far, AI has eliminated any need for any kind of assistance with understanding it, and it was definitely not trained on this particular software, obviously. Hard to imagine why I’d ever go to SO to ask questions about this software, even if I could. And if it works so well on such a tiny edge case, I can’t imagine it will do a bad job on something used at scale.

        • SoftestSapphic@lemmy.world · 13 points · edited · 1 day ago

          Lol no, AI can’t do a single thing without humans who have already done it hundreds of thousands of times feeding it their data

          • okmko@lemmy.world · 4 points · 23 hours ago

            I used to push back but now I just ignore it when people think that these models have cognition because companies have pushed so hard to call it AI.

        • skisnow@lemmy.ca · 2 points · 20 hours ago

          The whole point of StackExchange is that it contained everything that isn’t in the docs.

        • rumba@lemmy.zip · 6 points · 1 day ago

          It can’t handle things it’s not trained on very well, or at least not anything substantially different from what it was trained on.

          It can usually apply rules it’s trained on to a small corpus of data from its training set (“give me a list of female YA authors”). But when you ask it for something more general (how many R’s there are in certain words), it often fails.

          • webadict@lemmy.world · 3 points · 1 day ago

            Actually, the Rs issue is funny because it WAS trained on that exact information, which is why it says strawberry has two Rs. So it’s actually more proof that it only knows what it has been given data on. The thing is, when people misspelled strawberry as “strawbery”, people would naturally respond, “Strawberry has two Rs.” The problem is that LLM training has no concept of context, because it isn’t learning anything. The reinforcement mechanism favors whatever the majority of its data tells it. It regurgitates that strawberry has two Rs because that has been reinforced by its dataset.

            • rumba@lemmy.zip · 2 points · 1 day ago

              Interesting story, but I’ve seen the same thing work with how many “ass”es are in “assassin”.

              You can probe the stuff it’s bad at, and a lot of it doesn’t line up well with the story that it comes from how people were corrected.

              • webadict@lemmy.world · 1 point · 23 hours ago

                But that’s exactly how an LLM is trained. It doesn’t know how words are spelled because words are turned into numbers and processed. But it does know when its dataset has multiple correlations for something. Specifically, people spell out words, so it will regurgitate to you how to spell strawberry, but it can’t count letters because that’s not a thing that language models do.

                Generative AI and LLMs are just giant reconstruction bots that take all the data they have and reconstruct something. That’s literally what they do.

                Like, without knowing what your answer is for assassin, I’ll assume the question was probably “How many asses are in assassin?” But, like, that’s a joke. Assassins only have one ass, just like the rest of us. And nobody would ever spell assassin as “assin”, so why would it learn that there are two asses in assassin?

                I’m confused where you are getting your information from, but this is not particularly special behavior.

    • GamingChairModel@lemmy.world · 21 points · 1 day ago

      The hot concept around the late 2000’s and early 2010’s was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.

      Monetizing that goodwill didn’t just ruin the look and feel of the sites: it permanently altered people’s willingness to participate in those communities. Some, of course, don’t mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.

      • rumba@lemmy.zip · 8 points · 1 day ago

        Probably explains why Quora started sending me multiple daily emails about shit I didn’t care about and removed the unsubscribe buttons from the emails.

        I don’t delete many accounts… but that was one of them

    • Gsus4@mander.xyzOP · 8 points · edited · 1 day ago

      What we’re all afraid of is that cheap slop is going to make Stack go broke/close/get bought/go private, and then it will be removed from the public domain… then they’ll jack up the price of islop when the alternative is gone…

      • NιƙƙιDιɱҽʂ@lemmy.world · 5 points · 1 day ago

        I do wonder, then, as new languages and tools are developed, how quickly AI models will be able to parrot information on their use if sources like Stack Overflow cease to exist.

        • Gsus4@mander.xyzOP · 3 points · 1 day ago

          I think this is a classic privatization of the commons, so that nobody can compete with them later without free public datasets…

        • rumba@lemmy.zip · 2 points · 1 day ago

          It’ll certainly be of lesser quality, even if they go through steps to make it able to address new tools.

          Good documentation and ported open projects might be enough to give you working code, but it’s not going to be able to optimize it without being trained on tons of optimization data.

  • melfie@lemy.lol · 75 points · edited · 1 day ago

    This is not because AI is good at answering programming questions accurately, it’s because SO sucks. The graph shows its growth leveling off around 2014 and then starting the decline around 2016, which isn’t even temporally correlated with LLMs.

    Sites like SO where experienced humans can give insightful answers to obscure programming questions are clearly still needed. Every time I ask AI a programming question about something obscure, it usually knows less than I do, and if I can’t find a post where another human had the same problem, I’m usually left to figure it out for myself.

    • vane@lemmy.world · 12 points · 1 day ago

      2016 is probably when they removed freedom by introducing aggressive moderation to remove duplicates and ban people.

      • skisnow@lemmy.ca · 4 points · 20 hours ago

        It was a toxic garbage heap way before 2016. I remember creating an account to try building karma there back in about 2011 when doing that was seen as a good way to land senior job roles. Gave up very quickly.

  • BackgrndNoize@lemmy.world · 41 points · edited · 1 day ago

    Even before AI, I stopped asking questions (or answering them, for that matter) on that website within the first few months of using it. It just wasn’t worth the hassle of dealing with the mods and the neckbeard-ass users, and I didn’t want my account to get suspended over some BS in case I really needed to ask an actual question in the future. Now I can’t remember the last time I’ve been to any Stack website, and it doesn’t show up in Google search results anymore. They dug their own grave.

    • Buddahriffic@lemmy.world · 13 points · 1 day ago

      I stopped using it once I found out their entire business model was basically copyright trolling: on a technicality, anyone who answers a question gives them the copyright to the answer, and they used code audits to go after businesses that had copy/pasted code. It left a bad taste in my mouth beyond just making me stop using it for work, even though I wasn’t copy/pasting code.

      And even before LLMs, I found ignoring stack exchange results for a search usually still got to the right information.

      But yeah, it also had a moderation problem. Give people a hammer of power and some will go searching for nails, and now you don’t have anywhere to hang things from, because the mod was dumber than the user they thought they needed to moderate. And now Google can figure out that my question is different from the supposed duplicate it was closed against, because search sends me to the closed one, not the tangentially related question the dumbass mod thought was the same thing. Similar energy to people who go to help forums and reply with useless shit like RTFM. They aren’t really upset at “having” to take time to respond; they’re excited about a chance to act superior to someone.

    • JackbyDev@programming.dev · 18 points · 1 day ago

      The humans of StackOverflow have been pricks for so long. If they fixed that problem years ago they would have been in a great position with the advent of AI. They could’ve marketed themselves as a site for humans. But no, fuckfacepoweruser found an answer to a different question he believes answers your question so marked your question as a duplicate and fuckfacerubberstamper voted to close it in the queue without critically thinking about it.

      • theolodis@feddit.org · 4 points · 22 hours ago

        I used to moderate and answer questions on SO, but stopped because at some point you see the 500th question about how to use some JavaScript function.

        Of course I flagged them all as duplicates and linked them to an extensive answer about the specific function, explaining all aspects and edge cases, because I don’t think there need to be 500 similar answers (who’s going to maintain them?).

        But yeah, sorry that I didn’t fix YOUR code sample, and you had to actually do your homework by yourself.

        • JackbyDev@programming.dev · 2 points · 17 hours ago

          My questions weren’t homework problems with 500 duplicates. Maybe that type of shit being the most common in the vote to close queue is why fuckfacerubberstamper can’t be bothered to actually think about what they’re closing as dupes.

      • ramjambamalam@lemmy.ca · 4 points · 1 day ago

        If the alternative is the cesspit that is Yahoo Answers and Quora, I’ll take the heavy-handed moderation of StackOverflow.

          • ramjambamalam@lemmy.ca · 1 point · 7 hours ago

            Of course there’s a middle ground; in my ideal world it’s much closer to StackOverflow than to Yahoo Answers or Quora.

            • JackbyDev@programming.dev · 2 points · 17 hours ago

              Like Lemmy? The site we’re all using?

              But no my point wasn’t about a specific site, it’s about the moderation approach. Do you really think there’s no middle ground in approach to moderation between Yahoo Answers and StackOverflow?

    • kazerniel@lemmy.world · 8 points · edited · 1 day ago

      Hear hear, it was the hostile atmosphere that pushed me away from Stack Exchange years before LLMs were a thing. That very clear impression that the site does not exist to help specific people, but a vague public audience, and that every question and answer is subordinated to that. Since then I just ask/answer questions on platforms like Lemmy, Reddit, Discord, or the Discourse forums run by various organisations; it’s a much more pleasant experience.

    • dgmib@lemmy.world · 4 points · 1 day ago

      The stupidest part is that their aggressive hostility against new questions means that the content is becoming dated. The answers to many, many questions will change as the tech evolves.

      And since AI’s ability to answer tech questions depends heavily on a similar question being in the training dataset, all the AIs are going to increasingly give outdated answers.

      They really have shot themselves in the foot for at best some short term gain.

    • THE_GR8_MIKE@lemmy.world · 1 point · 1 day ago

      This was my issue. The two times I posted real, actual questions that I needed help with, tried to provide as much detail as possible, and said I didn’t understand the subject, I got clowned on, immediately downvoted into the negative, and got no actual help whatsoever. Now I just hope someone else has had a similar issue.