• MoogleMaestro@lemmy.zip
    17 hours ago

    One issue I have with the EFF currently is that I personally believe that section 230 probably shouldn’t protect algorithmic (as in, personalized or AI based) recommendation systems that go beyond rudimentary data sorting unless the training data for these systems are open source (and, therefore, auditable by lawyers or the government).

    My only reason for mentioning this is to bounce off a discussion Jon Stewart had with EFF rep Cindy Cohn, which turned into a debate over what limits could be placed on recommendation systems. The interview is fascinating, so you should watch it regardless of which side you take.

    However, based on that interview, the EFF seems to think that recent legal attacks on Facebook's ability to target vulnerable teens with ads or content that encourages self-shaming should be covered under section 230, arguing that the California lawsuit was a step in the wrong direction for Internet rights. I disagree with this assessment, and I think that, ultimately, putting restrictions on AI recommendation systems would actually help correct injustice in the current social media landscape, for two key reasons.

    1 - Recommendations are essentially endorsements. If Facebook pushes political posts in your direction, if Twitter feeds you Elon Musk posts, or if YouTube recommends videos based on what you've liked, they are effectively endorsing that content in a way that exceeds the user agency and discoverability that section 230 is supposed to protect. In other words, I have a hard time feeling sympathy for social media companies that use complex advertising data they purchase (or sell) to push posts they presume will keep users engaged, and I think that goes beyond section 230's assumptions about UGC and how it is displayed. In contrast, a user subscribing to content from another user or community and seeing it via a simple sorting algorithm should be covered by section 230, because there's no blatant endorsement of the content; you're simply showing users what they requested to see, in an order derived from a chronological, "likes per second", or other quantifiable metric.
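    The "simple sorting algorithm" distinction above can be sketched in a few lines of Python (the post data and field names are hypothetical, not any real API; the point is that both orderings are user-visible rules with no per-user modeling):

```python
# A minimal sketch of "rudimentary data sorting" on disclosed metrics.
# Post records and field names here are hypothetical illustrations.
posts = [
    {"id": 1, "likes": 40, "age_hours": 2.0},
    {"id": 2, "likes": 90, "age_hours": 10.0},
    {"id": 3, "likes": 5, "age_hours": 0.5},
]

def chronological(feed):
    # Transparent ordering: newest first, no personalization involved.
    return sorted(feed, key=lambda p: p["age_hours"])

def likes_per_hour(feed):
    # Transparent ordering on a disclosed, quantifiable metric.
    return sorted(feed, key=lambda p: p["likes"] / p["age_hours"], reverse=True)

print([p["id"] for p in chronological(posts)])   # [3, 1, 2]
print([p["id"] for p in likes_per_hour(posts)])  # [1, 3, 2]
```

    A personalized recommender, by contrast, would rank each user's feed with a model trained on behavioral or purchased advertising data, which is exactly the opaque step the comment argues falls outside what section 230 was meant to cover.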

    2 - Algorithmic recommendations are lopsided and favor big data aggregators. This is something we deal with often when explaining the fediverse to non-technical people: big platforms get the benefit of hawking "assumptive" content to users thanks to their scale and their ability to buy advertising data on individuals, which lets them architect an illusion of "activity" that does not reflect real-time activity. You see this on Reddit and the like, where the opaque nature of infinite feeds makes it appear that conversations are always ongoing and the discussion never ends, which effectively locks in user engagement. I believe these practices are enabled by the protections of section 230: feeding a user into a negative (or toxically positive) feedback loop is rewarded in the market, while more honest services (like Lemmy or Mastodon) that refuse to engage in this behavior cannot compete, both because of the scale of data required (you need a lot of data to guess what people want to see) and because they won't capture user attention in a similarly hostile way. In other words, I don't believe it's true that section 230 protects small-internet and big-internet alike.

    Granted, this isn't to say that section 230 should be overturned. But I think an honest discussion about what level of protection it grants, and amending it to make big tech companies a bit more wary of pushing questionable policies, would actually help the open web compete against the big-data web.

    • schnurrito@discuss.tchncs.deOP
      11 hours ago

      I’ve expressed the same opinion a few times before. I’ve also read (even shared) this article that argues the opposite: https://www.techdirt.com/2026/02/23/yes-section-230-should-apply-equally-to-algorithmic-recommendations/

      It’s something that needs careful thought and consideration, which isn’t something I trust current politicians to do. :(

      arguing that the California lawsuit was a step in the wrong direction for Internet rights

      Do you mean the one where it was found that there was liability for being “addictive”? (And wasn’t that in New Mexico, not California, or were there two of them, I don’t remember?) If so, then I definitely disagree with you on that, that was a bad decision. I shouldn’t be allowed to collect damages from my fediverse instance admin because I like to hang out here too much, which is basically what was decided there.