I write about technology at theluddite.org

  • 1 Post
  • 3 Comments
Joined 1 year ago
Cake day: June 7th, 2023

  • That would be a really fun project! It almost reads like the setup for a homework problem for a class on chaos and nonlinear dynamics. I bet that as the model increasingly takes into account other people’s (supposed?) preferences, you get qualitative breaks in behavior.

    Stuff like this is why I come back to postmodernists like Baudrillard and Debord time and time again. These kinds of second- (or Nth-) order “news” are an artifact of the media’s constant and ever-accelerating commodification of reality. They just pile on more and more and more until we struggle to find reality through the sheer weight of its representations.
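    To make the "qualitative breaks" intuition concrete, here's a minimal, hypothetical sketch using the logistic map — a standard toy model from exactly the kind of chaos course mentioned above, chosen by me as an illustration rather than anything from the original comment. As one feedback parameter grows, the same simple update rule shifts from a stable fixed point, to oscillation, to aperiodic wandering:

```python
def iterate(r, x0=0.5, burn_in=500, keep=8):
    """Iterate the logistic map x -> r*x*(1-x) and return the last `keep` values
    after a burn-in period, rounded so distinct long-run behaviors are visible."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

# r = 2.5: settles to a single fixed point (1 - 1/r = 0.6)
# r = 3.2: period-2 oscillation between two values
# r = 3.9: aperiodic (chaotic) wandering
for r in (2.5, 3.2, 3.9):
    print(r, iterate(r))
```

The point is just that nothing gradual happens to the rule itself — only a parameter changes — yet the long-run behavior breaks qualitatively, which is the sort of thing one might expect as a model weights other people's supposed preferences more and more heavily.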


  • Really liked this articulation that someone shared with me recently:

    here’s something you need to know about polls and the media: we pay for polls so we can write stories about polls. We’re paying for a drumbeat to dance to. This isn’t to say polls are unscientific, or false, or misleading: they’re generally accurate, even if the content written around marginal noise tends to misrepresent them. It’s to remind you that when you’re reading about polls, you’re watching us hula hoop the ouroboros. Keep an eye out for poll guys boasting about their influence as much as their accuracy. That’s when you’ll know the rot has reached the root, not that there’s anything you can do about it.


  • This article is a mess. Brief summary of the argument:

    • AI relies on our collective data, therefore it should be collectively owned.
    • AI is going to transform our lives.
    • AI has meant a lot of things over the years. Today it mostly means LLMs.
    • The problems with AI are actually problems with capitalism.
    • Socialist AI could be democratically accountable, compensate the people whose data it uses, etc.
    • Socialists have always held that technology should be liberatory, and we should view AI the same way.
    • Some ideas for how to govern AI.

    I think that this argument is sloppily made, but I’m going to read it generously for the purposes of this comment and focus on my single biggest disagreement: It misunderstands why LLMs are such a big deal under capitalism, because it misunderstands the interplay between technology and power. There is no such thing as a technological revolution. Revolutions happen within human institutions, and technologies change what is possible in the ongoing and continuous renegotiation of power within them. LLMs appear useful because we live under capitalism, and we think about technology within a capitalist framework. Their primary use case is to allow capitalists to exert more power over labor.

    The author compares LLMs to machines in a factory, but machines produce things, and LLMs produce language. Most jobs involve producing language as a necessary byproduct of human collaboration. As a result, LLMs allow capitalists to discipline labor because they can “do” some enormous percentage of most jobs, if you think about human collaboration in the same way that you think about factories. The problem is that human language is not a modular widget that you can make with a machine. You can’t automate away the communication within human collaboration.

    So, I think the author makes a dangerous category error when they compare LLMs to factory machines. That is how capitalists want us to think of LLMs, because it allows them to wield LLMs as a threat to push wages down. That is their primary use case. Once you remove the capitalist/labor power dynamic, LLMs lose much of their appeal and become just another example of for-profit companies mining public goods for private profit. They’re not a particularly special case, so I don’t think they require special treatment in the way the author lays out, but I agree that companies shouldn’t be allowed to do that.

    I have a lot of other problems with this article, which can be found in my previous writing, if that interests you: