I came across this article in another Lemmy community that dislikes AI. I’m reposting instead of cross-posting so that we can have a conversation here about how “work” might be changing with advancements in technology.

The headline is clickbaity because Altman was referring to how farmers who lived decades ago might perceive that the work “you and I do today” (including Altman himself) doesn’t look like work.

The fact is that most of us work at many levels of abstraction from human survival. Very few of us are farming, building shelters, protecting our families from wildlife, or doing the back-breaking labor jobs that humans were forced to do generations ago.

In my first job, IT support, the irony was not lost on me that all day long I pushed buttons to make computers beep in friendlier ways. There was no physical result to see, no produce to harvest, no pile of wood transitioned from a natural to a chopped state, nothing tangible to step back and enjoy at the end of the day.

Bankers, fashion designers, artists, video game testers, software developers, and countless other professions experience something quite similar. Yet all of these jobs add value to the human experience in some way.

As humanity’s core needs have been met by technology requiring fewer human inputs, our focus has shifted to creating value in less tangible, but perhaps no less meaningful, ways. This has created a more dynamic and rich life experience than any of those farming generations could have imagined. So while it doesn’t look like the work those farmers were accustomed to, humanity has been able to turn its attention to other types of work for the benefit of many.

I postulate that AI - as we know it now - is merely another technological tool that will enable new layers of abstraction. At one time bookkeepers had to write in physical ledgers; now software automatically encodes accounting transactions as they’re made. At one time software developers might spend days setting up the framework of a new project; now an LLM can do the bulk of that work in minutes.

These days we have fewer bookkeepers; most companies don’t need armies of clerks anymore. But we now have more data analysts who work to understand the information and make important decisions. In the future we may need fewer coders, and in turn there will be many more software projects seeking to solve new problems in new ways.

How do I know this? I think history shows us that innovations in technology always bring new problems to solve. There is an endless reservoir of challenges that previous generations didn’t have time to think about. We are going to free minds from tasks that can be automated, and many of those minds will move on to the next level of abstraction.

At the end of the day, I suspect we humans are biologically wired with a deep desire to produce rewarding and meaningful work, yet many of the results of our abstracted work are hard to see and touch. Perhaps this is why I enjoy mowing my lawn so much, no matter how advanced robotic lawn mowers become.

  • MangoCats@feddit.it · 6 hours ago

    > your error/hallucination rate is like 1/10th of what I’d expect

    I’ve been using an AI assistant for the better part of a year, having AI write computer programs. When I tried it a year ago I laughed and walked away - it was useless. It has improved substantially in the past 3 months.

    > CONSTANTLY reinforcing fucking BASIC directives

    Yes, that is the “limited context window” - in my experience, people have it too.

    I have given my AIs basic workflows to follow for certain operations - simple 5-to-8-step processes - and they do them correctly about 19 times out of 20. But that 5% of the time they’ll be executing the same process and just skip a step, like many people tend to do as well.

    > but a human can learn

    In the past week I have been having my AIs “teach themselves” these workflows and priorities: prioritizing correctness over speed, respecting document hierarchies when deciding which side of a conflict needs to be edited, etc. It seems to be helping somewhat. I also had the AI research current best practices on context window management and apply them to my projects, and that seems to have helped a little too. But while I was typing this, my AI ran off and started implementing code based on old downstream specs that should have been updated to reflect top-level changes we had just made. I interrupted it and told it to go back and do it the right way, as its work instructions already tell it to. After the reminder it did it right: limited context window.

    The main problem I have with computer programming AIs is this: when you have a human work on a problem for a month, you drop by every day or two to see how it’s going, clarify, and course-correct. The AI does the equivalent work in an hour, and I just don’t have the bandwidth to keep up at that speed, so it gets just as far off in the weeds as a junior programmer would after a month alone, locked in a room and fed Jolt cola and Cheetos through a slot in the door.

    An interesting response I got from my AI recently regarding this phenomenon: it produced “training seminar” materials for our development team, telling them to proceed incrementally with the AI work and carefully review intermediate steps. Notably, my “work side” AI project, where I already do that, didn’t suggest it; my home side project, where I normally approve changes without review, is the one that suggested the training seminar.