Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.

In other words: please help us, use our AI.

  • kameecoding@lemmy.world
    2 hours ago

    I will try to have a balanced take here:

    The positives:

    • there are some uses for this “AI”
    • like an IDE, it can speed up development, especially for menial but important tasks such as unit test coverage.
    • it can be useful for rewording things into the puke-inducing corpo slang you sometimes have to use.
    • it is useful as a sort of better Google: for things that are documented but where reading the docs makes your head hurt, you can ask it to dumb them down, grasp the core concept, and go from there.

    The negatives:

    • the positives don’t justify the environmental externalities of all these AI companies
    • the positives don’t justify the PC hardware/silicon price hikes
    • shoehorning this into everything is capital R retarded.
    • AI is a fucking bubble keeping the US economy inflated instead of letting it crash like it should have a while ago
    • other than a paid product like Copilot, there is very little commercially viable use case for all this public cloud infrastructure beyond targeting you with more ads, ones you can’t block because they’re embedded in the text output.

    Overall, I wish the AI bubble would burst already.

    • ViatorOmnium@piefed.social
      1 hour ago

      menial tasks that are important such as unit test coverage

      This is one of the cases where AI is worse. LLMs generate tests based on how the code works, not how it is supposed to work. Granted, lots of mediocre engineers also use the “freeze the results” method for meaningless test coverage, but at least human beings have the ability to reflect on what the hell they are doing at some point.
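      A hypothetical sketch of that failure mode (the function and names here are made up for illustration): a test derived from the code's *current* behavior passes and freezes a bug in place, while the test the specification actually calls for would fail.

      ```python
      def last_n(items, n):
          """Intended: return the last n items. Bug: drops the final element."""
          return items[-n:-1]  # the correct slice would be items[-n:]

      # A "characterization" test generated from current behavior passes,
      # locking the bug into the test suite:
      assert last_n([1, 2, 3, 4], 2) == [3]

      # The test the specification calls for would expose the bug:
      # assert last_n([1, 2, 3, 4], 2) == [3, 4]  # AssertionError
      ```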

      • JoeBigelow@lemmy.ca
        1 hour ago

        I think machine learning has vast potential in this area, specifically things like running iterative tests in a laboratory or parsing very large data sets. But a fuckin LLM is not the solution. It makes a nice translation layer, so I don’t need to speak and understand bleep bloop and can tell it what I want in plain language. But beyond that, LLMs seem useless to me outside of fancy search uses. They should be the initial processing layer that figures out what type of actual AI (ML) to use to accomplish the task. I just want an automator I can direct in plain language; why is that not what’s happening? I know that I don’t know enough to have an opinion, but I do anyway!

    • rumba@lemmy.zip
      1 hour ago

      They f’d up with the electricity rates and hardware price hikes. Before that, they were getting away with it because not enough laymen were being inconvenienced.

    • Schal330@lemmy.world
      35 minutes ago

      it is useful as a sort of better google, like for things that are documented but reading the documentation makes your head hurt so you can ask it to dumb it down to get the core concept and go from there

      I agree with this point so much. I’m probably a real thicko, but being able to use it to explain concepts in a different way or provide analogies has been so helpful for my learning.

      I hate the environmental impact of AI use, and I hope we will see greater efficiencies in the near future so there is less resource consumption.

    • arendjr@programming.dev
      58 minutes ago

      So I’m the literal author of the Philosophy of Balance, and I don’t see any reason why LLMs are deserving of a balanced take.

      This is how the Philosophy of Balance works: We should strive…

      • for balance within ourselves
      • for balance with those around us
      • and ultimately, for balance with Life and the Universe at large

      But here’s the thing: LLMs and the technocratic elite funding them are a net negative to humanity and the world at large. Therefore, striving for a balanced approach to AI puts you on the wrong side of the battle for humanity, and therefore of human history.

      Pick a side.