Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an “edu” version of a model like ChatGPT or Gemini), usually in the name of preparing kids for the future, even though no consensus exists on what preparing them for the future actually means where AI is concerned.

Some educators argue that AI is not that different from previous cutting-edge technologies (like the personal computer and the smartphone), and that we need to push the “robots in front of the kids so they can learn to dance with them” (paraphrasing Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology that has experts and developers alike (including Sam Altman a couple of years ago) warning of the need for serious regulation to avoid potentially catastrophic consequences probably isn’t something we should take lightly. In very important ways, AI isn’t comparable to the technologies that came before it.

The kind of reasoning we’re hearing from educators in favor of AI adoption doesn’t offer solid arguments for rushing to include it broadly in virtually all classrooms rather than offering something like optional college courses in AI education for those interested. It also doesn’t sound like the sort of academic reasoning and rigorous vetting many of us would have expected of the institutions tasked with the important responsibility of educating our kids.

ChatGPT was released roughly three years ago. Anyone who uses AI generally recognizes that its actual usefulness is highly subjective. And as much as it might feel like it’s been around for a long time, three years is hardly enough time to get a firm grasp on what something that complex actually means for society or education. It’s a real stretch to say it has had enough time to establish its value as an educational tool, even if we had clear and consistent standards for its use, which we don’t. We’re still scrambling and debating over how we should be using it in general. We’re still in the AI wild west, untamed and largely lawless.

The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?

The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into the hands of customers while they’re young, potentially cultivating brand loyalty early. They get a wealth of highly valuable data on them. They may even get to experiment on them, as they have been caught doing before. And they reinforce the corporate narrative behind AI: that it should be everywhere, a part of everything we do.

While some may want to assume that these companies are doing this as some sort of public service, their track record reveals a consistent pattern of actions focused on market share, commodification, and the bottom line.

Meanwhile, educators are contending with documented problems in their classrooms, as many children seem to be performing worse and learning less.

The way people of all ages use AI has often been shown to encourage “offloading” thinking onto it, which is not far from the opposite of learning. Even before AI, test scores and other measures of student performance were already plummeting. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that, in their current form, may be more of an impediment to learning than an aid.

This approach has the potential to leave children even less prepared for the unique and accelerating challenges our world is presenting us with, challenges that will require the very critical thinking skills currently being eroded (in adults and children alike) by the technologies being pushed as learning tools.

This is one of the many crazy situations happening right now that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given my personal experiences and what I’ve heard from others. One quick look at the state of society today will tell you that even we adults are becoming increasingly unable to determine what’s real anymore, in large part thanks to the way our technologies are influencing our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.

I’m personally not against AI; I sometimes use open-source models, and I believe there is a place for it if it’s done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we’re hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, while we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?

    • Disillusionist@piefed.worldOP · 15 minutes ago

      This is also the kind of thing that scares me. I think people need to seriously consider that we’re bringing up the next wave of professionals who will be in all these critical roles. These are the stakes we’re gambling with.

  • manuallybreathing@lemmy.ml · 33 minutes ago

3 years of incremental advances, and in another 3 years? Easier access to tools you can abuse strangers online with; full self-driving is just five years away

  • iagomago@feddit.it · 2 hours ago

As a teacher in a school that has been quite aggressively pushing AI into our curriculum, I have to turn a blind eye to it when it comes to one simple aspect of education as a work environment: bureaucracy. Gemini has so far been a lifesaver for checking the accuracy of forms and for producing standardized, highly readable versions of tests and texts, assessment grids, and all of the menial shit we’re required to produce (and which detracts a substantial amount of time from the core of the job, which should be working with the kids).

    • UnderpantsWeevil@lemmy.world · 21 minutes ago

I mean, the bitter truth in all this is that the downsizing and resource-ratcheting of public schools created an enormous labor crisis well before the introduction of AI. Teachers were swamped with prep work for classes, they were expected to juggle multiple subjects of expertise at once, and they were simultaneously educator and disciplinarian for class sizes that kept mushrooming with budget cuts. Students are subject to increasingly draconian punishments that keep them out of class longer, resulting in poorer outcomes at schools with harsher discipline. And schools use influxes of young new teachers to keep wages low, at the expense of experience.

      These tools take the pressure off people who have been in a cooker since the Bush 43 administration and the original NCLB school privatization campaign. AI in schools as a tool to bulk process busy work is a symptom of a deeper problem. Kids and teachers coordinating cheating campaigns to meet arbitrary creeping metrics set by conservative bureaucrats are symptoms of a deeper problem. The education system as we know it is shifting towards a much more rigid and ideologically doctrinaire institution, and the endless testing + AI schooling are tools utilized by the state to accomplish the transformation.

Simply saying “No AI in Schools” does nothing to address the massive workload foisted on faculty. It does nothing to address how Teach-the-Test has taken over the educational philosophy of public schooling. And it does nothing to shrink class sizes, to retain professional teachers for the length of their careers (rather than firing older teachers to keep salaries low), or to maximize student attendance rates - the three most empirically proven ways to maximize educational quality.

      AI is a crutch for a broken system. Kicking the crutch out doesn’t fix the system.

  • SharkStudiosSK@lemmy.draktis.com · 4 hours ago

This may be an unpopular opinion, but in my class today even the teacher was using AI… to prepare the entire lecture. Now, I believe that learning material should be prepared by the teacher, not some AI. Honestly, I see everybody using AI today to make the learning material, and then the students use AI to solve the assignments. The way the world is heading, everybody will just kinda “represent” an AI, not even think for themselves. Like, sure, use AI to find information quickly or something, but don’t depend on it entirely.

    • u/lukmly013 💾 (lemmy.sdf.org)@lemmy.sdf.org · 49 minutes ago

I asked a lecturer a question; I think it was about what happens when you bit-shift signed integers.
He asked an LLM and read out the answer.
Similarly, he asked an LLM how to determine the memory size allocated by malloc. It said that wasn’t possible, and that was the answer. But a 2009 answer from Stack Overflow begged to differ.
At least he actually tried it out when I told him.
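
      (For the curious: a minimal C sketch of both points, assuming glibc. Right-shifting a negative signed int is implementation-defined in C, and malloc_usable_size() is the glibc-specific call that old Stack Overflow answers usually point to; which exact answer he found is my assumption.)

      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <malloc.h>  /* glibc-specific: malloc_usable_size() */

      int main(void) {
          /* Right-shifting a negative signed int is implementation-defined;
             most compilers do an arithmetic shift that preserves the sign bit. */
          int x = -8;
          printf("-8 >> 1 = %d\n", x >> 1);  /* typically -4, but not portable */

          /* Standard C has no way to ask how much memory malloc() handed you,
             but glibc exposes it, which is what the 2009-era answers point at. */
          void *p = malloc(100);
          if (p != NULL) {
              printf("usable size: %zu bytes\n", malloc_usable_size(p));  /* >= 100 */
              free(p);
          }
          return 0;
      }
      ```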

But at this point I’ve even had my father send me LLM-written slop that was clearly bullshit (made-up information about a non-existent internal system at our college), which he probably didn’t even read, since he copied everything, including the “AI answers may be inaccurate” disclaimer.

  • termaxima@slrpnk.net · 4 hours ago

Children don’t yet have the maturity, the self-control, or the technical knowledge required to actually use AI to learn.

You need to know how to search the web the regular way, and how to phrase questions so the AI explains things rather than just giving you the solution. You also need the self-restraint to only use it to teach you, never to do things for you, and the patience to think about the problem yourself first, only then search the regular web, and only then ask the AI to clarify the few things you still don’t get.

Many adults are already letting the chatbots de-skill them; I don’t trust children to do any better.

    • Jakeroxs@sh.itjust.works · 44 minutes ago

My experience is that most adults don’t know how to search the internet for information either, lol.

Also, I haven’t been in school for over a decade at this point, but the internet was ubiquitous and they didn’t teach shit about it; the adjacent classes (like game design) were run by a coach who barely knew how to work the Macs we were forced to use.

Nor was critical thinking an important part of the teaching process. Very rarely was the “why” explained; they were just trying to get through all the material required to prepare you for the state tests that determine whether you move on to the next grade.

    • LwL@lemmy.world · 2 hours ago

I wonder, though, whether this might be exactly the right setting to teach them. When there’s actually someone to tell them “sorry, that AI answer is bullshit,” they can learn to use it as a resource rather than an answer provider. Adults fail at it, but they also don’t have a teacher (and kids aren’t stupid, just inexperienced).

  • khaleer@sopuli.xyz · 4 hours ago

Idk, this comment section is like a generated thread of people going “well, maybe it’s the beginning of the digital age and the smart management of people.” What the fuck did you take today? It’s made to exploit everything and everyone, not to make your life better, lmao

  • ZILtoid1991@lemmy.world · 10 hours ago

The very same people who called me stupid for thinking typing would be a more important skill than “pretty writing” now think art education is obsolete, because you can just ask a machine for an image.

AI stands for “anti-intellectualism”.

    • termaxima@slrpnk.net · 4 hours ago

      It seems writing things by hand is better for memorization, and it certainly feels more personal and versatile in presentation.

      I write lots of things by hand. Having physical papers is helpful, I find, to see lots of things at once, reorganise, etc. I also like being able to highlight, draw on things, structure documents non-linearly…

      I’m a computer scientist, so I do value typing immensely too. But I find it too constraining for many reasoning tasks, especially for learning or creativity.

    • dukemirage@lemmy.world · 6 hours ago

      Handwriting’s still important. “Pretty” usually means legible, too, and the point of art education is not to be able to confidently produce pictures.

        • dukemirage@lemmy.world · 5 hours ago

          never left a note for someone in that time? a few quick thoughts before you forget? a list, some date, when a device is out of reach/battery/service?

    • Disillusionist@piefed.worldOP · 9 hours ago

      One of Big Tech’s pitches about AI is the “great equalizer” idea. It reminds me of their pitch about social media being the “great democratizer”. Now we’ve got algorithms, disinformation, deepfakes, and people telling machines to think for them and potentially also their kids.

  • Jankatarch@lemmy.world · 9 hours ago

Schools generally buy anything Microsoft offers with the little budget they have.

This time it’s messed up, though. Allowing chatbots in schools will hurt education more than the entire pandemic did, and the effects only get worse each year.

Why did any school higher-ups pay to implement these? A hint of “screw you, I got mine” is the only explanation I can think of.

    • UnderpantsWeevil@lemmy.world · 3 minutes ago

      Schools generally buy anything Microsoft offers with the little budget they have.

      Far more Pearson than Microsoft. The “teach to the test” regime is all about selling schools test prep material that effectively tells you the answers to the next round of Pearson-written standardized exams. I’m sure Pearson is eagerly integrating with Microsoft AI tools, so they can cut their own internal staffing and roll out more profitable digital variations of their material.

      But schools pay top dollar for these resources because state administrators use exam scores as a benchmark for school funding. So the $10M you pay for test prep material may determine the next $50M in funding your school receives, relative to the poorer districts that couldn’t afford to buy answers in advance.

      Why did any school higher-ups pay to implement these?

      Tons of kickbacks to high ranking administrators, double-dealing with teachers being contracted or poached by Pearson for test-writing gigs, state administrators moving between jobs in the school board/legislature and positions within Pearson, people with stock and other debt instruments that profit when Pearson does well…

      FFS, the Houston ISD takeover by the State of Texas ended with a Colorado private school management guy sending tens of millions of dollars from the Houston public schools to pay consulting fees to Colorado private school agencies. That’s as corrupt as it comes.

  • PierceTheBubble@lemmy.ml · 11 hours ago

It becomes more apparent to me every day that we might be headed towards a society dynamically managed by digital systems; a “smart society”, or rather a Society-as-a-Service. This seems to be the logical conclusion if you continue the line from “smart buildings” to “smart cities”. With IoT sensors and unified digital platforms, data is continuously gathered on the population to be analyzed, its extractions stored indefinitely (in pseudonymized form) by the many data centers currently being constructed. That data is then used to dynamically adapt the system, replacing the “inefficient” democratic process and public services as a whole. Of course, the open-source (too optimistic?) model used is free of any bias; however, nobody has access to the resources required to verify that claim. But given that Big Tech has historically never shown any signs of tyranny, a utopian outcome can safely be assumed… Or I might simply be a nut, with a brain making nonsensical connections that have no basis in reality.

    • killabeezio@lemmy.world · 4 hours ago

Recently I had to lay someone off because they just weren’t producing the work that needed to be done, even on the simplest of tasks.

I would say, “we need to remove/delete these things.” That’s it. It took some time because you had to do some comparison and research, but it proved to be a super difficult task for them.

I would then give them something more technical, like “write this script,” and the result was mostly OK, much better than their work on the simple tasks.

Then I would get AI slop, and I would ask, “WTF are you thinking here? Why are you doing this?” They couldn’t give a good answer because they didn’t actually do the work. They would just have LLMs do all their work for them, and if anything required any sort of thinking, they would fail miserably.

Even in simple PR reviews, I would leave at least ten comments and go back and forth. It got to the point where it was just easier to do it myself. I tried to mentor them and guide them along, but it just wasn’t getting through.

I don’t mind the use of LLMs, but use them as a tool, not a crutch. You should be able to produce the thing you’re asking the LLM to produce for you.

      • JeeBaiChow@lemmy.world · 4 hours ago

Same. My guy couldn’t authenticate a user against a password hash, even after I gave him the source code. It’s like copying homework: you just shoot yourself in the foot for later.
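
        (For anyone curious what that task roughly looks like: a minimal C sketch assuming glibc’s crypt(), built with -lcrypt. The salt and password are made up for illustration, and this is just the general shape of the problem, not his actual code.)

        ```c
        #define _GNU_SOURCE
        #include <crypt.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* crypt() re-hashes the attempt using the salt and parameters embedded
           in the stored hash; if the outputs match, the password is correct. */
        static int check_password(const char *attempt, const char *stored_hash) {
            char *hashed = crypt(attempt, stored_hash);
            return hashed != NULL && strcmp(hashed, stored_hash) == 0;
        }

        int main(void) {
            /* Simulate a stored hash (SHA-512 crypt; the salt is made up).
               crypt() returns a static buffer, so copy it before calling again. */
            char *stored = strdup(crypt("hunter2", "$6$examplesalt$"));

            printf("right password: %s\n", check_password("hunter2", stored) ? "ok" : "denied");
            printf("wrong password: %s\n", check_password("letmein", stored) ? "ok" : "denied");

            free(stored);
            return 0;
        }
        ```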

      • Jankatarch@lemmy.world · 9 hours ago

There is a funny two-way filtering going on here.

Job applications are auto-rejected unless they go on about how “AI will reshape the future and I am so excited,” as if it’s LinkedIn.

Then the engineers who do the interviews want people interested in learning about computers through years of hard work and experience?

It just doesn’t work out.

        • JeeBaiChow@lemmy.world · 4 hours ago

Problem is, people are choosing careers based on how much they pay, instead of on what they want to do or are passionate about. It’s rare nowadays to see candidates who also have hobby work or side projects related to the job. At least by my reckoning.

          • Bronzebeard@lemmy.zip · 3 hours ago

Problem is, most jobs don’t pay enough anymore, so people don’t have the luxury of picking what they’re passionate about; they have bills to pay. The minimum wage hasn’t been raised in 16 years. It wasn’t enough 16 years ago, and it now buys only 60% of what it did back then. This is the floor all other wages are based on; if the floor doesn’t rise, the wages above it won’t keep up either.

      • JeeBaiChow@lemmy.world · 15 hours ago

The seniors can tell. And even if you make it into the job, it’ll be pretty obvious within the first couple of days.

        • kescusay@lemmy.world · 14 hours ago

          I interview juniors regularly. I can’t wait until the first time I interview a “vibe coder” who thinks they’re a developer, but can’t even tell me what a race condition is or the difference between synchronous and asynchronous execution.

          That’s going to be a red letter day, lemme tell ya.
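
          (If you’re wondering, here’s about the smallest C demonstration of a race condition I can write: two threads bump a shared counter with no lock, and increments get lost.)

          ```c
          #include <pthread.h>
          #include <stdio.h>

          /* Two threads increment a shared counter with no synchronization.
             Each ++ is really load/add/store, so concurrent increments can
             interleave and overwrite each other: the total comes up short. */
          static long counter = 0;

          static void *bump(void *arg) {
              (void)arg;
              for (int i = 0; i < 1000000; i++)
                  counter++;  /* the race: not atomic */
              return NULL;
          }

          int main(void) {
              pthread_t a, b;  /* build with: cc -pthread race.c */
              pthread_create(&a, NULL, bump, NULL);
              pthread_create(&b, NULL, bump, NULL);
              pthread_join(a, NULL);
              pthread_join(b, NULL);
              printf("counter = %ld (expected 2000000)\n", counter);
              return 0;
          }
          ```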

  • undrwater@lemmy.world · 16 hours ago

    I spent some years in classrooms as a service provider when Wikipedia was all the rage. Most districts had a “no Wikipedia” policy, and required primary sources.

My kids just graduated high school, and they were told NOT to use LLMs (though some of their teachers would wink). Their current college professors use LLM-detection software.

AI and Wikipedia are not the same, though. Students are better off with Wikipedia, as they MIGHT read the references.

    Still, those students who WANT to learn will not be held back by AI.

    • Avid Amoeba@lemmy.ca · 10 hours ago

      Still, those students who WANT to learn will not be held back by AI.

      Our society probably won’t survive if only the students who want to learn do so. 😔

    • Otter@lemmy.ca · 14 hours ago

      I always saw the rules against Wikipedia to be around citations (and accuracy in the early years), rather than it harming learning. It’s not that different from other tertiary sources like textbooks or encyclopedias. It’s good for learning a topic and the interacting pieces, but you need to then search for primary/secondary sources relevant to the topic you are writing about.

      Generative AI, however:

      • is a text prediction engine that often generates made up info, and then students learn things wrong
      • does the writing for the students, so they don’t actually have to read or understand anything
      • Bronzebeard@lemmy.zip · 3 hours ago

Encyclopedias in general are not good sources; they’re too surface-level. Wikipedia is a bad source because it’s an encyclopedia, not because it’s crowdsourced.

      • prole@lemmy.blahaj.zone · 3 hours ago

        You don’t even need to search, just scroll down to the “references” section and read/cite them instead.

        • Disillusionist@piefed.worldOP · 10 hours ago

I see these as problems too. If you (as a teacher) put an answer machine in the hands of a student, it essentially tells that student that they’re supposed to use it. You can go out of your way to emphasize that they’re expected to use it the “right way” (a strange thing to try to sell students on, since there aren’t consistent standards for how it should be used), but we’ve already seen that students (and adults) often take the quickest route to the goal, which tends to mean letting the AI do the heavy lifting.

    • Disillusionist@piefed.worldOP · 8 hours ago

      Great to get the perspective of someone who was in education.

      Still, those students who WANT to learn will not be held back by AI.

I think that’s a valid point, but I’m afraid it’s making it harder to choose to learn the “old, hard way,” and I’d imagine fewer students will decide to make that choice.

    • adr1an@programming.dev · 6 hours ago

      This. (Offline too.)

Which generation did we really teach critical thinking to? In general, those “thinkers,” or people with good research skills (e.g., reading comprehension and other traits), were always a minority within each generation. And I agree there will be fewer now, with AI. But we have no polls or measurements, so the title reads a little clickbaity, in resonance with the generalized discomfort towards a new technology that schools haven’t accommodated yet (all kinds of solutions are seen in the wild).

I reckon it was the same with arithmetic and calculators in the past. We were able to deal with that! (So the proportion of people graduating with arithmetic skills didn’t shrink “too much” with each generation.)

      If we are considering possible scenarios, let’s be optimistic too.

AI (discounting other problems, like its ecological footprint) may not be that bad for our educational systems once we adjust…