Solution:
Warn the students throughout the year “If you use AI all year for your schoolwork, you will fail the exams every month or two”. Teach AI literacy, how it works, and the dangers of not learning the material and how it’s a snowball effect. Teach the “why” it’s important to use your brain to solve the problem, and state that they should be using their own words/math/etc. Students suspected of using AI should have a note sent to their parents early on to hopefully correct their path early.
Exams are paper and pencil only. No computers. No phones. No smart glasses. When students are getting an A on their homework and flunking their exams, it’ll be pretty obvious. Even a student who has anxiety about exams can get a C.
My wife is a teacher, she has shown me vibed handed in assignments abd its incredibly obvious.
Right off the bat, if she gives an assignment to make, say, a slideshow on “Topic” and they talk about a examples A, B, and C in class, and the assignment goes off on tangents about topics F, G, and H instead, it’s an instant red flag.
This happens cuz the student just copy paste the assignment blurb into gpt, but gpt has no context for what was discussed in class… so it goes off the rails instantly.
Its also easy to include poison pills in the middle of an assignment if they copy paste it straight into gpt.
Also theres all the usual markers. Emoji, em dash, and the assignment having way higher verbosity than you know damn well the kid has the vocabulary for. Suddenly they’re speaking at a grade 7~8 levels higher than usual? Uh huh. .
From her and her teacher friends, Ive been told its extremely obvious to spot still. And its pretty trivial to setup the assignment to poison pill the AI.
What if the kid lies and says they didn’t use AI? How successful have they been in convincing admin and parents of their ai usage? While I agree its all damning, its still circumstantial evidence.
Back in the day just one instance of plagerism was very serious. If you got caught doing it more than once you could get expelled.
Now apparently everyone is using the plagerism machine including the professors. So much for academic integrity.
I had a sustainability class where the professor used AI to write the course syllabus, assignments, and feedback. a fucking sustainability class.
I contacted the office of the president about it at my university but nothing ever happened of it. academia in general has gone off the rails with AI recently. I used to assume those with doctorates we’re bright enough to avoid AI but evidently that’s not the case.
Never underestimate how fucking lazy humans can be.
I don’t teach kids, so I don’t know the answer to this, but I imagine what you’d do is add guidelines to the assignment that cause them to either lose significant points or fail if they don’t specifically mention things discussed in assignments and the classroom.
I’d also like to point out that, yes, we know when kids and adults lazily insert a prompt and lazily paste its response, but anybody with half a brain knows they only need to spend an extra 15 minutes re-prompting and editing it to make it nearly unnoticeable.
The answer is probably to test them in person with no computer of any kind in front of them.
Amateurs, everyone knows you record classroom discussion, translate it to text, and then feed it into the LLM for context.
I was unable to get Mistral’s AI to output an emoji recently. They forbid it in the system prompt and it wouldn’t give one out for a pretend life or death situation.
Ah. Good old days in the 80’s when teacher didn’t even read what you wrote. Grade was given according who you were, did the teacher like you and what your previous grades were. No sudden inspirations to do better.
Replace “teacher” with “management” and you’re describing every workplace.
This happened to me. I was a pretty good kid; brainy too, but one history test kicked my ass. The teacher was the husband of my grandfather’s sister. A distant family cut me a break.
This phenomenon can continue into adulthood too.
However, after a certain point it’s not about the grades you make. It’s about the hands you shake.
Sure as hell ain’t my students; it’s been a steady decline since ChatGPT came out and I think I may have failed more students than ever over unfinished projects. You can’t GPT the semester long project, there’s a paper trail and data to collect can it becomes super clear who is AI brained now…
Edit: PS, grade inflation has been a thing for a few decades now, btw; the As aren’t the problem so much as the mush brain.
You are unfair, you don’t let them use ChatGPT during tests 😉
They’re not grading ChatGPT performance either
Isn’t that how it is suppose to work?
At the end of class you get a grade.
‘A’ grades are suddenly everywhere, as introduction of stochastic parrot as a service reveals the education system is geared towards training parrots instead of teaching humans. What a surprise.
Education needs to change. Including punishment for using LLMs.
I dunno, they’re here to stay. Cat’s out of the box. Educators and education need to adapt. In person assessment is probably the ideal way to gauge progress and learning, but due to resources I don’t see it being practical.
Except the whole point of education is to LEARN how to do it without these tools. If you’re just turning your brain off and handing in the output, you are literally missing the point.
It’s like using calculators on steroids. There are times to use calculators and times to force mental math. You can teach kids AI literacy and usage habits, but letting them just use no thinking makes the entire exercise pointless. We might as well close schools, because having the AI generated your math homework or essay is fucking pointless.
Cat’s out of the box.
The monkey’s out of the bottle.
Deserved. 🤣
they’re here to stay. Cat’s out of the box.
People keep saying this as though it’s true. The odds that this current era of free and ubiquitous access to these frontier LLMs lasts forever are pretty slim.
How do you figure? There are open source self host able solutions right now.
Already, very few middle schoolers have the tech savvy to self-host anything. If it’s not a tablet, they have trouble using it.
Add to that the possibility that the data center run on memory and processors could mean that local computing power will disappear, to be replaced with devices like Chromebooks that use corporate cloud services for everything.
You can’t run anything like a frontier model on a self hosted solution. To get anywhere close you’d have to spend thousands of dollars on hardware which obviously isn’t free, or even a viable solution for the vast majority of people, let alone these students. And the quality of output you’d get from a model running on off the shelf consumer hardware like a MacBook is much more noticeably AI generated and trivial for AI detection tools to flag.
Before you can punish for using LLMs, you need to be able to reliably detect the use of LLMs, including guarding against false positives.
Current AI checkers are woefully inadequate and prone to errors.
A teacher I know says it is easy to determine if a student wrote their paper if you interview them about it. You’re right that automated methods are risky.
That’s it. As a teacher who has been dealing with this in the last 2-3 years, the only reliable way I have found is to do short interviews.
Students hand in their work, I grade it, then I ask them verbally a few easy questions about what they mean in specific sections of their work. How they score on these questions is used as a coefficient that I apply on the grade to get the final score.
So they can use LLMs, but they have to understand its output.
Before you can punish for using LLMs, you need to be able to reliably detect the use of LLMs, including guarding against false positives.
You can tell they’re using an LLM if they have a computer out during the pen-and-paper test.
How is that allowed?
Hell, back in my day, teachers were even very picky about what kind of calculator you could use. And if it was a graphing calculator, you had to show them yourself wiping the memory at the beginning of the test.
(Except for one algerbra teacher, who was really cool about it. He’d allow custom programs to stay on the calculator if you programmed it yourself. On the theory that if you can write a computer program that reliably solves these math problems, then you must have a very good understanding of how to solve these math problems. And, yes, I was one of the few kids who actually did that. Ah, writing my own custom software for the TI-83 on the TI-83, because that seemed easier than actually doing the math problems by hand … good times.)
Not US, but there’s a tendency of focusing more on the work during the semester than in the exam itself
LLMs are going to be a massive headache for me when they get older
We are allowing LLMs for all of our homeworks. As long as you can solve the problems in the indicated way with a reasonable answer.
In case you are not sure about the “indicated way”, there are practice questions with detailed step-by-step solutions for each hw problem that you just have to change the numbers/equations a bit and you’ll get points.
What we’ve noticed is that the year-after-year averages are significantly higher, especially this year. However, students are bringing in details that we explicitly didn’t go over in lecture and putting that on the homework (e.g. Delayed branching in Computer Architecture, because it’s a random quirk of MIPS that even assembly programmers don’t have to deal with). None of these details are ever mentioned in lecture or the practice homeworks (in a few cases, they are mentioned with the explicit wording “do not worry about this now”)
We can only assume people are copying the homework into LLMs and copying the results straight down. The latest exam had a question where students were asked to analyze a specific chunk of assembly code to deduce certain properties about it. Approximately 20-30% of the students didn’t know the FORMAT to answer it, despite it literally being item 1 on last week’s homework.
And when I say format, I don’t mean exactly “you must write these exact words or you lose points”. It’s literally just point out “line A and B have this property X because of attribute Y”. Just including ABXY as shown in the practice homework is enough. But apparently people are too lazy to read a 10 bullet point answer…
Then those people will go and fucking vote. Fucking hell
Let’s be honest, with their attendance rate in class, I don’t think these students actually vote…
and they’ll be assured they’re deserving of a tech sector job while everyone else is already losing their everything.
Then why issue homework at all?
Because the goal is to get people to learn/think about something. We don’t care what you use as long as you retain knowledge taught in the course. If what helps you learn is LLMs, then go for it.
Problem right now is there is a significant amount of people that are using these tools to do the thinking for them. And this is when Office Hours, Homework feedback, Email (I guarantee all students emails are responded to within 24hrs. Most are handled within 30 minutes) are all available and paid for (by tuition). I am even happy to schedule one-on-ones if privacy is a concern, but none of this is being utilized.
As someone who works in ed tech these days, I’m kind of down for them as a study tool. For example, synthesizing notes and turning them into flashcards, practice tests, etc. I find that stuff to be suuuper handy if I’m trying to learn something.
But for cheating, yah, fuck that noise. A lot of classes are moving back to pencil and paper because of this, and I totally support that.
I feel like synthesising notes and turning them into flash cards how i learn things.
Exactly. Taking notes in class during a lecture. Copying something the instructor wrote on the board. This is all part of the learning process. The act of doing these things helps you learn.
The only skills or learnings I really seem to have retained from University are the ability to collect, and collate information and then apply it to a problem. The actual information collected and problems solved are lost to me now.
The good students are still getting A grades naturally. And the bad students are getting A grades with ChatGPT. A grades for everybody! (Until we get to the closed-book, in-person test at the end of the year…)
Surely this won’t cause any problems at all
Shouldn’t the education system change then? If it is easy to get an A with a machine, should we then not focus on learning something that can’t be done by a machine. I mean, it has value to know things and know what to do insituations where AI is not available.
so your answer is to regulate education? one of the most heavily regulated industries, education, you want to further regulate because of the influence of a brand new unregulated industry.

Are you suggesting that kids should stop learning basic arithmetic?









