• 11 Posts
  • 356 Comments
Joined 2 years ago
Cake day: June 2nd, 2023




  • Dave@lemmy.nz to Lemmy Shitpost@lemmy.world · exam cheating
    1 point · edited 2 days ago

I guess I don’t really understand where they might fit into an emergency room scenario. My experience with Nurse Practitioners is that they can take on some basic GP tasks to lighten the load.

For example, one of my kids has asthma and uses a regular inhaler. Instead of taking up a doctor’s time, they can book you in with the Nurse Practitioner to get a new prescription. That makes sense to me.

I do see that here, Nurse Practitioners are given a much wider scope, including being able to assess test results and make a diagnosis, though I didn’t read all the material thoroughly (there’s heaps of info in the downloads on the right of this page, in case someone reading is interested).

It does say they need 4 years of experience and 300 hours of clinical learning (from what I can tell, once they decide they want to be a Nurse Practitioner, they enter a programme of focused learning in a clinical setting). That at least seems like more than what’s required where the other user lives, so I feel a little better. Even if 300 hours is only about 7 weeks, at least they need those 4 years of experience 😅





  • Dave@lemmy.nz to Lemmy Shitpost@lemmy.world · exam cheating
    2 points · edited 4 days ago

I think we might overestimate how qualified a junior doctor is after passing all the exams. This article (from 2009, well before LLMs) says junior doctors make errors in 8% of the prescriptions they write, with half of the mistakes “potentially significant”. That’s after any supervising doctor has had a chance to review them; it says pharmacists generally save the day by spotting the errors.

I also found local numbers showing about 16% of junior doctors never make it through training (the article says it’s actually 40% now, but 16% is their “normal”). That will include burnout and other reasons for not continuing, but with such a large proportion dropping out, the ones who passed their exams without really absorbing the material are presumably a big part of that group. Even if LLMs have increased the pool of such people, I doubt we can assume they make it through training without learning what they need to know. Becoming a doctor is just so intense that it doesn’t seem likely.

As someone else has pointed out, our concern should probably lie with those who pass exams and then go on to do medical (or other) roles without any supervision period.


  • I don’t think I’ve ever been to a Nurse Practitioner without knowing exactly what the outcome would be, and realistically that does take a lot of burden off doctors so long as they correctly recognise what they should and shouldn’t do.

    I expect that rules will catch up with the existence of LLMs, the problem is for those few generations that have to live through the transition period…





  • Dave@lemmy.nz to Lemmy Shitpost@lemmy.world · exam cheating
    5 points · edited 5 days ago

Doctors spend months or years being supervised. If a doctor cheated on one test, then maybe it would slip through, but I see that as no different from simply forgetting some part of their learning from years ago, which surely happens.

If a doctor cheated on every exam, their supervisor would notice really quickly.


  • Dave@lemmy.nz to Lemmy Shitpost@lemmy.world · exam cheating
    21 points · edited 5 days ago

Ah, this is a different risk from the one I thought was being implied.

This is saying that if a doctor relies on AI to assess results, they lose the skill of spotting problems themselves.

    Honestly this could go either way. Maybe it’s bad, but if machine learning can outperform doctors, then it could just be a “you won’t be carrying a calculator around with you your whole life” type situation.

ETA: there’s a book, Noise: A Flaw in Human Judgment, that details how wherever you have human judgement you get a wide range of results for the same thing, and generally that’s bad. If machine learning is more consistent, the standard of care is likely to rise on average.