

Also tab to autocomplete.
The command line looks like a lot of typing, but with ctrl+r and tab I barely type anything.
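For anyone who hasn't tweaked it, Tab completion behaviour is configurable through GNU readline. A minimal sketch of a `~/.inputrc` (these are standard readline option names, but whether you want them is taste):

```shell
# ~/.inputrc — make Tab completion do more with fewer keystrokes
set completion-ignore-case on    # "doc<Tab>" also matches "Documents"
set show-all-if-ambiguous on     # list all matches on the first Tab, not the second
```

And Ctrl+R is reverse history search: press it, type any fragment of an old command, then Enter to run it or Ctrl+R again to cycle further back.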


As someone who finds puddles on the floor, in the bin, and spray up the wall: pretty sure you learn through trial and error.
I guess I don’t really understand where they might fit into an emergency room scenario. My experience with Nurse Practitioners is that they’re someone who can take on some basic GP tasks to lighten the load.
For example, one of my kids has asthma and uses a regular inhaler. Instead of taking up a doctor’s time, they can book you in with the Nurse Practitioner to get a new prescription. That makes sense to me.
I do see that, here, Nurse Practitioners are given a much wider scope, including being able to assess test results and make a diagnosis, though I didn’t exactly read all the material thoroughly (there’s heaps of info in the downloads on the right of this page, in case anyone reading is interested).
It does say they need 4 years of experience and 300 hours of clinical learning (from what I can tell, they decide they want to be a Nurse Practitioner and then enter a programme of focused learning in a clinical setting). This seems to be more than what is required where the other user lives, so I feel a little better. Even if 300 hours is only about 7 weeks, at least they need that 4 years of experience 😅
Ooh I think this is better:

Honestly, he looks a lot like Cage!
I think we might overestimate how qualified a junior doctor is after passing all the exams. This article (from 2009, well before LLMs) says junior doctors make errors in 8% of the prescriptions they write, with half of the mistakes “potentially significant”. This is after any chance a supervising doctor had to review them. It says pharmacists generally save the day by spotting the errors.
I also found local numbers showing about 16% of junior doctors never make it through training (the article says it’s currently more like 40%, but 16% is their “normal”). That will include burnout and other reasons for not continuing, but with such a decent proportion dropping out, I’d expect the ones who haven’t absorbed enough understanding despite passing their exams to commonly be part of that group. Even if LLMs have increased the pool, I doubt we can assume those people make it through training without learning what they need to know. Becoming a doctor is just so intense that it doesn’t seem likely.
As has been pointed out by someone else, our concern should probably lie in those that pass exams then go on to do medical (or other) roles without any supervision period.
I don’t think I’ve ever been to a Nurse Practitioner without knowing exactly what the outcome would be, and realistically that does take a lot of burden off doctors so long as they correctly recognise what they should and shouldn’t do.
I expect that rules will catch up with the existence of LLMs, the problem is for those few generations that have to live through the transition period…
Oh great. Just what I wanted to hear.
These Nurse Practitioners are presumably already required to be highly skilled nurses? Please tell me that’s true 😑
It’s not, but the linked paper I responded to doesn’t mention LLMs?
Doctors spend months or years being supervised. If a doctor cheated on one test then maybe it would slip through, but I see this as no different to just forgetting some part of their training from years ago, which surely happens.
If a doctor cheated on every exam, their supervisor is going to notice really quickly.
Ah this is a different risk than I thought was being implied.
This is saying that if a doctor relies on AI to assess results, they lose the skill to assess them by themselves.
Honestly this could go either way. Maybe it’s bad, but if machine learning can outperform doctors, then it could just be a “you won’t be carrying a calculator around with you your whole life” type situation.
ETA: there’s a book, Noise: A Flaw in Human Judgment, that details how whenever you have human judgement you get a wide range of results for the same thing, and generally this is bad. If machine learning is more consistent, the standard of care is likely to rise on average.
I find it unlikely that this will happen for roles like doctors and nurses. There are large practical components of training, if they didn’t have the basic knowledge needed it would show through pretty quickly.


I do nightly borg backups of much more than 200GB. The idea of incremental backups is that you only store the changes, and photos don’t tend to change.
What challenge did you come across with a 200GB backup?
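For context, a nightly incremental run is just a couple of commands. A minimal sketch, assuming a repo already initialised at `/backup/repo` and photos under `/home/me/Pictures` (both paths hypothetical):

```shell
# Borg only stores chunks that changed since the last archive,
# so a nightly run over mostly-static photos transfers very little.
borg create --stats --compression lz4 \
    /backup/repo::'photos-{now:%Y-%m-%d}' \
    /home/me/Pictures

# Thin out old archives so the repo doesn't grow forever.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /backup/repo
```

Drop that in a cron job and the first run is the expensive one; every run after is just the delta.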
Well surely vi could be improved, otherwise we wouldn’t have vim?


Looks like it’s one of those sites that let you easily host an instance of various services, one of which is Lemmy.
I’m not sure who will be affected but if it’s anyone, probably mostly single user instances.
Could be anything from Stardew Valley to Baldur’s Gate 3.
The first line says the boss lives across the street
PC - fans blasting, groaning under the workload
Me - not even using it
Checks processes, steam web helper using 100% of CPU.