But all the interesting people are in the computer, the same place as the bad stuff is.
And the context was a sentence that was correct if you used OED sense 1, or MW sense 1, but you decided to parse it as MW sense 2b and then complain that the sentence was incorrect.
OED:
1. totally or partially resistant to a particular infectious disease or pathogen.
2. protected or exempt, especially from an obligation or the effects of something.
Merriam-Webster:
1: not susceptible or responsive; especially: having a high degree of resistance to a disease
2a: produced by, involved in, or concerned with immunity or an immune response
2b: having or producing antibodies or lymphocytes capable of reacting with a specific antigen
3a: marked by protection
3b: free, exempt
So unless you pretend that MW’s 2b sense is the only valid one, the immunity is immunity.
If you have a sample of HIV at 37°C in blood, but with all the immune cells removed, it’ll still all become inert after around a week simply due to chemical reactions with other components of blood etc… It’s pretty comparable to a population of animals - if you take away their ability to reproduce, they’ll die of old age when left for long enough even if you’re not actively killing them.
Edit: fat-fingered the save button while previewing the formatting


There’s a pretty good reason to think it’s not going to improve much. The size of models and the amount of compute and training data required to create them are increasing much faster than their performance is, and they’re already putting serious strain on the world’s ability to build and power computers, and on its ability to feed human-written text into training sets (which is why so many sites are having to deploy things like Anubis to keep themselves functioning). The levers AI companies have access to are already pulled as far as they’ll go, so improvement can only keep slowing and the returns can only diminish further.
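To put a rough shape on the diminishing returns, here’s a toy sketch of a power-law scaling curve. The constants are completely made up, it’s just to show the pattern: each 10× increase in compute buys a smaller improvement than the last one.

```python
# Illustrative only: a toy power-law scaling curve with made-up constants.
# Real scaling exponents vary by model family; the point is the shape.

def toy_loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    """Hypothetical loss as a function of training compute (power law)."""
    return a * compute ** -b

prev = None
for compute in [1e21, 1e22, 1e23, 1e24, 1e25]:
    loss = toy_loss(compute)
    gain = "" if prev is None else f"  (improvement: {prev - loss:.3f})"
    print(f"compute {compute:.0e} -> loss {loss:.3f}{gain}")
    prev = loss
```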
Even if you ignore that there’s an entirely valid sense of the word immune that has nothing to do with biology (i.e. the one in phrases like diplomatic immunity), my original comment is entirely consistent with the dictionary definition of the biological sense of the word. There are probably sub-fields of biology where immunity is used as jargon for something much more specific than the dictionary definition, but this is lemmyshitpost, not a peer-reviewed domain-specific publication.


If LLMs aren’t going to get beyond the level of a junior developer who needs so much micromanaging that they’re a net drain on productivity, then AI isn’t going to be a net gain to productivity, and the only productive way to engage with it is to fight its adoption, much like the only productive way to use a keyboard with a bunch of the letters missing would be to refuse to use it. It’s not worth worrying about obsolescence until there’s some evidence that they’re likely to get better, just like it wasn’t worth worrying about obsolescence when neural nets were being worked on in the 80s.
When a normal person is exposed to HIV, it reproduces inside them, so it can then go on to expose more people and, if there’s enough of it, infect them in turn (if there’s a smaller amount, their immune system will normally be able to clean it up before it gets enough of a foothold). If someone’s lacking the receptor, then no matter how much they were exposed to, their immune system will eventually manage to remove it all without them becoming infected, because it can’t reproduce. If they had a ludicrously large viral load, there’s a possibility that it could be passed on before it was destroyed, but most of the ways people get exposed to HIV aren’t enough to infect someone who’s vulnerable, let alone infect someone else via secondary exposure if there’s not been time for the infection to grow.


Usually, having to wrangle a junior developer takes a senior more time than doing the junior’s job themselves. The problem grows the more juniors they’re responsible for, so having LLMs simulate a fleet of junior developers will be a massive time sink and not faster than doing everything themselves. With real juniors, though, this can still be worthwhile, as eventually they’ll learn, and then require much less supervision and become a net positive. LLMs do not learn once they’re deployed, though, so the only way they get better is if a cleverer model is created that can simulate a mid-level developer, and so far, the diminishing returns of progressively larger and larger models make it seem pretty likely that something based on LLMs won’t be enough.
People without the receptor that HIV targets are immune to HIV because of that, like how a rock is immune to verbal abuse or double foot amputees are immune to ingrown toenails. The immune system being able to kill something isn’t the only way things can be immune to other things.
That tests the AIDS immunity, but not whether there are off-target edits. IIRC, the mothers were all HIV-positive, so the children are all pretty likely to be exposed anyway, which was part of how he justified the experiment to himself.
If he got incredibly lucky, they’re immune to AIDS. It’s much more likely that they’re not and will develop symptoms of new and exciting genetic disorders never seen before.
The biggest problem was that the technique used is really unreliable, so you’d expect off-target edits to be more common than on-target ones for a human-sized genome. For bacteria, you can get around it by letting the modified bacteria reproduce for a few generations, then testing most of them. If they’re all good, then it worked, and if any aren’t, you need to make a new batch. Testing DNA destroys the cells you’re testing, so if you test enough cells in a human embryo to be sure that the edits worked, it dies. You can’t just start when the embryo is a single cell to ensure that the whole thing’s been edited in the same way, as you need to test something pre-edit to be able to detect off-target edits.
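As a back-of-the-envelope illustration (the per-base error rate here is invented, purely to show why the size of the genome is the killer):

```python
# Sketch with a hypothetical error rate: even a tiny chance of an unintended
# edit per base adds up over a human-sized genome.

GENOME_SIZE_BP = 3.2e9          # approximate human genome size in base pairs
PER_BASE_ERROR_RATE = 1e-9      # hypothetical chance of an unintended edit per base

expected_off_target = GENOME_SIZE_BP * PER_BASE_ERROR_RATE
print(f"Expected off-target edits per cell: ~{expected_off_target:.1f}")
# With numbers in this ballpark you expect a few unintended edits for every
# intended one, and the only way to check a given cell is to destroy it.
```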


To be fair, if I had all that money, I’d probably just pay someone to figure out how to make it do the most good, and continue spending at least some of my time shitposting. It’s okay to have hobbies, but it’s bad to hoard the money or invest it in evil.
It’s pretty easy to put something on the box like “this can make your phone buzz if you forget to brush your teeth”, and people who worry that they sometimes forget to brush their teeth will see that as an advantage without necessarily realising that they need to give the manufacturer their email address and the right to associate it with their brushing telemetry.
There are far fewer pedestrians and walls and lamp posts and motorcycles in the air than on the ground, though, so there’s a lot more margin to be awful without endangering anyone other than your own family.


CUDA is an Nvidia technology and they’ve gone out of their way to make it difficult for a competitor to come up with a compatible implementation. With cross-vendor alternatives like OpenCL and compute shaders, they’ve not put resources into achieving performance parity, so if you write something in both CUDA and OpenCL, and run them both on an Nvidia card, the CUDA-based implementation will go way faster. Most projects prioritise the need to go fast above the need to work on hardware from more than one vendor. Fifteen years ago, an OpenCL-based compute application would run faster on an AMD card than a CUDA-based one would run on an Nvidia card, even if the Nvidia card was a chunk faster in gaming, so it’s not that CUDA’s inherently loads faster. That didn’t give AMD a huge advantage in market share as not very much was going on that cared significantly about GPU compute.
Also, Nvidia have put a lot of resources over the last fifteen years into adding CUDA support to other people’s projects, so when things did start springing up that needed GPU compute, a lot of them already worked on Nvidia cards.


Generally, you’ll get better results by spending half as much on GPUs twice as often. Games typically aren’t made expecting all their players to have a current-gen top-of-the-line card, so you don’t benefit much from a top-of-the-line card at first, and a couple of generations later there’s usually a card that outperforms the old top-of-the-line one for half of what it cost, so you end up with a better card in the long run.
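As a rough sketch of the trade-off (the per-generation uplift and the flagship premium are both assumptions, just to show the shape of it):

```python
# Hypothetical numbers, purely illustrative: assume a card at a given price
# point gets ~1.3x faster each generation, and a flagship is ~1.4x faster
# than a mid-range card from the same generation.

GEN_UPLIFT = 1.3
FLAGSHIP_VS_MIDRANGE = 1.4

# Same total spend over four generations: one flagship kept the whole time,
# versus a mid-range card at generation 0 and another at generation 2.
flagship_path = [FLAGSHIP_VS_MIDRANGE] * 4
midrange_path = [1.0, 1.0, GEN_UPLIFT ** 2, GEN_UPLIFT ** 2]

for gen, (flag, mid) in enumerate(zip(flagship_path, midrange_path)):
    print(f"gen {gen}: keep-flagship = {flag:.2f}, upgrade-midrange = {mid:.2f}")
# The flagship is ahead early, when games don't demand it, and the cheaper
# upgrade path pulls ahead once the second card arrives.
```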
I’ve found this is really dependent on placement. If I put my Libre a couple of centimetres away from the region I usually use, it’ll read low all night, but as long as I stick to the zone I’ve determined to be fine, it’ll agree with a blood test even if I’ve had pressure on it for ages. Also, the 3 is more forgiving than the 1 or 2: because it’s smaller than the older models, it has less effect on how much the skin bends and squishes.


Plenty of TVs are capable of radioing your neighbour’s TV and piggybacking off their internet connection, so if it’s not in a Faraday cage, it might be overconfident to say it’s never been connected to a network.
apt was mentioned, so this might actually be Debian’s problem. Python doesn’t support being installed without its standard library, but (unless they’ve decided to stop being dumb since I last checked) Debian’s python package only contains part of the standard library, and the rest is split into other optional packages. If you find software that says its only dependency is Python, it might not work on Debian-derived distros without installing extra packages, and if the software’s maintainer doesn’t use Debian and doesn’t know about this, their installation instructions won’t cover it.
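As an example of how this bites, a quick check along these lines will tell you what to install; the module-to-package mapping is an assumption based on common Debian/Ubuntu packaging, and the exact package names vary by release.

```python
#!/usr/bin/env python3
# Check for standard-library modules that Debian-derived distros commonly
# split into separate packages. The module -> package mapping below is an
# assumption and may differ between releases.
import importlib

SPLIT_OUT = {
    "ensurepip": "python3-venv",  # needed for `python3 -m venv` to finish
    "tkinter": "python3-tk",      # Tk GUI bindings
}

missing = []
for module, package in SPLIT_OUT.items():
    try:
        importlib.import_module(module)
    except ImportError:
        missing.append(package)

if missing:
    print("Upstream standard-library modules are missing; try:")
    print("  sudo apt install " + " ".join(sorted(missing)))
else:
    print("All checked modules are present.")
```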