There are far fewer pedestrians and walls and lamp posts and motorcycles in the air than on the ground, though, so there’s a lot more margin to be awful without endangering anyone other than your own family.


CUDA is an Nvidia technology and they’ve gone out of their way to make it difficult for a competitor to come up with a compatible implementation. They also haven’t put resources into bringing cross-vendor alternatives like OpenCL and compute shaders up to performance parity, so if you write something in both CUDA and OpenCL and run both on an Nvidia card, the CUDA-based implementation will go way faster. Most projects prioritise the need to go fast above the need to work on hardware from more than one vendor. It’s not that CUDA’s inherently loads faster: fifteen years ago, an OpenCL-based compute application would run faster on an AMD card than a CUDA-based one would run on an Nvidia card, even if the Nvidia card was a chunk faster in gaming. That didn’t give AMD a huge advantage in market share, as not very much was going on that cared significantly about GPU compute.
Also, Nvidia have put a lot of resources over the last fifteen years into adding CUDA support to other people’s projects, so when things did start springing up that needed GPU compute, a lot of them already worked on Nvidia cards.


Generally, you’ll get better results by spending half as much on GPUs twice as often. Games aren’t usually made expecting all their players to have a current-gen top-of-the-line card, so you don’t benefit much from having one at first. A couple of generations later, there’s usually a card that outperforms the previous top-of-the-line card and costs half as much as it did, so you end up with a better card in the long run.
I’ve found this is really dependent on placement. If I put my Libre a couple of centimetres away from the region I usually use, it’ll read low all night, but as long as I stick to the zone I’ve determined to be fine, it’ll agree with a blood test even if I’ve had pressure on it for ages. Also, the 3 is more forgiving than the 1 or 2: it’s smaller than the older models, so it has less effect on how much the skin bends and squishes around it.


Plenty of TVs are capable of radioing your neighbour’s TV and piggybacking off their internet connection, so if it’s not in a Faraday cage, it might be overconfident to say it’s never been connected to a network.


It wouldn’t be Roko’s Basilisk if it didn’t do things that hurt.


What kind of time frame were they testing over? Not seeing any significant burn in means something completely different if they’re testing for one year versus ten, especially for people who don’t like replacing things that aren’t totally unusable yet.
It’s rare for English children to learn Spanish as the first foreign language they’re exposed to. If their parents are immigrants, then it’ll likely be their parents’ mother tongue(s), and if they’re not, they’ll likely be taught some French before any Spanish. That can then lead to a habit of saying any foreign word with a French accent.
Also, England has strong regional variations in accent, so you might be hearing people use exactly the same vowel sounds as they’d use when speaking English, but those vowel sounds might be totally different from what you’re expecting English to sound like.


Obviously, most people don’t replace their TV every year, so it was years after new sales were mostly LCDs that most people had LCDs, but companies making content like to be sure it looks good with the latest screens.


I think you might have misjudged when LCDs became common: by the end of 2004, when Halo 2 released, LCD TVs were already a reasonable fraction of new TV sales, and in parts of the world, it was only a few months later that LCD TVs became the majority. For PC monitors, the switch came earlier, so it was clear CRTs were on the way out while the game was being developed. It would have been foolish if they hadn’t expected a significant number of players to use an LCD and tweaked the game as much as necessary to ensure that was fine.


That’s what’s keeping the lights on. If they sank the extra billions into making their discrete cards genuinely superior to Nvidia’s (which already means taking it for granted that selling comparable products for less money makes them a knockoff rather than superior), then Nvidia could stop them recouping the development costs by eating into their own margins to drop their prices. Over the last decade or two, ATi/AMD’s big gambles have mostly not paid off, whereas Nvidia’s have, so AMD can’t afford to take big risks, and the semi-custom part of the business is huge long-term orders that mean guaranteed profit.


There have been times it’s been used against a whole carful of people, and cars are bigger than seven inches.


Even in 2000, I feel like they should have been able to compromise, e.g. by doing his sclera and covering up breathing holes on the computer, but still having the rest be makeup, prosthetics and a costume.


They didn’t end up building a utopia, so they must have made some kind of mistake along the way.
ECC genuinely is the only check against memory bitflips in a typical system. Obviously, there’s other stuff that gets used in safety-critical or radiation-hardened systems, but those aren’t typical. Most software is written assuming that memory errors never happen, and checksumming is only used when there’s a network transfer or, less commonly, when data’s at rest on a hard drive or SSD for a long time (but most people are still running a filesystem with no redundancy beyond journaling, which is really meant for things like unexpected power loss).
There are things that mitigate the impact of memory errors on devices that can’t detect and correct them, but they’re not redundancies. They don’t keep everything working when a failure happens; instead they just isolate a problem to a single process so you don’t lose unsaved work in other applications and so on. The main things they’re designed to protect against are software bugs and malicious actors, not memory errors; it just happens that they help with other things too.
Also, it looks like some of the confusion is because of a typo in my original comment where I said unrecoverable instead of recoverable. The figures that are around 10% per year are in the CE column, which is the correctable errors, i.e. a single bit that ECC puts right. The figures for unrecoverable/uncorrectable errors are in the UE column, and they’re around 1%. It’s therefore the 10% figure that’s relevant to consumer devices without ECC, with no need to extrapolate how many single bit flips would need to happen to cause 10% of machines to experience double bit flips.
It wasn’t originally my claim - I replied to your comment as I was scrolling past because it had a pair of sentences that seemed dodgy, so I clicked the link it cited as a source, and replied when the link didn’t support the claim.
Specifically, I’m referring to:
“A single bit flipped by a gamma ray will not cause any sort of issue in any modern computer. I cannot overstate how often this and other memory errors happen.”
This just isn’t correct:
That study doesn’t seem to support the point you’re trying to use it to support. First it’s talking about machines with error correcting RAM, which most consumer devices don’t have. The whole point of error correcting RAM is that it tolerates a single bit flip in a memory cell and can detect a second one and, e.g. trigger a shutdown rather than the computer just doing what the now-incorrect value tells it to (which might be crashing, might be emitting an incorrect result, or might be something benign). Consumer devices don’t have this protection (until DDR5, which can fix a single bit flip, but won’t detect a second, so it can still trigger misbehaviour). Also, the data in the tables gives figures around 10% for the chance of an individual device experiencing an unrecoverable error per year, which isn’t really that often, especially given that most software is buggy enough that you’d be lucky to use it for a year with only a 10% chance of it doing something wrong.
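To make the “corrects one flip, detects two” behaviour concrete, here’s a toy sketch of the SEC-DED idea (an extended Hamming code) in Python. The names and the 8-bit word size are mine for illustration; real ECC DIMMs apply the same scheme across 64 data bits with 8 check bits, and the exact code is up to the memory controller.

    # Toy SEC-DED over one byte: 8 data bits, 4 Hamming check bits, 1 overall parity bit.
    DATA_POSITIONS = [3, 5, 6, 7, 9, 10, 11, 12]    # non-power-of-two positions hold data

    def encode(data):
        """8-bit int -> 13-element codeword (index 0 = overall parity, 1..12 = Hamming)."""
        word = [0] * 13
        for i, pos in enumerate(DATA_POSITIONS):
            word[pos] = (data >> i) & 1
        for p in (1, 2, 4, 8):                      # each check bit covers positions with that bit set
            word[p] = sum(word[i] for i in range(1, 13) if i & p) % 2
        word[0] = sum(word[1:]) % 2                 # overall parity enables double-error detection
        return word

    def decode(word):
        """Corrects a single flipped bit in place; flags a double flip as uncorrectable."""
        syndrome = 0
        for p in (1, 2, 4, 8):
            if sum(word[i] for i in range(1, 13) if i & p) % 2:
                syndrome |= p                       # failing checks add up to the bad bit's position
        overall_ok = sum(word) % 2 == 0
        if syndrome == 0 and overall_ok:
            status = "ok"
        elif not overall_ok:                        # overall parity odd => exactly one bit flipped
            word[syndrome if syndrome else 0] ^= 1  # flip it back (position 0 if it was the parity bit)
            status = "corrected single-bit error"
        else:                                       # syndrome set but overall parity even => two flips
            return None, "uncorrectable error"      # this is what ends up counted as a UE
        data = sum(word[pos] << i for i, pos in enumerate(DATA_POSITIONS))
        return data, status

    cw = encode(0b10110001)
    cw[6] ^= 1                                      # one flipped bit: silently fixed
    print(decode(cw))                               # (177, 'corrected single-bit error')
    cw[3] ^= 1; cw[9] ^= 1                          # two flips in the same word: detected, not fixed
    print(decode(cw))                               # (None, 'uncorrectable error')

That’s the distinction the table is drawing: a single flip lands in the CE column and the machine carries on with corrected data, while two flips in the same word land in the UE column because the controller can tell something is wrong but not which bits to fix.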


The press widely covered AV as if it was incredibly expensive and didn’t solve any problems, so presented it as if we’d be throwing away beds at children’s hospitals, support for pensioners and equipment for soldiers just to introduce pointless bureaucracy. If the choice was the one most voters thought they were making, then voting against it would have been the sensible option.
Some of the charity is self-serving, e.g. eradicating diseases means he’s less likely to catch them (and really, any billionaire not funnelling funds to pandemic prevention etc. is being moronic), and founding charter schools on land he owns, so that over the life of the school they pay more in rent for the lease than they cost to build, is just a tax dodge. Most billionaires are just so evil that they won’t spend money on themselves if other people who aren’t paying also benefit, so in comparison, Gates’ better ability to judge what’s in his interests makes him look good.
It’s pretty easy to put something on the box like “this can make your phone buzz if you forget to brush your teeth”, and people who worry they’re sometimes forgetting to brush their teeth will see that as an advantage without necessarily realising that they need to give the manufacturer their email and the right to associate it with their brushing telemetry.