• 1 Post
  • 576 Comments
Joined 3 years ago
Cake day: June 9, 2023


  • merc@sh.itjust.works to Political Memes@lemmy.ca · "Hurr durr"

    And that was before the Strait of Hormuz situation.

    He can’t convince former allies to help out of goodwill and solidarity because he’s done everything possible to destroy that goodwill since he came into office.

    He also can’t convince them to help out of fear of the consequences either, because he has already imposed those consequences.

    In addition, even if he had a threat or a reward that was meaningful, the one thing that’s clear about Trump is that he never keeps his word. He doesn’t even adhere to treaties that are legally binding. So, if he promised something extremely valuable, he probably wouldn’t deliver. OTOH, if he threatened something extremely dire, he would probably chicken out.


  • Threaten partners for not helping.

    Threaten partners with tariffs. The same thing you’ve been slapping them with for absolutely no reason since you came into office. The same thing that these partners now just expect to have to deal with when your mood changes. The same thing that courts have decided is illegal and that you can’t actually enforce.


  • checking the code is much harder than coming up with it yourself

    That’s always been true. But, at least in the past when you were checking the code written by a junior dev, the kinds of mistakes they’d make were easy to spot and easy to predict.

    LLMs are created in such a way that they produce code that genuinely looks perfect at first. It’s stuff that’s designed to blend in and look plausible. In the past you could look at something and say “oh, this is just reversing a linked list”. Now, you have to go through line by line trying to see if the thing that looks 100% plausible actually contains a tiny twist that breaks everything.
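    As a toy illustration of that kind of subtle, plausible-looking bug (this example and its bug are constructed for illustration, not taken from any real LLM output), here are two linked-list reversals that look nearly identical, where one quietly drops a node:

    ```python
    class Node:
        def __init__(self, val, nxt=None):
            self.val = val
            self.next = nxt

    def reverse(head):
        """Correct iterative in-place reversal."""
        prev = None
        while head:
            head.next, prev, head = prev, head, head.next
        return prev

    def reverse_plausible(head):
        """Looks almost identical, but the loop condition is `head.next`
        instead of `head`. It stops one node early, so the original last
        node is silently dropped (and it crashes on an empty list)."""
        prev = None
        while head.next:
            nxt = head.next
            head.next = prev
            prev = head
            head = nxt
        return prev

    def to_list(head):
        """Helper: collect node values into a Python list."""
        out = []
        while head:
            out.append(head.val)
            head = head.next
        return out
    ```

    Both versions pass a careless glance; only tracing the loop line by line reveals that the second one loses data.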






  • That’s half of it. The other half is that these execs think that everybody under them is some kind of replaceable cog in the machine with no special skills. They don’t think their job could be replaced by AI. But, they think everyone under them is so unimportant that their job can be done by AI. They’re managers. They don’t know how to do the work of the people they’re managing. They can’t tell the difference between an accurate result given to them by someone with knowledge and expertise vs. one created by a slop machine that generates plausibly realistic text.

    If their $1000/hour lawyers tell them one thing, but the bullshit machine tells them something different, they trust whichever one gives them the answer they prefer.


  • Really, silver shouldn’t even exist until players are level 5-6 or something.

    A copper shouldn’t be thought of as a penny, but I think a lot of people think of it that way. It should be much more like a dollar. A mug of ale is 4 copper pieces. A loaf of bread is 4 copper pieces. A taxi is 1 copper.

    Because D&D is a world without cash registers or price stickers, bargaining should be common. And you’re not going to bargain over the last penny, but maybe over the number of dollars (i.e. coppers).

    I also think 1 silver should be 100 copper. But, you should only start seeing silver once you’re dealing with people who are used to dealing with things costing hundreds of “dollars”. 100 copper would be a pain to manage, so they use silver. A typical adventurer’s pub might only rarely see silver because all their prices are in copper, and there’s nothing even approaching 100 “dollars” on their menu.

    In this system, gold is worth something like $10,000 per coin. Because of that, the only kinds of stores that might see gold coins are high-end magic shops, or shops dealing with upper-level nobles or royalty.

    I also think it’s hard for people to put themselves in a mindset of a “medieval” sort of world. We’re used to a hotel room being hundreds of times the cost of a loaf of bread. That’s a modern thing where both farming and baking are automated. In the past things weren’t nearly that efficient. So, if a poor quality stay in an inn (you’re sharing a bed with other random guests, and there’s a thin mattress) is $100, a loaf of plain bread should be $10.
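    A minimal sketch of the conversion scheme described above (the exchange rates and the "1 copper ≈ one modern dollar" analogy are this comment's proposal, not official D&D rules):

    ```python
    # Proposed rates: 1 silver = 100 copper, and gold at roughly
    # $10,000 per coin, i.e. 10,000 copper if 1 copper ~ $1.
    COPPER_PER_SILVER = 100
    COPPER_PER_GOLD = 10_000

    def in_dollars(copper=0, silver=0, gold=0):
        """Rough 'modern dollar' value, treating 1 copper as $1."""
        return copper + silver * COPPER_PER_SILVER + gold * COPPER_PER_GOLD
    ```

    So a 4-copper mug of ale reads as about $4, and a single gold coin as about $10,000, which is why the adventurer's pub never sees gold.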



  • I don’t know what modern military wear is. The badge thing seems strange, but I don’t know if it’s wrong. I can imagine putting plate carriers on the seats to dry them out if it has been hot and sweaty, but I don’t know if that’s actually common. As for the car, I would guess there are places where they might use what’s available locally, and who knows what that would be.

    You have to admit that it’s at least high-quality slop. No extra hands, no extra toes or fingers. No text that is obviously slop.


  • I’ll be honest, the real pictures would fool me. The era where you could tell by the number of fingers or toes is gone, apparently. She does seem to change military branches a bit too often to be realistic. And, her colleagues seem a bit too unbothered by someone taking a picture of her with her shoes and socks off and feet up on a desk in an apparent military setting. But there aren’t any pictures I glance at and think “oh, that’s obviously AI”.


  • LLMs are an obvious dead end when it comes to actual “intelligence” or understanding how the world works.

    But, this sounds like a “draw the rest of the owl” situation.

    “JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail.”

    Oh, it’s that simple is it? Just have it “learn abstract representations of how the world works”. Amazing how nobody thought to do that before!

    I think I understand the distinction they’re trying to draw. Current models are trained on billions of pictures of cats and billions of pictures of dogs. You feed it an image of Fido and it finds a point in 2500-dimensional space and knows whether that point is in the “cat space” or “dog space”. It can be very good, but it doesn’t have any “understanding” of what makes something a cat vs. a dog. Humans, OTOH, aren’t trained on billions of images. But, they learn about things like “teeth” and “whiskers” and “snouts” and “eyes”. Within their knowledge of eyes, they spot that vertical slit pupils are unusual and different, and part of what makes something “catlike”. AFAIK, nobody has ever managed to create a system that learns abstract features without intensive human training.
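    The "point in embedding space" idea can be sketched as nearest-centroid classification (the 3-dimensional vectors and centroids here are made up for illustration; real models use embeddings with thousands of dimensions and far more elaborate decision boundaries):

    ```python
    import math

    # Hypothetical centroids of the "cat region" and "dog region"
    # in a toy 3-dimensional embedding space.
    CAT_CENTROID = [0.9, 0.1, 0.8]
    DOG_CENTROID = [0.2, 0.9, 0.3]

    def dist(a, b):
        """Euclidean distance between two embedding vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def classify(embedding):
        """Label an image by whichever centroid its embedding is closer to.
        Note there is no notion of 'whiskers' or 'slit pupils' anywhere:
        just geometric proximity in the learned space."""
        if dist(embedding, CAT_CENTROID) < dist(embedding, DOG_CENTROID):
            return "cat"
        return "dog"
    ```

    The point of the sketch is what's missing: the classifier can be very accurate without containing any explicit features a human would recognize as "what makes a cat a cat".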

    I like that they’re trying something new. But, are they counting on a massive breakthrough on a problem that has existed since people first started theorizing about AI? Or, is it just a matter of refining a known process?



  • I have an old YouTube app on my iPad, and it still works fine. One of the more responsive apps on the device. I get nagged nearly every time I use it to update to the newest YouTube release, but that’s impossible. I’d first have to upgrade my OS, and Apple no longer releases new OSes for this generation of iPads. So, I’m stuck with an old YouTube, which mostly works fine, and an occasional nag message.

    I’m sure within a year or two mine will be like yours and YouTube will simply no longer work. But, for now it’s in a relatively good spot where I can use a version of YouTube designed for this particular hardware that doesn’t feel sluggish.


  • You really do feel this when you’re using old hardware.

    I have an iPad that’s maybe a decade old at this point. I’m using it for the exact same things I was a decade ago, except that I can barely use the web browser. I don’t know if it’s the browser or the pages or both, but most web sites are unbearably slow, and some simply don’t work: JavaScript hangs and some elements never load. The device is too old to get OS updates, which means I can’t update some of the apps. But, that’s a good thing, because those old apps are still very responsive. The apps I can update are getting slower and slower all the time.