Lvxferre [he/him]

I have two chimps within, Laziness and Hyperactivity. They smoke cigs, drink yerba, fling shit at each other, and devour the face of anyone who gets close to either.

They also devour my dreams.

  • 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: January 12th, 2024

  • Yeah, the terminology is currently a mess. Not just due to language changes over time, but also due to synchronic variation - different people using the same words with different meanings at the same time. For me, though, the distinction boils down to a mix of motivations, methods, and morality:

    • hacker stricto sensu - like a kid who dismantles toys to see how they work. Sometimes they break things, but what they’re after is knowledge. Usually grey hat, sometimes white hat, only rarely black hat.
    • cracker - like a kid who bashes toys with a hammer. Not interested in the knowledge itself, except when it allows them to bully other kids. Almost always black hat.



  • My guess:

    Coverage roughly follows money, and that money comes from the top of the hierarchy. However, the top is too far from production to actually get that 1) automation is nothing new, and 2) AI won’t help with it as much as advertised.

    The middle of the hierarchy is close enough to production to know those two things, but it’ll parrot the hype anyway, because doing so enables the inefficiency it loves so much, under the guise of efficiency.

    Then you’ve got the bottom. It’s the closest to production, but it often suffers from a problem of “I don’t see the forest, I see the leaves”, and since it has no decision power it ends up at “meh, who cares”. So it’ll parrot whatever it sees in the coverage.

    As such, who’s actually going to get screwed here? The answer may surprise you.

    All three. However, not in the way people predict (“AI is going to steal our jobs”). It’s more like: the suckers at the top will lose big money on AI fluff, and to cut costs they’ll fire a lot of people.

    Setting aside “and how will it do that?” as outside the scope of the topic at hand, it’s a bit baffling to me how a nebulous concept prone to outright errors is an existential threat. (To be clear, I think the energy and water impacts are.)

    Ditto.



  • More like an Autumn/Spring thing than a Summer one, but…

    If you live in a place where temperature varies a lot across the day, you’ll want to wear a jacket at some hours, but not others. Then you need to choose between three options:

    1. Wear the jacket throughout the day. You’ll feel hot at ~noon.
    2. Don’t bother with the jacket. You’ll feel cold at ~early morning and ~early night.
    3. Take off the jacket at ~noon. Now you’re carrying (and potentially forgetting) yet another item.

    All three suck. But people disagree on which one sucks the least, and for some it’s #1. So you get people wearing jackets even when it’s too hot for that.




  • Yes, this should be illegal, but it’s already common practice. I’m just hoping that enough of this will eventually get people to stop buying these products, and hopefully we can start seeing some real legislation against it in some countries.

    Problem is, people won’t stop buying them. Often “smart” products are sold comparatively cheap, because the business expects additional profit through ads; and if Samsung is heading this way (ads on your fridge), it’ll go through with it.

    The “crackers” part of this confuses me. Samsung is a Korean company. The chairman’s name is Lee Jae-yong (이재용). Samsung NA’s CEO is Yoonie Joung. Maybe I’m misreading this?

    By “crackers” I mean “black hat hackers”. The sort of people who’d love to drop some ransomware into your fridge and then say “if you don’t want me to brick your fridge, pay me a few bucks”.

    (After some websearch, apparently Americans use it as a derogatory term. I wasn’t aware of that.)


  • However, Samsung is giving users the option to turn off ads.

    For now - as the author herself mentions later on (“The bigger issue is that of trust. […] that’s today.”).

    [Higby] “This pilot further explores how a connected appliance can deliver genuinely useful, contextual information. The refrigerator is already a daily hub, and we’re testing a responsible, user-controlled way to make that space more helpful.”

    What Shane Higby is saying here boils down to “we’re trying to help the user”. But if he said so, in clear words, every bloody body would call it bullshit, because it’s common knowledge that companies smear ads on your face for their own sake - not yours. Hide it behind fancy words like “further explores” and “deliver” and the like, though, and it’s harder to call out the bullshit.

    I’m getting real tired of this shit.

    [Higby] "…future promotions will depend on the feedback and insights gained from the program.”

    Translation: “we’re just testing the waters now. Let’s see if the suckers swallow it or spit it.”

    This is similar to the justification Panos Panay, Amazon’s […] He said it was looking to be “elegantly elevating the information that a customer needs.”

    Emphasis mine. You can always trust Amazon on one thing: belittling the user.

    The problem here isn’t just the ads themselves (although they are a problem); it’s that they are being added to the device after it’s in my home.

    [Warning, IANAL.] Fight this shit. Seriously, fight it. On legal grounds. What they’re doing should be outright illegal in most countries; it’s equivalent to changing a contract unilaterally after both parties signed it.

    Additionally, I’d strongly advise against buying any sort of “smart” device, unless you’re pretty sure the benefits of connecting your toaster to the internet outweigh all the risks - including corporations and crackers taking control of it, harvesting your data, spamming you, building kill switches into it, etc.



  • 2:10 “I assumed that, if I couldn’t beat the system, there was no point on whatever I was doing”: that’s the old nirvana fallacy. The rest of the video is about dismantling it for the individual, and boils down to identifying who you’re trying to protect yourself against (threat model), compromising, etc.

    It’s relevant to note that each tiny bit of privacy you can get against a certain threat helps - especially if the threat is big tech, which is what the video maker focuses on. It gives big tech less room to manipulate you, and black hats less info to haunt you after you read that corporate apology saying “We are sorry. We take user safety seriously. Today we had a breach […]”.

    And on a social level, every single small action you take towards privacy:

    • makes obtaining personal data slightly more expensive, thus slightly less attractive
    • gives a tiny bit more support to alternatives that respect your privacy
    • normalises seeking privacy a tiny bit more

    and so on. Seeking your own privacy helps build a slightly more private world for you and for others, even if you don’t get the full package.




  • Trying to automate things and decrease mod burden is great, so I don’t oppose OP’s idea on general grounds. My issues are with two specific points:

    • Punish content authors or take action on content via word blacklist/regex
    • Ban members of communities by their usernames/bios via word blacklist or regex
    1. Automated systems don’t understand what people say in context. As such, it’s unjust and abusive to use them to punish people based on what they say.
    2. This sort of automated system is extra easy for malicious actors to circumvent, especially because it needs to be tuned to lower the number of false positives (unjust bans), which leads to a higher number of false negatives (crap slipping past the radar).
    3. Something I’ve seen over and over on Reddit, and that mods here would likely do in a similar way, is shifting the blame to automod. “NOOOO, I’m not unjust. I didn’t ban you incorrectly! It was automod lol lmao”

    Instead of those two, I think a better use of regex would be an automated reporting system that brings potentially problematic users/pieces of content to the attention of human mods - something along the lines of the sketch below.
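
    To make that “report, don’t punish” idea concrete, here’s a minimal sketch in Python. The watchlist patterns, comment IDs, and the print-based report hook are all made up for illustration - a real version would hook into whatever API the instance’s mod tooling actually exposes.

    ```python
    import re

    # Illustrative patterns only; a real instance would load these from mod-defined config.
    WATCHLIST = [
        re.compile(r"\bfree\s+crypto\b", re.IGNORECASE),
        re.compile(r"\bbuy\s+followers\b", re.IGNORECASE),
    ]

    def flag_for_review(body: str) -> list[str]:
        """Return the watchlist patterns a comment matches, without acting on it."""
        return [p.pattern for p in WATCHLIST if p.search(body)]

    def handle_new_comment(comment_id: str, body: str) -> None:
        hits = flag_for_review(body)
        if hits:
            # Hypothetical report hook: queue the comment for human review instead of
            # banning or removing anything automatically.
            print(f"[report] comment {comment_id} matched {hits}; queued for human mods")

    # A spammy comment gets reported; a borderline one is left for humans to judge.
    handle_new_comment("c1", "Click here for FREE crypto!!!")
    handle_new_comment("c2", "I lost money on crypto once, never again.")
    ```

    The whole point of the design is that a regex match only files a report; a human mod still decides whether anything actually happens to the user or the content.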