

AI witch hunt strikes again
I’ve read that there are more effective ways to deanonymize Tor traffic that passes through exit nodes; accessing onion services is more secure by comparison.
They targeted gamers.
Gamers.


I don’t hate this article, but I’d rather have read a blog post grounded in the author’s personal experience engaging with a personalized AI assistant. She clearly has her own opinions about how these systems should work, but instead of writing about that, the article tries to project a lot of objective certainty, and it falls flat because it never draws a strong connection to the evidence.
Like this part:
Research in cognitive and developmental psychology shows that stepping outside one’s comfort zone is essential for growth, resilience, and adaptation. Yet, infinite-memory LLM systems, much like personalization algorithms, are engineered explicitly for comfort. They wrap users in a cocoon of sameness by continuously repeating familiar conversational patterns, reinforcing existing user preferences and biases, and avoiding content or ideas that might challenge or discomfort the user.
While this engineered comfort may boost short-term satisfaction, its long-term effects are troubling. It replaces the discomfort necessary for cognitive growth with repetitive familiarity, effectively transforming your cognitive gym into a lazy river. Rather than stretching cognitive and emotional capacities, infinite-memory systems risk stagnating them, creating a psychological landscape devoid of intellectual curiosity and resilience.
So, how do we break free from this? If the risks of infinite memory are clear, the path forward must be just as intentional.
There’s some hard evidence that stepping out of your comfort zone is good, but not really any that the “infinite memory” features of personal AI assistants in practice prevent people from stepping out of theirs; that part is just rhetorical speculation.
Which is a shame, because how that affects people is pretty interesting to me. The idea of using an LLM with these features always freaked me out a bit, and I quit using ChatGPT before they were implemented, but I want to know how it’s going for the people who didn’t, and who use it for stuff like the given example of picking a restaurant to eat at.


There’s at least some difference between “have been” and “this is currently likely to happen”, since if the method were known, it would have been fixed. I’ve gotten viruses before from just visiting websites, but that was decades ago, and there’s no way the same method would work now.


Nice to see someone actually trying it themselves and doing their own analysis despite having reservations.
I thought that part was great; it captured the vibe of that sort of dream and foreshadowed that character later developing a mental model of what was happening to the victims. It seemed like a realistic depiction of how dreams can be involved in how we process things: what seems like mysterious nonsense in the moment comes together later.


Stuff like this makes me wonder: at what point is it bad enough that the truisms about leaving medical advice to licensed healthcare professionals become wrong, and everyone would be better off turning to anything else instead of engaging with the system? Are we not there yet? How much further would there be to go?


A bundler, a transpiler, a runtime (designed to be a drop-in replacement for Node.js), a test runner, and a package manager - all in one.
Bun’s single-file executables turned out to be perfect for distributing CLI tools. You can compile any JavaScript project into a self-contained binary—runs anywhere, even if the user doesn’t have Bun or Node installed. Works with native addons. Fast startup. Easy to distribute.
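To make that concrete, here’s a minimal sketch; the file and binary names are invented for illustration, but bun build --compile is the documented way to produce a single-file executable:

    // hello.ts: a toy CLI (the name is made up for this example).
    // Reads an optional argument and greets it.
    const name = process.argv[2] ?? "world";
    console.log(`Hello, ${name}!`);

    // Compile into a self-contained binary, then run it on a machine
    // that has neither Bun nor Node installed:
    //   bun build ./hello.ts --compile --outfile hello
    //   ./hello Ada   ->  Hello, Ada!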


Sounds like an additional reason to do it in a way where participants can’t be debanked by payments middlemen.


Part of the headache here is that this situation inherently props up a few monopolistic platforms, rather than allowing people to use whatever payment system is available in their own countries. Some of this can be worked around using cryptocurrencies – famously, the Mitra project leverages Monero for this very purpose, although I’m told it can now accept other forms of payment as well.
Hell yeah, I didn’t know about Mitra. It sounds like the payments part is for a Patreon-esque kind of deal.
Well, at least the advertising companies will lose money this way
Quickly and effortlessly get some music playing that can act as a backdrop for your real activity such as working, driving, cooking, hosting friends, etc. Keep it rolling indefinitely.
“Discover” new music by statistical means based on your average tastes.
This is the main thing I want out of music software tbh.
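For what it’s worth, the “statistical means” part can be surprisingly little code. Here’s a hypothetical sketch (every name in it is invented), assuming each track comes with a numeric feature vector: recommend the unheard track closest to the centroid of your listening history.

    // Hypothetical sketch: pick the unheard track whose feature vector sits
    // closest to the centroid ("average taste") of the listening history.
    type Track = { title: string; features: number[] };

    // Component-wise mean of equal-length vectors.
    const centroid = (vs: number[][]): number[] =>
      vs[0].map((_, i) => vs.reduce((sum, v) => sum + v[i], 0) / vs.length);

    // Euclidean distance between two vectors.
    const dist = (a: number[], b: number[]): number =>
      Math.hypot(...a.map((x, i) => x - b[i]));

    // Assumes the library contains at least one track not in the history.
    function recommend(history: Track[], library: Track[]): Track {
      const taste = centroid(history.map((t) => t.features));
      const heard = new Set(history.map((t) => t.title));
      return library
        .filter((t) => !heard.has(t.title))
        .reduce((best, t) =>
          dist(t.features, taste) < dist(best.features, taste) ? t : best);
    }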


I feel bad that you think that’s what I’m getting at with this. Arguments shouldn’t be about getting one over on someone; they should be about improving mutual understanding. I’m just not putting effort into finding and posting a link that nobody wants to see or thinks they could benefit from, that’s really it.
I apologize for rescinding the offer, but I’m going to do myself a favor and just block everyone and be done with it.


people in general don’t hate AI I swear guys
That’s not really the point, but whatever. Honestly this comment thread is exhausting, and I question whether anyone actually cares, which is why I don’t feel like taking the time to look up that information. But if you will tell me with full sincerity that you care whether it exists, I will try to find the link for you.


so having rules against AI on a platform is “vigilante enforcement”
I feel like you’re dramatically misinterpreting my statements on purpose now; this one is more obvious. I’m on the fence about whether disclosure requirements are a good idea, but I’m not emphatically condemning them; it’s understandable that platforms have them. But I am emphatically condemning efforts to use AI disclosures to brigade and harass developers, and I think the existence of those efforts is the reason why requiring disclosure is questionable.


Barely even trying to pretend to be honest
This is a bad thing.