• 1 Post
  • 173 Comments
Joined 2 years ago
Cake day: June 30th, 2023

  • The channels later returned to monetization when they started adding “fan trailer,” “parody” and “concept trailer” to their video titles. But those caveats disappeared in recent months, prompting concern in the fan-made trailer community.

    YouTube’s position is that the channels’ decision to revert to their previous behavior violated its spam and misleading-metadata policies. This resulted in their termination.

    It sounds like they were banned only for omitting a disclaimer that the trailers aren’t real, so unless that deters people from clicking on them, I assume others will do the same thing while making sure to disclose.



  • can learn how to be rejected without getting violent, or even mildly annoyed, anyone can. The reason people don’t is because they don’t want to

    it’s because they’re pieces of shit and it reflects on their personality, and no one likes people who have a shitty personality

    Rejection makes people feel bad as a rule. That’s not an excuse for treating others badly, and there are ways to learn a healthier mindset, but I think it’s worth mentioning that it’s okay for people to at least feel the way they do, and that having the “wrong” emotions in response to things doesn’t make you a bad person. It just means you might have to work harder to make sure you treat others with decency.



  • I don’t hate this article, but I’d rather have read a blog post grounded in the author’s personal experience engaging with a personalized AI assistant. She clearly has her own opinions about how these assistants should work, but instead of writing about that, the article tries to project a lot of objective certainty, and that falls flat because it never draws a strong connection between its claims and its evidence.

    Like this part:

    Research in cognitive and developmental psychology shows that stepping outside one’s comfort zone is essential for growth, resilience, and adaptation. Yet, infinite-memory LLM systems, much like personalization algorithms, are engineered explicitly for comfort. They wrap users in a cocoon of sameness by continuously repeating familiar conversational patterns, reinforcing existing user preferences and biases, and avoiding content or ideas that might challenge or discomfort the user.

    While this engineered comfort may boost short-term satisfaction, its long-term effects are troubling. It replaces the discomfort necessary for cognitive growth with repetitive familiarity, effectively transforming your cognitive gym into a lazy river. Rather than stretching cognitive and emotional capacities, infinite-memory systems risk stagnating them, creating a psychological landscape devoid of intellectual curiosity and resilience.

    So, how do we break free from this? If the risks of infinite memory are clear, the path forward must be just as intentional.

    There’s some hard evidence that stepping out of your comfort zone is good, but not really any that keeping people inside their comfort zone is, in practice, the effect that “infinite memory” features of personal AI assistants have on people; it’s just rhetorical speculation.

    Which is a shame, because how these features affect people is pretty interesting to me. The idea of using an LLM with them always freaked me out a bit, and I quit using ChatGPT before they were implemented, but I want to know how it’s going for the people who didn’t, and who use it for things like the given example of picking a restaurant to eat at.