It’s pretty simple: if it’s not important, who tf cares if it’s AI or not. If it’s important, there’s normally a way to verify whether the portrayed information is authentic, because it will be important to others too (the info is the important part, not whether the medium you got it from is real). Life is easier this way, and important info should have been verified before AI, too.
The info provided is that there exists another happy dog out there doing happy dog things, and I briefly connected with it, which made me happy. This information would be incorrect if it were AI-generated.
The information that there are happy dogs out there doing happy dog things isn’t wrong, though, regardless of how many images of happy dogs are real or fake.
You can also tell me that someone out there won the lottery this week, and it can be true. It’s not the same as seeing that person’s live reaction to learning about it. It wouldn’t even be the same if you watched that person act out the scene exactly as it happened. AI-generated content is so much further removed from all that.
I’d say it’s less removed: the person re-enacting is acting on their own interpretation of what happened, while the generated images are distilled versions of real people winning the lottery.
Also, like I said, life is much easier once you accept that not everything you can emotionally connect to has to be authentic. I can very much connect to the notion that there are happy pets out there even if I’m looking at a drawing of a happy pet.
I get what you’re saying, but I really miss shows that did re-enactments of actual events. That shit was funny as hell. Especially the ones on Operation Repo, which were re-enactments of entirely bullshit/exaggerated stories. Like the dude who took down a whole mob operation because he had to repossess a car. 🤣
But the problem exists if the average person can’t tell the difference or doesn’t care to verify it. Media literacy is at an all-time low, at least in my country (guess). Without regulation, the spread of unlabeled AI content on social platforms can only accelerate the decline.
That’s an age-old problem, just scaled up. There has always been misinformation on social media (and before that, in every bar). In the US it’s especially bad, mostly because the GOP profits directly from misinfo and has done as much damage as possible to the education system to keep it that way. That’s also why there won’t be any legislation on labeling AI content (which would be preferable, but isn’t enforceable even today) coming from your continent in the next few decades, sorry :-(
That might still be a “good” thing. More people than before are becoming aware that what they see on social media is not reality but entertainment that may or may not be real. It could lead to a general rejection of the notion that social media shows the truth.
But all in all, it’s still of no importance if it’s imagery meant to give cozy feelings via cute animals, like in the meme. An entertaining story doesn’t have to be true to be entertaining, and in the same vein, a cute pet image doesn’t have to depict a real pet to be cute.
Sure. But I can make my own AI image of a cute dog, and where’s the satisfaction in that?
Hence, I think it cracks open a bigger issue than AI: the ‘illusion’ of authenticity on social media. Our squishy brains doomscroll with the fantasy that the stuff is real, and candid, and honest, and gems we found…
But that’s never really been true.
It’s largely staged content designed to go viral and make someone a buck. Or sell something. And it’s served by billion-dollar algorithms designed to model and hijack your brain.
My hot take: people are upset that slop smashed that illusion with a hammer. Social media has been addictive fakeness for years; it’s just glaringly obvious now.
If someone tells me an entertaining story that connects with me emotionally, what matters isn’t so much whether the story is true per se; it’s that it’s told well. The storyteller might have invented the whole thing, or based it on something similar and modified/exaggerated it, but that doesn’t take away from the story. If I told myself the story, it wouldn’t be satisfying either (unless I’m worldbuilding or an author, where the satisfaction has other sources).
It’s an interesting thought, and it would explain why people react so intensely. For my part, I picked up on the fakeness of Facebook very quickly: when I got riled up during the Arab Spring in Libya, I realized that I was easily emotionally manipulated by the served content, which made me quit.
Nowadays I know much better how to verify information that’s important to me; a picture of a dog licking a cat and making her purr will always be emotionally positive for me, because a) it doesn’t matter outside of my satisfaction, just like the well-told story, and b) I can’t check it for authenticity either way, so I don’t care about authenticity.
I agree. It honestly makes me mad that people get in such a huff over using generative models for fiction; they’re just another generation of storytelling tools.
The issue is blurring fiction and reality.
This isn’t just a problem with AI. See: influencers, tabloids, and “news” that sell caricatures of reality.
But AI makes it far too easy to distribute fakeness in spaces that are supposed to be real. That is very dangerous. And this is what it ended up being used for.
> Nowadays I know much better how to verify information that’s important to me; a picture of a dog licking a cat and making her purr will always be emotionally positive for me, because a) it doesn’t matter outside of my satisfaction, just like the well-told story.
…I think I’ve used generative models enough to get desensitized to the “feel good” bit. I guess I felt like you once, but having peeked behind the curtain, the feeling has gone away.
But if they make you feel good, good. That’s what art’s supposed to do.
Like I said in another answer, maybe that loss of confidence in the authenticity of what we see online will have a positive effect in the future, where people start rejecting what they see on the web as the truth and start believing what authoritative people say again; I hope they start listening to their doctors, teachers, and scientists again instead of grifters and con men. In that case, anonymous social media will find itself dead in the water, with media using verified and authenticated profiles winning out.
It might cause the combined stupidity that made things like QAnon possible to fall apart into the small splinter cells of town idiots they were before.