Do other generative AI services also do this? Or do they have functioning safeguards?
Usually there are safeguards against NSFW content on public services.
Asking for a friend?
No. I was just interested because I’ve seen a lot of headlines about how Grok creates all these problematic images of women, minors, etc., but not about other similar generative AI software. My understanding so far has been that there’s no real way to safeguard generative AI at all, so I was wondering whether this was a Grok-only problem, and if so, how others were avoiding it.
🤨