AI has made it vastly easier for malicious actors to identify the people behind anonymous social media accounts, a new study has warned.

In most test scenarios, large language models (LLMs) – the technology behind platforms such as ChatGPT – successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted.

The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost effective to perform sophisticated privacy attacks, forcing a “fundamental reassessment of what can be considered private online”.

In their experiment, the researchers fed anonymous accounts to an AI and had it extract all the information it could. They gave a hypothetical example of a user who talked about struggling at school and about walking their dog Biscuit through “Dolores Park”.

In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence.
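The matching step described above can be illustrated with a toy sketch. This is not the researchers’ actual pipeline (the study used LLMs to do the extraction and comparison); the cue phrases, handles, and posts below are hypothetical, drawn from the article’s example, and the scoring is a simple overlap fraction.

```python
# Toy illustration of cross-platform detail matching. Hypothetical only:
# in the study, an LLM extracted and compared identifying details; here we
# hard-code a few example cue phrases from the article's scenario.

def extract_details(posts):
    """Collect distinctive details (as lowercase phrases) from posts."""
    cues = ["biscuit", "dolores park", "struggling at school"]
    found = set()
    for post in posts:
        text = post.lower()
        for cue in cues:
            if cue in text:
                found.add(cue)
    return found

def match_score(anon_details, candidate_details):
    """Fraction of the anonymous account's details also found on the candidate."""
    if not anon_details:
        return 0.0
    return len(anon_details & candidate_details) / len(anon_details)

# Hypothetical data: one anonymous account, two candidate public profiles.
anon_posts = ["Walked my dog Biscuit through Dolores Park again today."]
candidate_posts = {
    "jane_doe": ["Biscuit loved Dolores Park this morning!"],
    "john_roe": ["Great ramen downtown."],
}

anon = extract_details(anon_posts)
scores = {name: match_score(anon, extract_details(posts))
          for name, posts in candidate_posts.items()}
best = max(scores, key=scores.get)  # the candidate sharing the most details
```

The point of the sketch is that even a crude overlap score separates the candidates once distinctive details (a pet’s name, a specific park) appear on both accounts; the study’s finding is that LLMs make this extraction and comparison cheap at scale.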

Study link: https://arxiv.org/abs/2602.16800

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social

This is only possible if you have other social media where you’re not anonymous, which could just as easily have been linked to your anonymous accounts without the use of AI. If your entire online identity is anonymous, there’s nothing to link your real identity to.

Anyone who uses their real name on a website has never been truly anonymous. Ever.

    Shit used to be common knowledge that you do not use your real name and you do not share photos online. Now that’s what 90% of people exclusively use the internet to do.