“AI systems controlled by billionaire tech bros are certain to give me an answer that’s fair and unbiased!”
@grok is this true
People are going to do the laziest thing possible. More at 11.
Friends ask me who I voted for every election. I always go into a long-winded explanation of the candidates and what they stand for before sharing my selection and reasoning.

Absolutely nobody needed a new study to show the risks. We all saw what could happen as Musk altered Grok to behave in ways he approved of.
I suppose I should be surprised that people WANT to give up their right to think for themselves. But I'm not.
To address this gap, researchers ran an experiment during the final week of Japan's February 8, 2026, general election. The experiment reveals a striking pattern: when asked which party to support in the election, five major AI models from three companies overwhelmingly directed voter profiles with left-leaning policy positions toward the Japanese Communist Party (JCP). The reason, according to the researchers, has to do with the information environment AI systems can access. … Furthermore, left-leaning policy views in voter profiles caused all five AI models to converge overwhelmingly on recommending the JCP, even though other parties hold broadly similar positions on the issues tested. The concentration on recommending the JCP under left-leaning policy stances is therefore not explained by ideological distinctiveness.
I mean, this is both good and bad, isn't it? If people are relying on Grok, they're probably going to get bullshit. I've actually found ChatGPT to be the worst and most biased compared to Gemini or Claude, especially as it pertains to Israel. ChatGPT will claim killing tens of thousands of innocent people in Gaza is nuanced because of religious doctrine, while Gemini, at least recently, will acknowledge human rights abuses among other things. If people are using it to form an opinion entirely, that's probably not a good thing, but if they're asking questions to learn more about something, it's likely a good thing that they're trying to become more educated. But again, it really depends on the AI, how you use it, and whether people treat everything as fact or do more research based on the answers, including asking for sources.
I think that if you aspire to regulate the political positions that AIs should recommend, you… okay, that's probably not a great idea. But setting that aside, it seems pretty odd that you'd want to do that without also regulating the political positions of webpages that search engines return, or the political positions that news media may take, which is what I'd consider the alternate information sources.