According to Techmeme, X is rolling out its About This Account feature globally, letting users see the country or region where an account is based by tapping the signup date on its profile. The feature immediately revealed that numerous high-profile MAGA and Democrat accounts were actually operated from foreign countries such as Russia and China. This follows reports from The Daily Beast and Hindustan Times exposing foreign actors posing as American political influencers. Meanwhile, iOS 27 is reportedly shaping up as a Snow Leopard-style update focused on bug fixes and performance alongside new AI features. And that Financial Times report about Tim Cook retiring? Apparently premature, according to sources.
The transparency that backfired
Here’s the thing about transparency features: they often reveal more than platforms intend. X’s new location disclosure tool basically pulled back the curtain on what many long suspected: a significant share of political discourse is being shaped by foreign actors. We’re talking about accounts with thousands of followers, presenting themselves as grassroots American voices, suddenly showing they’re based in Moscow or Beijing.
And honestly, this feels like closing the barn door after the horses have already influenced an election or two. The feature rolled out globally within hours, which suggests X knew exactly what they’d find. But is showing a location enough? These operations are sophisticated enough to use VPNs and other methods to mask their true origins. So while it’s a step in the right direction, it’s probably not a silver bullet.
The influence operation reality check
Look, we’ve known about foreign influence campaigns since 2016, but seeing them laid bare like this is still shocking. Accounts that were actively shaping political conversations, building followings, and presenting themselves as authentic American voices turned out to be completely manufactured. This isn’t just about Russian troll farms anymore; we’re seeing operations from multiple countries targeting both sides of the political spectrum.
What’s particularly concerning is how long these accounts operated undetected. They built credibility over time, established patterns of behavior that seemed genuine, and blended into the ecosystem. The new feature basically confirms that our online political conversations are more compromised than many wanted to admit.
The platform responsibility question
So here’s my question: why did it take this long? X had the data about account locations all along. They knew where these accounts were accessing the platform from. The decision to finally surface this information feels reactive rather than proactive – like they’re responding to external pressure rather than leading on platform integrity.
And let’s be real – this is the same platform that’s been cutting trust and safety teams while promising more free speech. There’s a fundamental tension here between transparency and the platform’s business model. Engagement drives revenue, and some of these foreign accounts were definitely driving engagement. It’s convenient to blame “bots” while quietly benefiting from the activity they generate.
The broader implications
This isn’t just an X problem – it’s an internet problem. The structural misalignment in our economic system that Gennaro Cuofano mentioned? It applies directly to social media. We’ve built systems that reward engagement without properly valuing authenticity or truth. AI is only going to make this worse, creating more sophisticated fake accounts that are harder to detect.
Basically, we’re in an arms race between platform security and bad actors, and the bad actors are innovating faster. The new location feature helps, but it’s like putting a bandage on a bullet wound. The real solution requires fundamental changes to how we design and moderate these platforms. And given the economic incentives involved, I’m not holding my breath.
