Social media was meant to be a space for people to connect, share ideas, build communities, and participate in public life, a digital agora.
But what once brought us closer is now being overrun by automation.
Instead of amplifying real voices, algorithms are increasingly pushing artificial interaction.
Sinead Bovell argues that one of the biggest casualties of artificial intelligence might be social media, questioning whether it can still deliver on its original promise.
Social media platforms were designed to (profit from) human connection, but have become testing grounds for machine learning where content is generated, optimized, and circulated not for meaning, but for engagement.
When the bots talk to themselves
The problem isn’t just that bots are posting; they are now also interacting. Automated accounts debate politics, post commentary, and even engage with one another.
We are getting closer to social media feeds that no longer reflect collective thought, but pure algorithmic noise.
What once felt like a conversation, often toxic, but still somewhat human, has now become a feedback loop of outrage, memes, and synthetic empathy.
The social web is no longer social.
It is optimized for impressions and views, rather than communication.
Twitter’s fall from grace
No platform illustrates this decline better than X (formerly Twitter).
For years, it was a digital town square where journalists broke stories, critical information was shared, and public debate unfolded in real time.
But after Elon Musk’s takeover, Twitter’s open exchange gave way to algorithmic amplification and paid verification.
According to The Washington Post, bot activity on X increased by more than 20 percent in 2023, with some networks generating “dozens of emotionally charged replies per minute.”
The result has been a steady erosion of trust. Authentic debate is buried beneath automated outrage, and users are left uncertain whether the accounts they interact with are bots or rage baiters, or even where they are from.
This has led to human users imitating bot behavior, exaggerating emotion and using formulaic hooks just to optimize content for some extra likes.
The human cost of automation
For those working in journalism, communications, or media production, the implications are serious. The online spaces where stories once surfaced and spread are now polluted by noise.
This makes human editorial judgment, the ability to verify, contextualize, and communicate with empathy, more critical than ever.
Video still feels human, for now
Among all formats, video has remained the most human.
Seeing a person speak, react, and express emotion creates a kind of digital intimacy that text alone cannot match.
Video has allowed creators, journalists, and educators to build real connections with audiences that feel immediate and honest.
But as AI-generated avatars, voice-cloning tools, and automated translation become more sophisticated, even video faces a new credibility test.
Synthetic presenters, like the one recently introduced on Channel 4, can now mimic eye contact, tone, and gesture with unsettling accuracy.
Deepfake videos and AI news anchors already blur the line between performance and authenticity.
If the social web becomes flooded with machine-made video, the last bastion of genuine online connection could also be lost.
To protect the integrity of visual storytelling, creators and organizations can focus on small but effective habits:
– Keeping behind-the-scenes moments visible.
– Showing imperfections rather than editing them out.
– Being transparent about the tools they use.
Audiences respond to humanity, not polish.
Reclaiming social media
Despite everything, there is room for optimism.
Users are starting to value smaller, more intentional online spaces such as Mastodon, Bluesky, and newsletters, where communities grow through shared interest rather than algorithmic manipulation.
They are also learning to set healthier digital boundaries, curating feeds, following fewer but more trusted voices, and engaging with long-form content instead of endless scrolling.
On the creator side, journalists and communicators can rebuild trust by emphasizing transparency, collaboration, and the kind of storytelling grounded in lived experience, the kind AI cannot replicate.
If social media is becoming the first casualty of AI, it might be because AI was never meant to be social to begin with. The more machines talk to one another, the more valuable authentic voices become.
The task now, for journalists, creators, and audiences alike, is to protect the digital agoras that remain and to keep them human.