Tuning the Algorithm: In Dublin, TikTok Offers a Dimmer Switch for AI — But Not an Off Button
On a gray Dublin morning, amid the low hum of a busy office block and the scent of coffee, TikTok opened its doors to reporters, creators and safety experts from across Europe for what felt, at times, less like a corporate briefing and more like a public conversation about the future of attention.
The company’s announcement was straightforward: a new control that lets users tell TikTok how much AI-generated content they want to see in their “For You” feeds. It’s as if the app handed millions an old-fashioned dimmer switch for the algorithm — except you can’t switch the light off completely.
A slider, not a shutdown
The feature, revealed at the European Trust and Safety Forum in TikTok’s Dublin office, offers a choice along a continuum from “see less” to “see more.” There is no option to block AI-generated content entirely.
“We want people to be able to shape their experience,” said a TikTok spokesperson at the event. “But we also believe AI can power creativity and discovery, so the approach is about moderation and transparency rather than elimination.”
A Dublin-based creator, Aoife Murphy, who makes short documentaries about urban life, told me, “It’s nice to have a choice. I’m worried about deepfakes and about kids thinking AI-made clips are real. But I also love the AI tools that help me edit faster. This lets me keep the good and dodge the weird.”
What the control does — and doesn’t — do
On paper, the update sounds simple; in practice it raises messy questions about autonomy, curation and the invisible architecture of attention-economy platforms. Here's what TikTok says the new control will do:
- Allow users to indicate a preference for more or less AI-generated content in their feeds;
- Include continued investment in labelling AI content across the app;
- Back an educational push, a $2 million (€1.73m) fund, for experts to produce AI literacy material.
“Giving people a lever is progress,” said Dr. Maren Vogel, a digital-safety researcher in Berlin. “But if the lever only nudges rather than empowers full choice, we need to look closely at how that nudging shapes what you see and who benefits.”
Wellbeing features: badges, missions and late-night scrolling
Alongside AI controls, TikTok rolled out what it calls a “Time and Wellbeing Space.” It’s designed as a digital alcove where users can attempt “wellbeing missions” — practical nudges like sticking to screen-time limits or avoiding nighttime scrolling — and earn badges for meeting those goals.
“It’s sort of gamified mindfulness,” said Zara O’Connor, a content creator focused on mental health education. “If earning a badge helps a teenager shut their phone an hour earlier, that’s a win. But badges are not therapy, and we shouldn’t let shiny rewards hide deeper structural problems.”
Concerns about TikTok and young people’s mental health are well-documented. Researchers and advocacy groups have repeatedly warned about the app’s power to amplify extreme content, encourage addictive patterns of use, and distort young users’ sense of reality.
Safety by numbers
TikTok has been trying to answer some of those criticisms with data. At the Dublin event it disclosed that more than 6.5 million videos were removed in the first half of this year for violating its rules, and that it had taken down “more than 920 accounts dedicated to spreading hate.” The platform also pledged more transparency around how it handles violent and hateful material.
“Removing content is necessary but not sufficient,” cautioned Dr. Vogel. “Scale matters. There are hundreds of millions — even more than a billion — of users globally on platforms like this, and content moderation is always a race between human intent and machine scale.”
The trust gap and the question of agency
For many users and regulators, the core tension is simple: platforms built on algorithmic curation increasingly rely on AI to create and surface content. People want autonomy, but companies have incentives to maximize engagement. The new TikTok control acknowledges that tension, but stops short of ceding full agency to users.
“A slider is a start, but it’s also symbolic,” said Isabelle Laurent, a policy expert who has advised European regulators. “Regulators want to know: can consumers truly opt out of machine-generated influencers, synthetic media, or content prioritized by economy-driven prompts? Sliders might feel like empowerment, but they are still company-controlled settings.”
Across the room in Dublin, a teenage creator named Luca summed it up more bluntly: “I don’t want the app deciding what’s real for me. But I also don’t want to switch platforms. This is trying to meet me halfway.”
Cultural texture: Dublin as backdrop
The choice of Dublin for the forum reflected more than geography: Ireland is home to many tech firms’ European headquarters, and its café-lined streets and late-night pubs provide a strange comfort for policy wonks and creators who fly in from across the continent. In conversation, people kept returning to local details — the cadence of Irish English, the ease of finding a quiet study corner in a hostel, the way a brisk walk along the Liffey clears the head.
“Technology conversations happen in the abstract,” said Seán Ní Ríordáin, a community organiser who runs digital-literacy workshops in Dublin. “But when you bring them here, in a city that’s both global and intimate, you hear different worries: parents asking how to explain AI to a nine-year-old, teachers asking for lesson plans.”
What this means for the wider debate
TikTok’s moves are part of a broader pattern: platforms are under pressure to offer users greater transparency and control, while governments and civil-society groups push for stronger rules. The company’s $2m literacy fund signals a willingness to invest in education, but it also raises questions about who gets to define literacy and how much responsibility falls on private companies versus public institutions.
So where does that leave the rest of us — creators, parents, policymakers, casual scrollers who open the app with a cup of coffee and ten minutes to spare?
We are being invited to participate in a negotiated future where AI is baked into the media we consume. That’s both exhilarating and unnerving. It’s an opportunity to insist on better labelling, stronger opt-out mechanisms and more public investment in critical thinking. And it’s a moment to ask whether a slider is enough when what’s at stake is how an entire generation understands truth, creativity, and attention.
“We can’t outsource civic education to platforms,” Dr. Vogel said. “But we can push companies to be better partners in the work.”
After the forum: a call to action
If you use TikTok — or any service that filters content through AI — take a moment to look at the settings. Try the slider. Talk with your family about what it means when a video can be crafted by code as easily as by people. Ask your local schools whether they teach media literacy. Ask your representatives whether the rules keep pace with the tools.
Because at the end of the day, a platform can hand you a choice. It’s up to society to decide what choices are meaningful.