Published weekly by the Media Council of Kenya


Is NCIC’s social media monitoring plan a threat to privacy?

By Ben Serem

On January 20, Citizen TV reported that the National Cohesion and Integration Commission (NCIC) had developed a framework to monitor social media platforms for hate speech, incitement, and radicalisation. The initiative is expected to involve government agencies, civil society actors, and technology companies, with NCIC stating that the process will comply with Kenya’s Constitution and the Data Protection Act. Monitoring will cover TikTok, Facebook, X, Instagram, and WhatsApp, with artificial intelligence (AI) deployed to help identify harmful content.

At face value, the move appears timely and even necessary. Social media has become a powerful vehicle for mobilisation, propaganda, disinformation and, at times, ethnic hostility. NCIC’s mandate to promote national cohesion and prevent discrimination arguably requires the commission to keep pace with how conflict and division now unfold in digital spaces. However, for many Kenyans, the announcement also triggers a deeper concern: whether the state is quietly building a system that could expand surveillance, shrink civic space, and reshape free expression, especially as the country moves closer to the 2027 General Election.

Hate speech and incitement have long been serious threats to national peace and stability. In past election cycles, inflammatory speech at rallies, on vernacular radio stations, and in public forums has been linked to heightened tension and, in the worst cases, violence. Today, much of that rhetoric has migrated online, where it spreads faster, reaches wider audiences, and is amplified by algorithms that reward engagement over accuracy or social responsibility.

From this perspective, the commission’s monitoring framework is being framed as an early-warning system: a way to detect and respond to dangerous narratives before they escalate. The involvement of civil society and tech companies is presented as proof that the initiative is not purely a government operation but a multi-stakeholder response to a real national challenge. Still, the motivations behind such a move do not automatically remove legitimate fears about how it could be implemented.

Does this warrant privacy concerns? The short answer is that Kenyans should at least pay attention. One of the biggest questions is what "monitoring" will mean in practice. Public platforms like X, Facebook, and TikTok are easier to observe because much of the content is openly accessible. But WhatsApp, which is widely used in Kenya for political messaging, community organising, and everyday conversations, is more complicated because it uses end-to-end encryption. If monitoring extends into such spaces, it raises uncomfortable possibilities, including tracking sharing patterns, infiltrating groups or, in the worst case, attempts to bypass encryption through broader surveillance mechanisms.

The second issue is AI. While AI can help detect patterns, it is not neutral. Automated systems often struggle with context, sarcasm, coded speech, local languages, and political nuance. In Kenya, where political debate can be heated and highly symbolic, an AI system may flag legitimate criticism, satire, or activist messaging as “harmful content,” especially if the definitions are vague. Without transparency, Kenyans may never know whether their posts were flagged due to actual incitement or simply because they challenged power.

Several red flags stand out and deserve scrutiny. The first is definitional ambiguity: without a publicly available and legally grounded framework, terms like "hate speech", "incitement" and "radicalisation" could be enforced subjectively. A system that is unclear in its rules is vulnerable to selective application.

Second is the risk of political weaponisation. Monitoring tools introduced under the banner of cohesion could, in the wrong hands, become tools for suppressing dissent or intimidating critics.

Third is accountability and oversight. It is not yet clear what independent mechanisms will exist to ensure the monitoring remains lawful and proportionate. If an account is flagged, content removed, or an individual targeted for investigation, what appeal process will exist? Who audits the AI? Who ensures the system is not being abused? Without safeguards, such programmes can operate with minimal transparency while deeply affecting citizens’ rights.

The NCIC’s announcement comes at a time when Kenya’s legal environment around online speech is already contested. In late 2025, Parliament amended the Computer Misuse and Cybercrimes Act, and critics described some of the new provisions as ambiguous and vulnerable to misuse. Constitutionalist Ng’ang’a Muigai warned that such amendments could be used to curtail freedom of expression and even provide justification for internet shutdowns during the 2027 elections.

Whether or not those fears materialise, the timing matters. Kenya is entering the long pre-election period where political messaging intensifies, propaganda networks become more organised, and controlling narratives gives a political edge. Historically, moments of political anxiety have often produced aggressive regulatory responses. In this context, the NCIC’s monitoring framework may be interpreted not only as a response to hate speech but also as part of a broader tightening of digital governance.

This is what makes the debate urgent. If Kenyans do not ask questions now about scope, oversight, transparency, data handling, and accountability, the framework could evolve quietly into something far more intrusive than originally presented.

Ben Serem is a media analyst at the Media Council of Kenya
