I. Introduction: The Imperative of AI and Biometric Data Regulation in Kenya
The rapid advancement and pervasive integration of Artificial Intelligence (AI) and biometric technologies are fundamentally reshaping societies and economies globally, including Kenya. From enhanced security systems to personalized services, these innovations promise significant benefits. However, alongside these advancements, there is a growing chorus of concern regarding the unregulated or inadequately regulated use of these powerful tools, particularly facial recognition and AI surveillance. The potential for misuse, privacy infringements, and erosion of fundamental rights necessitates urgent and robust regulatory intervention.
The current regulatory framework in Kenya, while providing a foundation for data protection, struggles to keep pace with the exponential growth and complexity of AI and biometric technologies. This “regulatory lag” is a primary driver of the concerns surrounding unregulated use, increasing the risk of unmitigated harms. Without proactive, AI-specific legislation, Kenya faces the risk of becoming a testing ground for technologies with profound societal implications, potentially leading to irreversible privacy breaches or systemic discrimination before adequate safeguards are in place. This report will articulate why comprehensive regulation of AI and biometric data is not merely a legal formality but a critical imperative for safeguarding privacy, security, and civil liberties within the Kenyan legal landscape, grounded in the provisions of the Data Protection Act, 2019.
II. Understanding Biometric Data and AI: Definitions and Inherent Risks
A. Defining Biometric Data under Kenyan Law
The Kenyan Data Protection Act, 2019 (DPA) provides a clear and comprehensive definition of biometric data. It is defined as “personal data resulting from specific technical processing based on physical, physiological or behavioral characterization including blood typing, fingerprinting, DNA analysis, earlobe geometry, retinal scanning and voice recognition”.1 This broad scope ensures that various unique identifiers are covered under the Act.
Crucially, the DPA classifies biometric data under the “sensitive category of personal data”.1 This classification is significant because it mandates a higher standard of protection and stricter conditions for processing. Section 45 of the DPA stipulates specific legal bases for processing sensitive biometric data, which include processing in the course of legitimate activities of a foundation, association, or NGO with a political, philosophical, religious, or trade union aim; data made public by the data subject; processing necessary to establish, exercise, or defend a legal claim; or processing necessary to protect a data subject’s or another person’s vital interest.1 For instance, employers are advised to refrain from processing biometric data for monitoring employee attendance due to its substantial privacy impact, with recommendations for less intrusive means such as radio frequency identification cards or manual sign-in sheets.1
B. The Nature of AI and its Intersection with Biometrics
AI systems possess an unparalleled capacity to process, analyze, and derive insights from vast datasets, including complex biometric information like facial geometry and voice waves.2 This capability enables sophisticated applications such as facial recognition and predictive analytics.
However, the intersection of AI with biometric data introduces unique vulnerabilities. Unlike traditional passwords that can be reset if compromised, biometric data, once stolen or breached, “cannot be easily changed or reset”.3 This inherent immutability creates “long-term security risks,” rendering individuals perpetually “vulnerable to identity theft, surveillance, and misuse”.3 A compromise involving biometric information presents challenges that extend significantly beyond simple data breaches.4 The combination of AI’s processing power and the irreversible nature of biometric data elevates the risk profile from mere data breaches to a fundamental, long-term threat to an individual’s identity and security. This creates an irreversible harm multiplier effect, making regulation not just about protecting data, but about safeguarding the very essence of individual identity in the digital realm. This unique vulnerability demands more stringent regulatory controls, including robust security protocols, data minimization, and strict retention policies, as the consequences of failure are far more severe and lasting than with other forms of personal data.
III. The Kenyan Legal Framework: Data Protection Act, 2019 and its Application
A. Core Principles and Rights under the DPA
The DPA, modeled after the EU’s General Data Protection Regulation (GDPR), establishes robust data protection standards in Kenya.5 It mandates adherence to principles such as lawful, transparent, and fair processing, purpose limitation, data minimization, and data accuracy.5 These principles are fundamental to ethical data handling.
The DPA grants several crucial rights to data subjects, empowering individuals with control over their personal information 5:
- Right to be Informed: Data subjects must be informed about how their personal data is used, including details of automated decision-making and risks of cross-border transfers.5
- Right to Access: Individuals have the right to access their personal data held by data controllers or processors within 7 days of request.5
- Right to Object: Data subjects can object to the processing of their personal data unless compelling legitimate interests override their rights. This right is absolute for direct marketing purposes.5
- Right Not to be Subjected to Automated Decision-Making: A cornerstone for AI regulation, this right ensures individuals are not subject to decisions based solely on automated processing, including profiling, that produce legal effects concerning them or significantly affect them.5 This includes the right to human intervention and expression of views.5
- Right to Rectification and Erasure: Data subjects can request correction of inaccurate data or deletion of false or misleading data within 14 days.5
- Right to Data Portability: Data subjects can receive their data in a structured, machine-readable format and transmit it directly to another controller if technically feasible.5
While the DPA provides a strong foundational legal framework for data protection, its general nature means it does not fully address the specific, complex, and rapidly evolving challenges posed by advanced AI systems. Although the “Right Not to be Subjected to Automated Decision-Making” is crucial, it may not encompass all nuances of AI’s influence, such as AI-assisted decisions where human review is perfunctory, or complex profiling that does not directly lead to a “legal effect” but significantly impacts an individual. This suggests a need for AI-specific legislation or detailed regulatory guidance to interpret the DPA in the context of modern AI. Without this specific AI legislation or guidance, there is a risk of regulatory arbitrage, where AI developers and deployers might exploit ambiguities or gaps in the DPA, leading to unintended consequences and undermining the spirit of data protection principles in the AI context. This could also stifle innovation if companies are unclear on compliance requirements for novel AI applications.
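The "Right Not to be Subjected to Automated Decision-Making" implies, in practice, a routing rule: decisions with legal or similarly significant effects, or decisions the system itself is unsure about, must be escalated to a human reviewer. The sketch below illustrates that gate. All names and the confidence threshold are illustrative assumptions for this article, not provisions of the DPA.

```python
from dataclasses import dataclass

# Hypothetical "human-in-the-loop" gate for automated decisions, in the
# spirit of the DPA's right not to be subjected to solely automated
# decision-making. Names and thresholds are illustrative assumptions.

@dataclass
class Decision:
    subject_id: str
    model_score: float       # confidence reported by an automated model
    has_legal_effect: bool   # e.g. loan denial, benefit eligibility

def route(decision: Decision, confidence_floor: float = 0.95) -> str:
    """Escalate to a human whenever the decision produces a legal or
    similarly significant effect, or the model is not confident enough;
    otherwise allow automated handling."""
    if decision.has_legal_effect:
        return "human_review"   # a human must be able to intervene
    if decision.model_score < confidence_floor:
        return "human_review"   # low confidence, escalate
    return "automated"

print(route(Decision("DS-001", 0.99, has_legal_effect=True)))   # human_review
print(route(Decision("DS-002", 0.99, has_legal_effect=False)))  # automated
print(route(Decision("DS-003", 0.40, has_legal_effect=False)))  # human_review
```

Under such a rule, an AI-assisted decision can never produce a "legal effect" without a documented point of human intervention, which is the behaviour the DPA's right is meant to guarantee.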
B. Regulatory Oversight and Enforcement
The DPA established the Office of the Data Protection Commissioner (ODPC) as the primary regulatory body responsible for overseeing the Act’s implementation and enforcement.5 The ODPC is expected to keep pace with developing technologies and risks.7 Organizations are mandated to register with the Commissioner, identify lawful processing grounds, implement data protection principles, assess high-risk processing, safeguard data, and report breaches within 72 hours (48 hours for processors to notify controllers).5 They must also appoint an independent, qualified Data Protection Officer and maintain audit trails and data journeys to demonstrate compliance.8
A strong legal framework is only effective if it can be adequately enforced. The DPA, despite its robust design, faces significant practical enforcement hurdles due to resource constraints, technical capacity gaps within the ODPC, and low public digital literacy.7 These limitations directly impede the ODPC’s ability to effectively monitor, investigate, and penalize non-compliance, particularly in the complex and technical domain of AI and biometrics. Low public awareness also translates to fewer complaints and less pressure for compliance. This creates a de facto regulatory vacuum, even where legal provisions exist, allowing unregulated use to persist. This suggests that simply enacting laws is insufficient; effective AI and biometric data regulation in Kenya requires substantial investment in the ODPC’s technical capabilities, human resources, and public education campaigns to empower citizens to exercise their rights and report violations. Without this, the DPA’s protective intent may remain largely aspirational.
C. Current Application to Biometric Data
The processing of sensitive biometric data must align with Sections 44 and 45 of the DPA.1 Furthermore, employers are explicitly advised to “refrain from processing the data for the purposes of monitoring employee attendance and subsequently using the attendance to justify the remuneration” due to substantial privacy impact.1 Less intrusive means like radio frequency identification cards or manual sign-in sheets are recommended as alternatives.1 This highlights a practical application of data minimization and proportionality principles within the existing legal framework.
Table 1: Key Provisions of the Kenyan Data Protection Act, 2019 Relevant to Biometric Data and AI
| DPA Section/Regulation | Provision/Right | Relevance to Biometric Data / AI | Snippet ID(s) |
|---|---|---|---|
| Sections 44, 45 | Processing of Biometric Data / Sensitive Personal Data | Explicitly defines biometric data as sensitive and outlines legal bases for processing, mandating higher protection standards. | 1 |
| Section 25 | Data Protection Principles (Lawful, Transparent, Fair Processing; Purpose Limitation; Data Minimization; Data Accuracy) | Fundamental for ethical AI development and biometric data handling; ensures data is collected and used responsibly, limiting scope and ensuring integrity. | 5 |
| Section 43 / Regulation 22 | Right Not to be Subjected to Automated Decision-Making | Directly addresses AI’s impact on individuals, requiring human intervention and transparency in automated decisions to prevent adverse legal or significant effects. | 5 |
| Regulation 12(3) | Right to Erasure | Allows individuals to request deletion of their data, critical for biometric data which is immutable once compromised, providing a mechanism for control. | 5 |
| Regulation 49 | Data Protection Impact Assessment (DPIA) Requirement | Mandates DPIAs for high-risk processing, including automated decision-making with significant effects and large-scale use of biometric/genetic data, crucial for proactive risk mitigation in AI. | 5 |
IV. Why Regulation is an Issue of Concern: Impacts of AI and Biometric Data
A. Privacy and Security Implications
The most significant concern regarding AI and biometric data is the permanent nature of biometric data compromise. Unlike passwords, stolen fingerprints or facial patterns cannot be changed, leaving individuals vulnerable to identity theft, fraud, and persistent surveillance for life.3 Real-world examples, such as the Biostar 2 breach, demonstrate how weak safeguards can expose sensitive biometric data, allowing unauthorized access and manipulation.3
A widespread issue is the lack of understanding among individuals about how their biometric data is collected, stored, and protected. Organizations often fail to provide clear information on encryption practices or data handling policies, leading to apprehension and skepticism.3 Many users overlook the fine print in terms of service agreements, allowing companies to collect data with minimal transparency.4
Biometric systems, particularly facial recognition, inherently raise concerns about mass surveillance without explicit knowledge or consent.3 There is a significant fear that data collected for one purpose, such as security, could be misused for tracking individuals across different locations, leading to a profound loss of control over private lives and creating an environment where people feel constantly watched and scrutinized.3
B. Ethical and Societal Concerns
A critical ethical issue is the frequent absence of truly informed consent. Users often do not fully understand how their biometric data will be collected, stored, and used, nor are they always aware of the potential risks or alternatives.3 Consent should be explicit, voluntary, and allow individuals to opt-in or opt-out with clear information on data utilization.4
AI systems, especially facial recognition technology, are prone to perpetuating and even exacerbating existing societal biases. Studies consistently show that facial recognition is “least reliable for people of color, women, and nonbinary individuals”.4 Error rates for darker-skinned women can be significantly higher than for light-skinned men, with one study reporting 34.7% versus 0.8%.10 This effectively “automates discrimination” and is particularly dangerous when used by law enforcement.10 When a biased technology is deployed within an already biased system, it does not just reflect existing biases; it automates and amplifies them. The technology, often assumed to be objective, provides a veneer of scientific neutrality to discriminatory practices, making them harder to challenge. The unregulated use of biased AI and biometric tools transforms individual instances of discrimination into systemic, automated discrimination, creating a feedback loop where flawed technology reinforces and exacerbates existing societal inequalities, leading to a chilling effect on civil liberties and disproportionate negative impacts on vulnerable groups. This necessitates not just data protection but also robust ethical guidelines, mandatory bias audits, and human oversight for AI systems, particularly in high-stakes applications like law enforcement and public services, to prevent the automation of injustice.
Beyond breaches, there is a substantial risk of data misuse by cybercriminals, rogue employees, or even the collecting institutions themselves.3 If AI systems lack robust security, sensitive information becomes a prime target, leading to reputational damage and loss of trust.4
V. The Alarming Reality of Unregulated Facial Recognition and AI Surveillance
A. Erosion of Civil Liberties
Unregulated facial recognition surveillance presents an “unprecedented threat to our privacy and civil liberties”.11 It empowers governments and companies to “spy on us wherever we go”, tracking our faces at protests, political rallies, places of worship, and more.11 This constant monitoring leads to a “chilling effect on free speech and privacy,” as individuals may self-censor or avoid public activities for fear of being identified and tracked.4 Such surveillance violates fundamental constitutional rights, including the right to privacy and protection against unreasonable searches and seizures.10 The lack of consent for being tracked by facial recognition technology is a direct challenge to individual autonomy.10
B. Risks of Misidentification and Wrongful Outcomes
Faulty facial recognition matches have already led to severe real-world consequences. Robert Williams, a Black man in Michigan, was wrongfully arrested and detained for 30 hours due to a false face recognition match.11 Similarly, Kylese Perryman in Minnesota was falsely arrested based solely on incorrect facial identification.10
The technology’s inherent bias means that people of color are “more likely to be misidentified” 11 and thus more prone to wrongful arrests and being forced into lineups.10 Police surveillance cameras are disproportionately installed in Black and Brown neighborhoods, further exacerbating systemic racism.10 Beyond racial bias, the technology can target and identify vulnerable groups like immigrants and refugees, leading to detention and deportation on an “unprecedented scale”.10 The unregulated deployment of facial recognition technology fundamentally shifts the burden of proof in the justice system, implicitly establishing a dangerous precedent where individuals are presumed guilty based on flawed algorithmic identification. This undermines the core tenets of due process and justice. This erosion of fundamental legal principles demands immediate and stringent regulation that mandates transparency, human review, and clear avenues for challenging AI-driven decisions, particularly in law enforcement and judicial contexts, to prevent the automation of injustice.
C. Lack of Accountability and Transparency
Law enforcement agencies often use facial recognition technology “in secret and without any democratic oversight”.11 This lack of transparency prevents accountability and makes it difficult to challenge abuses.11 There are significant concerns about the absence of safeguards to prevent rights violations, with agencies potentially abusing surveillance authorities to spy on protesters or political opponents.11 Courts are beginning to flag the absence of detailed information about the use of face recognition technology in legal cases, ruling that such failures violate defendants’ constitutional rights.11
Table 2: Risks and Societal Impacts of Unregulated AI and Biometric Surveillance
| Category of Risk | Specific Impact / Concern | Example / Consequence | Snippet ID(s) |
|---|---|---|---|
| Privacy / Security | Irreversible Data Compromise | Biometric data cannot be reset, leading to long-term vulnerability to identity theft, fraud, and persistent surveillance. | 3 |
| Civil Liberties | Mass Surveillance / Chilling Effect | Governments/companies can track individuals at protests, political rallies, places of worship, leading to self-censorship and reduced public participation. | 4 |
| Discrimination | Algorithmic Bias / Misidentification | Higher error rates for people of color, women, and nonbinary individuals, leading to wrongful arrests and exacerbating systemic inequalities. | 4 |
| Accountability | Lack of Transparency / Oversight | Law enforcement uses technology in secret, without democratic oversight, hindering accountability and due process. | 11 |
VI. Projecting Future Impacts: The Consequences of Inaction
A. Escalation of Privacy Violations
Without robust regulation, the coming years will likely see an exponential increase in widespread tracking and profiling of individuals across public and private spaces. AI’s capacity to correlate vast amounts of biometric and personal data will lead to a pervasive loss of individual autonomy, where every movement and interaction could be monitored and analyzed. The immutable nature of biometric data, combined with increasingly sophisticated AI-powered cyberattacks, means that future data breaches could have catastrophic and irreversible consequences for individuals’ financial security, identity, and personal safety. The lack of ability to reset compromised biometrics will make victims perpetually vulnerable.
B. Deepening Societal Inequalities
If left unchecked, algorithmic biases embedded in AI systems, particularly facial recognition, will become deeply entrenched in critical sectors such as employment, housing, credit, and the justice system. This will automate and scale discrimination, disproportionately impacting marginalized communities and creating systemic barriers to opportunity and fairness. AI and biometric surveillance tools could be weaponized to target and marginalize specific groups, such as political dissidents, ethnic minorities, or vulnerable populations, on an unprecedented scale, leading to severe human rights abuses.
C. Erosion of Public Trust and Democratic Processes
A pervasive fear of surveillance will inevitably undermine public trust in institutions and civic spaces. Individuals may become hesitant to participate in protests, political rallies, or even public discourse, leading to a chilling effect on democratic freedoms. The ability to collect and analyze vast amounts of personal and biometric data could be exploited for political manipulation, targeted disinformation campaigns, or even social control, posing a significant threat to the integrity of democratic processes and the fabric of society. The unchecked proliferation of AI and biometric data collection creates an environment ripe for surveillance capitalism, where personal data is continuously extracted and analyzed for commercial or governmental control. This commodification of personal identity, combined with the chilling effect on public participation, directly undermines the foundations of a free and democratic society. If left unregulated, the pervasive use of AI and biometric data will not only lead to individual privacy violations but will fundamentally reshape the relationship between citizens and the state or corporations, fostering a climate of constant surveillance that erodes public trust, chills dissent, and ultimately compromises democratic processes by making individuals feel perpetually watched and potentially manipulated. This necessitates a regulatory approach that goes beyond mere data protection to address the broader societal and democratic implications of AI, focusing on principles of human autonomy, democratic accountability, and the prevention of mass surveillance infrastructure.
VII. Bridging the Gaps: Recommendations for a Robust Regulatory Framework
A. Addressing Gaps in Kenya’s Current Legal Landscape
While the DPA is a strong foundation, Kenya “lacks specific AI laws”.8 A dedicated AI law is crucial to address unique challenges such as algorithmic bias, accountability for AI-driven harms, and ethical considerations that are not fully covered by general data protection principles.8 This legislation should incorporate risk categorization based on human rights, similar to global frameworks.8 The DPA’s provisions, particularly the “Right Not to be Subjected to Automated Decision-Making,” need clearer interpretation and enforcement in the context of complex AI systems. This could include making Data Protection Impact Assessments (DPIAs) mandatory for all high-risk AI/biometric systems, as current deployments often “lack comprehensive legal safeguards” in this area.5
B. Policy and Technical Safeguards
Regulations should mandate that privacy is integrated into AI systems and processes “from the beginning”.7 This includes promoting “human-in-the-loop” principles, ensuring human accountability for AI-assisted innovations, and allowing for human intervention in automated decisions.5 Organizations must be required to collect “only the necessary data” 3, encrypt biometric data “both in transit and at rest” 3, and ideally process biometric data “directly on the user’s device” to minimize server-side storage and breach risks.3 Long-term storage of biometric data should be avoided.3 Clear and transparent data management policies are essential, detailing how data is collected, stored, used, and shared, including any third-party involvement.3 This builds trust and empowers users to make informed decisions.
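The safeguards above (collect only necessary data, avoid long-term storage of raw biometrics, enforce retention limits) can be illustrated in code. The sketch below is a minimal, hypothetical example: it stores only a salted digest derived from a biometric template rather than the raw sample, and purges records past a retention deadline. Note the simplifying assumption that real biometric matching requires fuzzy templates, not exact hashes; the digest here only illustrates the principle of storing a derived value instead of the raw data. The class name and 30-day retention period are illustrative, not drawn from the DPA.

```python
import hashlib
import os
import time

# Illustrative sketch of data minimization and strict retention for
# biometric records. Real systems need fuzzy biometric templates; the
# salted digest below only demonstrates "keep a derived value, never
# the raw sample". Names and retention period are assumptions.

RETENTION_SECONDS = 30 * 24 * 3600   # illustrative 30-day retention

class BiometricStore:
    def __init__(self):
        self._records = {}   # subject_id -> (salt, digest, expires_at)

    def enroll(self, subject_id: str, raw_template: bytes) -> None:
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + raw_template).hexdigest()
        expires_at = time.time() + RETENTION_SECONDS
        # Only the salted digest is retained; the raw sample is discarded.
        self._records[subject_id] = (salt, digest, expires_at)

    def matches(self, subject_id: str, raw_template: bytes) -> bool:
        record = self._records.get(subject_id)
        if record is None:
            return False
        salt, digest, _ = record
        return hashlib.sha256(salt + raw_template).hexdigest() == digest

    def purge_expired(self, now=None) -> int:
        """Delete records past their retention deadline; return the count."""
        now = time.time() if now is None else now
        expired = [sid for sid, (_, _, exp) in self._records.items()
                   if exp <= now]
        for sid in expired:
            del self._records[sid]
        return len(expired)
```

A scheduled job calling `purge_expired()` would operationalize the "avoid long-term storage" recommendation; encryption in transit and at rest, and on-device processing, would sit alongside this as separate controls.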
C. Capacity Building and Public Awareness
Regulating AI requires specialized expertise in technology, law, and ethics.9 Kenya faces a “shortage of AI specialists within regulatory bodies”.9 Significant investment in training and capacity building for regulators, policymakers, and law enforcers is crucial for effective AI governance.9 A major challenge is the “low public digital literacy” and lack of awareness about how AI systems operate and their implications.8 Public education initiatives are vital to ensure citizens understand their rights, the risks, and benefits of AI, enabling informed adoption and fostering trust.7 AI operates beyond national borders, making regulation complex.9 Kenya’s AI regulations must align with global standards and regional governance frameworks to address challenges like cross-border data flows and ensure coherence.9
Effective AI and biometric data regulation in Kenya requires a holistic, multi-stakeholder approach that simultaneously develops robust legal frameworks, mandates and promotes strong technical safeguards (e.g., privacy-by-design), and invests heavily in capacity building for regulators and digital literacy for the public. A failure in any one pillar compromises the entire regulatory ecosystem. This implies that policy efforts should not be siloed. For instance, drafting AI-specific legislation should go hand-in-hand with funding for the ODPC’s technical teams and national digital literacy campaigns. This integrated strategy is essential for Kenya to “harness AI’s potential while safeguarding ethical considerations and public interests”.9
VIII. Safeguarding Kenya’s Digital Future
The rapid proliferation of AI and biometric technologies presents both immense opportunities and profound challenges. As concerns over unregulated facial recognition and AI surveillance continue to mount, it is unequivocally clear that robust, forward-looking regulation is not merely desirable but critically urgent for Kenya. The goal is not to stifle innovation but to ensure that technological advancement proceeds in a manner that respects fundamental human rights, protects individual privacy, and fosters public trust. This requires a delicate balance, ensuring accountability, transparency, and fairness in the design, deployment, and use of AI and biometric systems.
Kathurima N Advocates is a specialist in this dynamic and evolving field, offering expert guidance on all matters related to AI, Biometric Data Regulations, Laws, and compliance requirements for startups and established entities.
Sources
- medium.com: Privacy Concerns in the Age of AI: The Risks of Biometrics and Personal Data Usage
- securiti.ai: Navigating Kenya’s Data Protection Act: What Organizations Need To Know
- southendtech.co.ke: Are You Handling Biometric Data in the Workplace? (South-End Tech)
- aclu-mn.org: Biased Technology: The Automated Discrimination of Facial …
- aclu.org: The Fight to Stop Face Recognition Technology | American Civil …
- identity.com: Privacy Concerns With Biometric Data Collection
- odpc.go.ke: OFFICE OF THE DATA PROTECTION COMMISSIONER … (ODPC)
- munyalo.com: AI in Kenya: Navigating the Future of Technology and Regulation | Nthuli Munyalo
- blog.bake.co.ke: Challenges in Implementing AI Regulations in Kenya (BAKE)
- securiti.ai: Kenya Data Protection Act Compliance Solution
- theelephant.info: DENNIS ONDIEKI, Safeguarding Data Rights in the Information Age (The Elephant)