Navigating the AI Governance Landscape: Why it Matters to your Organization

At a glance

  • With more AI tools and systems being developed and adopted, the regulatory environment is gradually catching up, as seen in the African Union Continental AI Strategy and the Kenya AI Strategy (2025-2030).
  • It is essential for organizations to ensure responsible AI development and use by taking into account the globally accepted core principles of AI governance, including data protection, transparency, human oversight, fairness and bias awareness.
  • Implementation of AI governance frameworks by organizations can be done through adoption of AI use policies and guidelines, encouraging risk assessment of AI tools, conducting AI awareness trainings and ensuring compliance with existing applicable legal frameworks.

Artificial intelligence (AI) has emerged as a transformative force across industries, not just in Africa but across the globe. As AI is more widely adopted through various use cases in Kenya, and more broadly in Africa, it is important for organizations, including companies, non-profits and public sector institutions, to proactively implement robust AI governance measures so as to leverage AI responsibly and ethically.

Regulatory Environment

The Kenya AI Strategy (2025-2030) (the Strategy), launched in March 2025, is a critical first step towards establishing a foundation for understanding the impact of AI technologies in the country, and for pushing for more responsible AI innovation and development.

The Strategy follows the existing African Union Continental AI Strategy adopted in 2024, and a number of AI strategies and policies adopted in other African countries such as Rwanda, Senegal, Egypt and South Africa, to mention a few. It also complements various international and regional standards applicable to ethical AI, including the UNESCO Recommendation on the Ethics of AI and the OECD AI Principles, which espouse widely accepted and adopted core principles of AI governance.

The Strategy acknowledges opportunities for AI to positively drive socio-economic development, enhance public services and promote inclusivity and fairness. At the same time, several challenges present themselves, including data protection risks, concerns over data sovereignty, infrastructure gaps, talent shortages, and low levels of public trust. Such challenges necessitate ethical principles and guardrails to ensure the responsible use of AI technologies. The core ethical principles are discussed below.

Core Principles of AI Governance

The international and regional standards discussed above highlight certain widely applicable principles for ethical AI development and adoption, such as data protection, fairness and non-discrimination, accountability, transparency and human oversight.

  1. Data protection 

Data protection emerges as a critical concern in AI training and development, as this involves the use of vast datasets throughout the life cycle of AI systems. AI developers and organizations adopting AI tools must prioritize comprehensive data protection strategies and the stringent data handling protocols outlined in data protection regulations such as the Kenyan Data Protection Act (2019). For example, it is important to ensure that personal data is collected and used correctly and with a valid legal basis. Data subject rights, and requirements such as informed consent for the processing of personal data, must also be considered.

It is necessary to conduct Data Protection Impact Assessments (DPIAs) in the development of such algorithmic tools to deal with potential privacy risks, whilst also ensuring a privacy by design approach. 

  2. Transparency and human oversight 

Effective AI governance requires a commitment to transparency and explainability of AI systems. It should be clear and transparent to the end user where an AI system has been utilized particularly in making decisions affecting their rights. 

Explainability of AI outputs or outcomes, however, is currently a challenge, with AI applications' decision-making being subject to the black box phenomenon, meaning it is not clearly understood how the tool reasoned to reach a particular decision. In such instances, human oversight must be maintained for critical or sensitive AI use cases. This addresses errors by AI systems and fosters transparency and human trust.

Organizations must develop mechanisms that ensure AI systems remain transparent, explainable and accountable, with clear documentation of decision-making processes. 

  3. Fairness and bias awareness

One of the most significant challenges in AI implementation is addressing data bias in algorithmic decision making. As AI tools may be predominantly trained on datasets from specific regions or countries, their determinations can easily be biased or skewed.

Organizations must recognize the impact of limited datasets on the accuracy and fairness of automated decision-making tools. Use of datasets lacking diversity can perpetuate systemic biases, particularly in sensitive use cases such as employment recruitment processes, financial services, and healthcare.

Implementing AI Governance Frameworks

The widely accepted core principles discussed above provide a basis for organizations to implement AI governance frameworks. Focus can be placed on developing organizational AI use policies, encouraging risk assessment in the development and adoption of AI tools, conducting AI awareness trainings and ensuring compliance with existing applicable legal frameworks.

Developing AI Use Policies

AI is being used by various organizations, ranging from basic generative AI chatbots such as OpenAI's ChatGPT and Anthropic's Claude to more advanced AI tools that conduct automated decision-making tasks in sensitive use cases.

Organizations should develop detailed AI policies that outline ethical guidelines, risk assessment protocols, and clear accountability mechanisms to guide the use of AI tools, whether such use is internal or external. This can be done through a simpler policy specific to the use of generative AI systems, or a broader AI usage policy that covers generative AI alongside other AI applications.

Leveraging Existing Compliance Frameworks

AI development and adoption is not specifically regulated in Kenya or the wider African region, but may still be subject to existing sector-specific regulations. A good example is data protection regulations, which are highly applicable to AI tools as such tools rely heavily on, and are trained with, large volumes of data that may include sensitive personal data. It is therefore imperative for organizations developing or considering the use of AI tools within their functions to keep in mind their respective existing compliance obligations.

Risk Assessment Measures

Risk assessment measures could also be adopted with the objective of creating guardrails to protect both the organization and its stakeholders. Conducting thorough AI risk assessments is crucial in the development and deployment of AI tools. Organizations considering AI adoption should also carry out vendor due diligence on the AI application suppliers and undertake risk assessments on the functioning of the specific tools adopted. 

This is particularly critical for organizations operating in regulated markets, such as financial services and healthcare, where AI solutions interact with highly sensitive personal data. Notably, the risk-based approach has been used as a regulatory technique in certain regions where AI is regulated, such as the European Union under the EU AI Act. There, AI tools are classified along a spectrum: prohibited tools, such as social scoring systems; high-risk tools, such as AI used in biometrics, employment or education, which are heavily regulated; and low-risk tools, which may include email spam filters and are the least regulated.

A risk-based assessment may be adopted by an organization to determine its risk appetite and risk mitigations so as to utilize AI tools effectively and responsibly. It may not be necessary for an organization to carry out risk assessments for an AI application that would be classified as low risk, although particular attention must be paid to higher-risk use cases within business functions.

AI Literacy Trainings 

AI literacy trainings for organizations, their staff and stakeholders assist in developing awareness of AI capabilities, challenges and ethical implications. As such, it is essential to create learning programs, beyond technical understanding, that seek to demystify AI tools while providing practical examples of their use cases in business functions. Where certain sector-specific regulations apply to the use of AI systems, the training must cover these aspects to ensure compliance is maintained. This allows employees across all levels of the organization to understand AI systems and utilise them responsibly.

Conclusion

As the AI landscape continues evolving, governance in the development, deployment and adoption of AI tools is a way to strategically mitigate risks, build trust and promote the responsible use of AI technologies. The launch of the Kenya AI Strategy signals a move towards responsible AI development, and as potential policy frameworks are slated for development, organizations that proactively establish governance frameworks will be well positioned to leverage emerging technologies.

While specific AI legislation is currently lacking, organizations can expect more comprehensive regulatory frameworks in time, even as current data protection and sector-specific regulations adapt to better apply to emerging technologies.
