Developing an AI Governance Program: Lessons from DIFA’s Executive Breakfast Meeting

On 28 August, DIFA Consultancy hosted an interactive Executive Breakfast Meeting on “Developing an AI Governance Program”, bringing together industry leaders, legal minds, and technology experts for a practical discussion on how organizations can responsibly develop and adopt artificial intelligence (AI).

The session emphasized that AI governance is not an optional or theoretical exercise but a necessity in today’s environment. With AI adoption increasing in finance, healthcare, telecommunications, and beyond, organizations must establish frameworks that ensure AI is fair, transparent, and aligned with business objectives.

Key Learnings for Organizations

The conversation highlighted the core steps in developing governance frameworks, particularly the importance of embedding responsible AI principles into every stage of development and deployment. Participants explored real-world applications of AI adoption across industries, the legal implications of its use, and the emerging jurisprudence shaping accountability.

Case Study: Safaricom’s AI Governance Journey

A highlight of the session was a presentation by Safaricom PLC, sharing their practical experience in building AI governance structures. Their learnings underscored several pillars of responsible adoption:

  • Leadership and commitment: AI policies must be anchored at the top, with leadership ensuring the democratization and scaling of responsible use across the enterprise.
  • Risk management: Organizations should proactively identify and mitigate AI-specific risks such as bias, lack of transparency, and data-security gaps. This is done by categorizing AI use cases under a risk-based approach, with differentiated approval processes depending on risk level (see the sketch after this list).
  • Operational controls: Fairness, data quality, and safe deployment should be ensured through robust oversight.
  • Performance evaluation: Continuous monitoring and compliance checks are necessary to uphold set standards.
  • Continuous improvement: Using the Plan-Do-Check-Act (PDCA) cycle allows organizations to evolve AI governance processes in tandem with technological and regulatory advances.
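As a purely illustrative sketch (not drawn from the Safaricom presentation), a risk-based categorization with differentiated approval processes could be encoded along the following lines. The risk criteria, tier names, and approval steps shown here are assumptions made for illustration only.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    name: str
    uses_personal_data: bool
    customer_facing: bool
    automated_decisioning: bool


def classify_risk(use_case: AIUseCase) -> RiskTier:
    """Assign a risk tier using illustrative (assumed) criteria."""
    if use_case.automated_decisioning and use_case.uses_personal_data:
        return RiskTier.HIGH
    if use_case.customer_facing or use_case.uses_personal_data:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Hypothetical approval paths: higher-risk use cases require more sign-offs.
APPROVAL_PATH = {
    RiskTier.LOW: ["Business owner sign-off"],
    RiskTier.MEDIUM: ["Business owner sign-off", "Data protection review"],
    RiskTier.HIGH: [
        "Business owner sign-off",
        "Data protection review",
        "AI governance committee approval",
    ],
}

if __name__ == "__main__":
    use_case = AIUseCase(
        name="Credit scoring model",
        uses_personal_data=True,
        customer_facing=True,
        automated_decisioning=True,
    )
    tier = classify_risk(use_case)
    print(f"{use_case.name}: {tier.value} risk -> {APPROVAL_PATH[tier]}")
```

In practice, the categorization criteria and approval chains would be defined by each organization's own policies and risk appetite; the value of the approach lies in making the escalation path explicit before a use case is deployed.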

Trust and Data Governance as the Foundation

DIFA Consultancy emphasized that trust is the foundation of AI adoption. Effective governance requires the continuous identification and assessment of risks, together with their proactive mitigation; this is an ongoing process rather than a one-off exercise.

Data governance emerged as a cornerstone: ensuring that data is high quality, relevant, secure, and responsibly managed across its lifecycle is crucial to AI governance. While data governance focuses on the integrity of information, AI governance extends to models and systems, addressing fairness, transparency, ethics, and broader societal impact. Robust data governance practices are therefore an essential foundation for any organization considering an AI governance program.

Strategic and Policy-Level Considerations

Effective AI governance strategies require board-level engagement. Governance structures will only be effective if leadership understands the basics of AI and data governance well enough to guide and inform strategic decision-making.

Participants noted that Kenya has a unique opportunity to position itself as a leader in AI governance. With international regulations setting global standards and benchmarks, Kenyan organizations can adopt best practices early and position themselves as regional leaders in responsible AI.

Sectoral insights further enriched the conversation. In banking, it emerged that most AI applications remain predictive, indicating untapped opportunities in other AI use cases, as well as emerging risks that require foresight and regulatory preparedness within the sector.

The Way Forward

As the discussion concluded, there was consensus on the need for decisive action in developing effective AI governance frameworks. Organizations should prioritize:

  • embedding data privacy and governance at the core of AI strategy formulation;
  • applying responsible AI principles throughout development and deployment; and
  • integrating AI governance into broader business processes, ensuring accountability from boardrooms to operational teams.

By taking these steps, businesses can ensure that AI not only drives innovation and efficiency but also aligns with ethical standards, regulatory expectations, and societal trust.

At DIFA, we partner with businesses to design and implement practical and effective governance structures, risk frameworks, and compliance strategies that make AI adoption both safe and strategic. 

To explore how we can assist your organization, please get in touch with our team.
