Artificial intelligence has the potential to be a powerful equalizing force, helping people, institutions, and nations make strides forward in ways and at speeds previously unimaginable. We are already seeing AI used to make essential government services more accessible, witnessing major breakthroughs in disease detection and prevention, and leveraging AI to identify and mitigate the effects of natural disasters and climate change. All this is happening in the early days of AI being more widely available, especially generative AI.
Yet, as we have witnessed with other major technological breakthroughs, just because AI holds potential to address some of the world’s greatest challenges, that outcome is by no means assured. We know that who has a voice in developing and advancing the technology matters, and we know that cultural context and respect for local values are critically important as we seek to deploy AI systems that harness the best of people and AI. If we hope to realize the potential of AI, we must be deliberate in our efforts to bring more voices and perspectives into the dialogue around AI policy and governance, and we must do that now, at the beginning.
Last year, Microsoft partnered with the Stimson Center to bring a greater diversity of voices to the conversation on responsible AI. The Global Perspectives Responsible AI Fellowship brings together diverse stakeholders from civil society, academia, and the private sector for substantive discussions on AI, its impact on society, and ways that we can all better incorporate the nuanced social, economic, political, and environmental contexts in which these systems are deployed.
This is vital work. In late 2022, according to UN estimates, the global population exceeded 8 billion people, the vast majority of whom live in the Global South. We are all encountering a major technological transformation, and we must make it a wave of transformation that benefits the Global Majority. I’m encouraged by the prospect that thoughtful, context-aware uses of AI might help build more vibrant communities around the world and empower nations to leapfrog traditional steps in conventional models for economic development.
By working to understand and incorporate the nuanced social, economic, political, and environmental contexts in which AI systems are deployed, developers can be champions for inclusivity and ensure the benefits of these systems are shared widely. This is particularly important for foundation models, where there is an opportunity for global perspectives to inform and ensure the utility of these systems as they are being developed.
To bring together the necessary perspectives to understand and address these issues, we conducted a comprehensive global search for fellows. We landed on a dynamic class of fellows from a range of countries—Chile, Tajikistan, India, the Dominican Republic, Nigeria, Kenya, Indonesia, Serbia, Mexico, Rwanda, Sri Lanka, Egypt, Turkey, and Kyrgyzstan. During the program, fellows exchanged ideas that can help promote AI efforts that are inclusive and consider the unique challenges and opportunities faced by the Global South. The stories of these fellows, many of which we will share in this series, illustrate the many ways in which AI is having a transformative impact around the world.
Two examples illustrate the range of perspectives. The absence of a regulatory framework for AI has made integration somewhat challenging for the Kyrgyz Republic, but lessons from past tech importation can help guide accountability. Nigeria, meanwhile, has adopted laws and policies to help keep its infrastructure safe while importing foreign tech, underscoring the need for new amendments when importing AI.
In this three-part series, we will present the issues at hand and put forward ideas to harness the benefits of AI applications while mitigating their risks. We will also share key insights about the responsible development and use of AI in the Global South.
At Microsoft, I am most excited about learning from these global insights and applying them to our efforts to advance AI responsibly. In 2023, we worked with the fellows to inform our approach to developing the AI Blueprint for India and to showcase responsible AI case studies that we hope will contribute constructively to broader discussions around AI policy.
This is only the beginning, and I am eager to further elevate the work and perspectives of the fellows. I am grateful to my colleagues Hiwot Tesfaye and Max Scott from the Office of Responsible AI for their leadership in making this fellowship a reality and their commitment to bringing a greater diversity of perspectives to the daily practice of responsible AI.
Lessons from past tech importation can help guide responsible AI
Advanced economies are distinguished from emerging economies by their historical technological legacy and their capacity to innovate in uncharted domains.
In the context of Kyrgyzstan, we are witnessing an interesting blend of innovation and resourcefulness. Instagram and Telegram, originally designed for different purposes, have evolved into thriving e-commerce hubs: online marketplaces for trading cattle in rural areas, classifieds for used cars, and job boards for low-skilled vacancies. The unconventional use of these digital platforms exemplifies the adaptability of technology in the face of local needs.
At present, the Kyrgyz Republic has basic data privacy legislation, yet we lack the regulatory capacity to enforce it within the digital sphere. The absence of a regulatory framework for artificial intelligence compounds this enforcement challenge. Even so, the government is actively using technologies such as ChatGPT to write and proofread public speeches delivered by the Prime Minister of the Kyrgyz Republic. In other cases, members of parliament have asked the government to block the AI-powered platform TikTok over inappropriate content, and have demanded an investigation into the transparency of Yandex Go’s algorithms to ensure fair commuting rates.
Meanwhile, the local community is at the nascent stage of developing a national voice recognition system that will help local users communicate with existing IT solutions in the Kyrgyz language. In other domains, such as wedding photography, individuals are venturing into the realm of AI-generated love stories, crafting fictional narratives to accompany cherished memories.
These examples underscore the exploration of AI’s potential within Kyrgyz society, even in the absence of a clear roadmap and regulatory guidelines for responsible use. In this regard, there exist historical lessons from importing advanced technologies that can guide Kyrgyz society in harnessing the benefits of AI in an equitable and responsible way.
Multistakeholder engagement is a cornerstone of successful technology deployment. For example, the introduction of Estonia’s X-Road data interoperability platform, supported by USAID, faced initial inertia. However, through civil society’s consistent engagement with policymakers, Kyrgyzstan succeeded in showcasing the benefits of this imported IT solution. X-Road, locally known as “Tunduk,” proved instrumental in combating corruption, reducing bureaucracy, increasing tax revenues, and enhancing overall efficiency, setting the stage for digital transformation. Notably, when the COVID-19 pandemic struck, the government’s adept use of X-Road showcased the nation’s newfound digital resilience, reinforcing the importance of timely implementation.
Nowadays, the Cabinet invests heavily in information and communication technologies. Building on the success of X-Road, Kyrgyzstan can expand its deployment of AI solutions in public services, including AI-driven administrative processes, healthcare diagnostics in rural areas, and agriculture-related solutions for farmers. These initiatives would not only enhance efficiency but also improve the quality of public services for citizens.
Community Engagement: A decade ago, the local community came together to improve Kyrgyz language support in Google Translate. The outcome was a more consistent and user-friendly translation experience, showcasing the potential for AI imports to be adapted to local needs when nurtured by collective action. This example underscores the power of community-driven initiatives. Encouraging active participation and collaboration between community members, technology enthusiasts, and developers can lead to the development of AI solutions that are better adapted to local needs and linguistic diversity.
Strategic partnerships and civil society: International development organizations have played a pivotal role in Kyrgyzstan’s technological evolution. The establishment of the first internet exchange point in 2002, facilitated by the Soros Foundation, revolutionized local connectivity at a time when cross-border connectivity was notably expensive. Local internet service providers offered lightning-fast 100 Mbps connections almost for free, creating an ecosystem ripe for innovation. This conducive environment nurtured local talent, paving the way for the emergence of competitive startups across Central Asia.
Ethical Frameworks and Regulation: Developing ethical guidelines and regulations is crucial to ensure that AI deployments align with local values and laws. The case of TikTok and Yandex Go highlights the need for clear policies regarding content filtering and data privacy. Collaborative efforts between government, civil society, and international organizations can lead to the creation of balanced and effective regulations.
Aziz Soltobaev is an expert in promoting tech entrepreneurship and fostering the digital economy, with a focus on building country-level capacity. He has served as a policy adviser to various government offices and contributed to the development of national strategies, including the “TazaKoom” National Digital Transformation Strategy and the “Vision 2018-2040” National Sustainable Development Strategy.
Latin America’s rise in AI
Artificial Intelligence (AI) is unquestionably the most transformative technology in human history, often heralded as the “biggest commercial opportunity in the world economy.”
The rise of AI has triggered a global technological race, where the nations that lead in AI development will define the rules of the world order in the coming decades. Latin America, however, faces a significant challenge as many Latin American countries find themselves primarily importing and consuming AI technology rather than actively developing it. This essay delves into the consequences of Latin America’s dependence on imported AI, the need to become developers of technology, and GENIA’s LATAM 4.0 Project that aims to shift the region’s trajectory towards becoming an AI superpower.
When Latin America relies on AI systems developed in advanced economies, it exposes itself to significant risks. AI technologies, if not adapted to the nuanced social, economic, and environmental contexts of the region, may not work effectively or could result in AI-related harms.
To leapfrog into a new stage of development, Latin America must transition from being mere consumers to active developers of AI technology. This shift would empower the region to create AI systems that align with its specific needs and priorities, thus reducing the risks associated with imported AI. Developing AI locally can stimulate innovation, drive economic growth, and enhance the region’s global technological standing.
Recognizing the need for Latin America to take an active role in AI development, GENIA has launched the LATAM 4.0 Project. This multi-stakeholder coalition brings together private sector entities, government bodies, academic institutions, and civil society organizations to implement the first-ever Regional AI Strategy in the Western Hemisphere. The project aims to harness the region’s diverse data resources and integrate marginalized perspectives into AI systems. By doing so, Latin America can bolster its research, development, and deployment efforts, ultimately elevating its global technological leadership.
GENIA’s dedication to regional advancement is evident in its partnership with the Presidency of the Dominican Republic, which seeks to extend the impact of LATAM 4.0 across Latin America and implement a common strategy in the region. The organization is actively working with several governments to expand the project, with the vision of integrating all of Latin America’s democracies into a unified regional AI ecosystem.
However, Latin America cannot achieve technological independence in isolation. To this end, GENIA is working to partner with advanced economies to establish cross-regional cooperation on AI. One notable effort is GENIA’s support for Congressional Resolution H.Res.649, which calls on the United States to “champion a regional artificial intelligence strategy in the Americas.”
AI’s significance in shaping the future cannot be overstated. Latin America’s current role as an AI consumer rather than a developer poses a substantial risk to the region’s economic and social well-being. To address this challenge, initiatives like the LATAM 4.0 Project are pioneering efforts to empower Latin America to actively participate in AI development, aligning AI systems with the region’s unique needs and challenges. By promoting cross-regional cooperation, Latin America can position itself as a significant player in the global AI landscape and influence the global governance of artificial intelligence.
Jean García Periche is the co-founder and President of GENIA Latinoamérica, a research and development (R&D) regional platform with the mission of including Latin America into the global development of Artificial Intelligence. Through GENIA, Jean is leading the LATAM 4.0 Coalition, which integrates businesses, startups, universities, NGOs, and governments to implement a Regional AI Strategy in Latin America.
Responsible AI will help enable successful tech importation
Like many countries in the Global South, Nigeria is lagging behind in the development of cutting-edge AI and other emerging technologies.
The country has adopted laws and policies to transform the importation of foreign technology into opportunities for building local capacity for innovation.
The National Office for Technology Acquisition and Promotion (NOTAP) Act provides a regulatory framework for the importation of technology unavailable in Nigeria, ensuring that foreign technology is brought into the country on terms favorable to Nigerian entities and that Nigerians build local capacity over time.
The Revised Guidelines for Registration and Monitoring of Technology Transfer Agreements (TTAs) in Nigeria 2020 expect TTAs to incorporate research activities carried out in-house and in collaboration with research institutions in Nigeria. Following registration of a TTA, NOTAP officers pay monitoring visits to the companies to ensure that Nigerian personnel are absorbing the technology in compliance with the domestication plan.
Responsible AI necessitates new regulation.
While current Nigerian legislation is relevant to AI technology, it is not broad enough to capture the contemporary issues posed by importing AI into the country. A country importing technology may inadvertently import the exporter’s values along with it, to the detriment of the national interest and the well-being of the local population. This may be the case where the parameters for development and the underlying datasets are not representative of the importing country, or where the importing country has failed to articulate legally binding policies or regulations governing how it will use AI technologies.
Nigeria’s intelligence and counter-intelligence agencies have increasingly invested in surveillance technologies. These are mostly shrouded in secrecy but include some AI-based features. For context, Nigeria is ranked the 8th most terrorized country in the world, which demands measures that help security agencies proactively tackle incidents of terrorism.
One of the many surveillance tools acquired by intelligence agencies was a GSM (Global System for Mobile Communications) passive off-the-air interception system, which can covertly collect cellular traffic in an area and analyze it to identify suspicious communication patterns using speech recognition, link analysis, and text matching. This is just one example of several privacy-infringing technologies deployed in Nigeria that are largely exempt from oversight. Local laws for importing technology do not envisage the unique risks posed by AI, and Nigeria’s data protection laws extend a wide range of exemptions when data processing is for national security purposes.
There is a need to amend the laws regulating technology importation to take into consideration the peculiarities of emerging technologies and representation in datasets, and to mandate impact assessments, especially for technologies acquired by state actors.
Akintunde Agunbiade is an Associate at AELEX, a leading law firm in Nigeria and Ghana, with his work primarily revolving around the Technology, Media, and Telecommunications Practice Group. Here, he advises clients in the creative space, startups, and tech companies seeking to incorporate AI into their products. He is also an AI Ethics & Governance Researcher and the author of the book Artificial Intelligence & Law: A Nigerian Perspective (2019).
Responsible AI for the next generation
MBA programs can set new standards of business practice for the next generation of responsible AI entrepreneurs by going beyond metrics of economic success and promoting the humanities as a superpower.
The entrepreneurial ecosystem across sectors is boasting new products and services that have integrated Artificial Intelligence (AI) systems as a core element of their value proposition. In Latin America, the number of AI startups doubled from 2018 to 2020, with a market size of about US$4.2 billion, an estimated 38,000 people employed, and a surprisingly young median founder age of 29. Globally, young aspiring entrepreneurs’ interest in MBA programs that offer an AI-related advantage has also surged, as AI took the top spot for content that prospective students want to see in their degree programs, according to CarringtonCrisp’s Tomorrow’s MBA report.
In the realm of formal education, MBA programs can shape and set new standards of business practice, including by training the next generation of responsible AI entrepreneurs who anchor their innovative business ideas in respect for human dignity and take risk-based approaches to their often-disruptive business models. Below I outline five recommendations for deans of business schools, and for anyone embarking on the journey to ideate, develop, implement, and follow up through the life cycle of AI algorithms through entrepreneurship:
Program Curricula. Any approach to Artificial Intelligence can and should go beyond metrics of economic success, such as efficiencies and increased margins, to stress the importance of interdisciplinary research and collaboration, as well as notions of effectiveness, accountability, liability, and social impact beyond an economic one. The good news is that there are resources available for business schools and entrepreneurs to start their responsible AI journey on the right foot. Examples include UNESCO’s Recommendation on the Ethics of AI as a point of departure for teaching the subject, IDB Lab’s AI Ethics Self-Assessment for Actors of the Entrepreneurial Ecosystem, C Minds and Meta’s AI Ethics Booklet for Entrepreneurs, the WEF’s Responsible Use of Technology case study series, and IDB’s MOOC How to use AI responsibly, among others.
“Hard” & “Core” skills. While some entrepreneurs may see themselves as having business acumen rather than a technical understanding of AI, there is value in teaching leadership skills in technology with the same importance placed on acquiring technical skills. There is an opportunity to highlight the role of social scientists in shaping the responsible AI ecosystem and to reclaim the humanities as a superpower. In an era of eroding trust, newer generations (Gen Z in particular) are looking for products and services that align with their values, are congruent in their business practices, and have an impact on society. Both aspiring entrepreneurs and business schools should keep this in mind.
Female Entrepreneurs. I opened with a paragraph on the exciting exponential growth of the AI ecosystem in Latin America, and I did so deliberately to get to this point: the same study highlights that 92% of these companies were founded by men, and only 15% of their total employees are women. This is staggering but can and should be addressed. MBAs have a role to play in closing the gender gap in tech entrepreneurship by intentionally and strategically creating incentives for women to join their schools, and by developing mentorship programs and diversity, equity, and inclusion (DEI) initiatives, among other ideas.
The GovTech Opportunity. GovTech, a service-provision scheme between governments and entrepreneurs to solve the internal digital challenges of public administrations, represents both a growing market of more than US$400 billion and a force for good to advance the SDGs. To seize this opportunity, business schools can team up with policy schools to better understand the nuances and challenges of working with government officials, to teach entrepreneurs how to navigate these relationships, and to approach project management with more empathy on both sides.
Lead by Example. By this I mean using the AI tools at your disposal. I recently attended the AMBA & BGA Latin America Deans and Directors Conference and noticed that a common question among deans concerned dealing with large language models (LLMs, e.g., ChatGPT) in educational settings. My recommendation to them was to be open to experimenting with these tools, carefully and critically, leveraging guides like UNESCO’s ChatGPT and Artificial Intelligence in Higher Education as well as OpenAI’s Teaching with AI prompting resources.
The key message I want to convey to education leaders in the business sector is that Silicon Valley’s once-revered culture of “move fast and break things” is long over; the idea of a minimum viable product is being replaced by that of a minimum virtuous product. Today, a wave of new voices, organizations, and initiatives, like the Global Perspectives: Responsible AI Fellowship from the Stimson Center and Microsoft, puts the spotlight on Global Majority practitioners who are challenging the status quo and starting debates and conversations from novel angles on how to approach AI responsibly.
Cristina Martínez Pinto is the Founder and CEO of the PIT Policy Lab. As a tech policy entrepreneur, she works to advance people-centered technological development and technology governance in Latin America. She has worked as a Digital Development Consultant at the World Bank, led C Minds’ AI for Good Lab, and co-founded Mexico’s National AI Coalition IA2030Mx.
Harnessing AI for peace
The transformative potential of AI to advance world peace must be considered by peacebuilding organizations when planning preventive efforts.
In today’s world, peace is unraveling across many regions — from the escalating conflict in Gaza to the prolonged turmoil in Ukraine and Sudan, the recurring violence in Myanmar, and the surge in gang violence in Ecuador. These conflicts, along with many less visible ones, are further exacerbated by global pandemics, natural disasters, economic downturns, humanitarian emergencies, migration, and the looming threat of climate change. These pressing challenges underscore the imperative to advance the peace agenda.
Exploring the untapped potential of AI for peace – examples to chart the course.
While applications of AI in warfare have garnered substantial attention, with nations investing heavily in autonomous weapons systems, surveillance, and cyber capabilities, the transformative potential of AI in advancing peace remains largely overlooked. AI can serve as a powerful tool for data analysis, offering new avenues for conflict prevention, peace negotiations, mediation, and human rights protection. Peacebuilders are now exploring innovative strategies, recognizing the potential of AI to navigate the complexities of peacebuilding work.
A promising application of AI in the peacebuilding field involves leveraging data to evaluate conflict early-warning and early-action systems. Predictive analytics solutions, like the Violence & Impacts Early-Warning System (VIEWS) led by Uppsala University and the Peace Research Institute Oslo, analyze extensive datasets to identify early signs of conflicts, providing timely alerts for preventive measures. In instances like the Hala System in Syria, AI-driven early warnings prove lifesaving. Hala’s system enabled rescue teams to prepare as warplanes approached, mitigating harm during airstrikes.
Likewise, AI can be used to monitor online hate speech and predict the likelihood of conflict escalation, revealing valuable insights that help peacebuilding organizations plan preventive efforts. AI can also aid the work of human rights defenders and activists in collecting evidence of war crimes. For example, VFRAME collaborates with the Yemeni Archive to minimize processing times, enhance analysis, and identify the use of illegal munitions in war zones. Using computer vision technologies, VFRAME develops and deploys neural networks trained on synthetic data to analyze conflict zone media.
AI can also be helpful in streamlining peace negotiations by analyzing extensive data, identifying patterns, and offering valuable insights to enhance decision-making, contributing to more effective conflict resolution. In Libya, AI-facilitated digital dialogues engaged 1,000 citizens, leading to a consensus and establishing an interim government within four months. Moreover, AI-driven sentiment analysis aids in understanding conflict narratives, allowing practitioners to work towards de-escalation and altering negative narratives.
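The sentiment analysis mentioned above relies on far more sophisticated models than can be shown here, but the basic idea of scoring conflict narratives can be sketched with a toy lexicon-based example. Everything in this snippet, including the word lists, threshold, and function names, is invented for illustration and is not drawn from any system named in this series:

```python
# Toy lexicon-based scorer for the tone of short public statements.
# A real system would use trained language models and curated,
# context-specific lexicons; these word lists are invented.

ESCALATORY = {"enemy", "traitor", "revenge", "attack", "invade"}
DE_ESCALATORY = {"dialogue", "ceasefire", "compromise", "reconciliation", "peace"}

def narrative_score(text: str) -> int:
    """Crude tone score: negative = escalatory, positive = de-escalatory."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return (sum(w in DE_ESCALATORY for w in words)
            - sum(w in ESCALATORY for w in words))

def flag_for_review(statements, threshold=-1):
    """Surface statements escalatory enough to warrant human attention."""
    return [s for s in statements if narrative_score(s) <= threshold]

statements = [
    "We call for dialogue and a lasting ceasefire.",
    "They are the enemy and we will take revenge!",
]
print(flag_for_review(statements))
```

In practice such scores would only ever be one signal among many, reviewed by practitioners who understand the local context; the point is that even simple automation can help triage large volumes of text for human follow-up.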
Paving the path to ethical AI for peace applications
As we navigate the intricate intersection of AI and peace, ethical considerations become paramount. The multifaceted ethical complexities surrounding AI’s role in peace—ranging from biases and privacy issues to transparency and accountability—demand our steadfast attention. At “AI for Peace” we are embedding ethics in the design of AI interventions as a proactive approach to addressing the unintended consequences at the convergence of data and peace. By integrating principles such as “do no harm” and “conflict sensitivity” into the realm of algorithms, we safeguard against potential harms and uphold ethical standards.
By giving priority to participatory and inclusive AI, we involve local communities, stakeholders, and experts in the design, development, and deployment of AI technologies for peace. While AI may not eliminate violent conflict entirely (ultimately only humans can), it undeniably emerges as a valuable tool in the peacebuilder’s toolkit, contributing significantly to the establishment of lasting peace, with ethical considerations serving as the compass guiding our path forward.
Branka Panic is the AI for Peace Founding Director, a political scientist, and an expert in international security, international development policy, and peacebuilding. She is a CIC Non-Resident Fellow focusing on researching the utilization of data-driven approaches to peacebuilding and prevention, conflict early warning/early action, and designing the pathways to establishing a Peacebuilding Data Hub.
AI for all: bridging the inclusivity gap
Artificial Intelligence (AI) has rapidly evolved in recent years, promising transformative changes across industries and societies. However, realising its full potential requires addressing two salient challenges: translating complex AI concepts and navigating culturally sensitive or taboo topics concerning bias and fairness. Endeavours like the ones from the Swahilipot Hub Foundation are making a difference and moving the community closer to achieving these goals.
AI has long been perceived as a realm shrouded in technical jargon and complexity, alienating those without a background in computer science. Bridging this knowledge gap is crucial to ensuring that AI is not reserved for the technologically elite, but is accessible to all. To this end, educational initiatives are vital. The Swahilipot Hub Foundation is a Mombasa-based NGO aimed at empowering youth in the Technology, Creative and Heritage sectors to grow careers and enhance the economic stability of people living in the coastal region of Kenya. They’re cultivating a data-centric culture by training over 900 youth in Mombasa, equipping them with essential skills in data collection, analysis, and presentation. More significantly, they’ve emphasized data privacy and policies, instilling in these young individuals the importance of ethical data practices. These efforts empower the youth to engage in discussions about data confidently and responsibly, breaking down the barriers of complexity.
Swahilipot Hub makes AI accessible for Mombasa youth
Beyond these educational efforts, Swahilipot Hub Foundation exemplifies data-driven decision-making. Research conducted in 2018 by the Global Opportunity Youth Network, a multi-stakeholder initiative committed to creating place-based systems shifts for youth economic opportunity, shows that there are 562,000 youth in Mombasa, 44% of whom are unemployed and 66% of whom are estimated to be “Opportunity Youth”: young people aged 15-35 who are out of school, unemployed, or working in informal jobs. These numbers have grown even further since the COVID-19 pandemic. With this in mind, Swahilipot Hub has compiled a growing database of over 18,000 youth in Mombasa, meticulously assessing their skills, education levels, and interests. This database has allowed them to link these young people with opportunities such as upskilling training, jobs, and scholarships. What’s truly remarkable is their transition to automation with the “Fursa” platform, driven by machine learning algorithms. This automated system streamlines the process of matching youth with opportunities, showcasing the practical benefits of AI while providing a scalable solution for inclusivity.
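Fursa’s internals are not public, so the following is only a minimal sketch of the general skills-to-opportunity matching idea. The names, skill sets, and simple set-overlap score are all invented; a production platform would use richer profiles and learned ranking models:

```python
# Toy skills-to-opportunity matcher illustrating the general idea behind
# youth-opportunity platforms. All data and the scoring rule are invented.

def jaccard(a: set, b: set) -> float:
    """Overlap between two skill sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_matches(youth_skills: set, opportunities: dict, top_n: int = 2):
    """Rank opportunities by how well their required skills overlap."""
    ranked = sorted(opportunities.items(),
                    key=lambda kv: jaccard(youth_skills, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

opportunities = {
    "data-entry internship": {"excel", "typing", "attention-to-detail"},
    "junior web developer": {"html", "css", "javascript"},
    "field survey assistant": {"swahili", "interviewing", "excel"},
}
profile = {"excel", "swahili", "typing"}
print(best_matches(profile, opportunities))
```

Even this crude ranking shows why a database of assessed skills is valuable: once profiles and opportunities are structured data, matching can be automated and scaled far beyond what manual placement allows.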
Another formidable challenge in the realm of AI is navigating culturally sensitive and taboo topics, particularly those related to bias and fairness. AI systems that perpetuate biases or unfairly discriminate can have detrimental consequences for society. Swahilipot Hub Foundation’s collaboration with the Mozilla Foundation offers an enlightening example of community involvement to address this issue. Together, they have organized contribute-a-thons and hackathons for the Common Voice project, where community members actively contribute Swahili sentences and validate them through text and speech. This inclusive approach ensures that AI development is culturally sensitive and respectful, reflecting the diverse voices and perspectives of the community. It not only fosters inclusivity but also demonstrates that AI can be a tool for representing and celebrating culture, not erasing it. Furthermore, through technology conferences like Pwani Innovation Week, Swahilipot Hub has brought conversations around the Fourth Industrial Revolution, AI, and the future of work to the Mombasa community, further emphasizing the importance of addressing these AI-related issues in a culturally sensitive and community-centric manner.
The Jitume Program, spearheaded by the state-owned Konza Technopolis, empowers Kenyan youth with digital skills and job opportunities through 117 Jitume Centers equipped with computers and internet access, offering more than 16 training programs in collaboration with renowned institutions like Thunderbird School of Global Management and Arizona State University. Its primary goal is to combat Kenya’s high unemployment rate (10.4% as of 2020). Swahilipot Hub Foundation’s partnership with Konza Technopolis reflects its dedication to AI education, introducing a Jitume center at Swahilipot Hub to provide Mombasa’s youth with accessible AI education. This initiative will continue to equip youth with the knowledge to engage in AI discussions and contribute to solving Africa’s challenges, bridging the gap between theory and practical applications while demystifying AI’s potential impact on careers and society.
In conclusion, Swahilipot Hub Foundation’s multifaceted approach to addressing the complexities of AI and ensuring inclusivity sets a remarkable example for organizations worldwide. Their educational initiatives empower youth to engage in AI discussions, while community-driven projects guarantee that AI reflects cultural diversity. Automation, government collaboration, and practical training opportunities further underscore their dedication to making AI accessible and beneficial to all. “AI for All: Bridging the Gap Between Complexity and Inclusivity” isn’t just a topic; it’s a vision realized by organizations like Swahilipot Hub Foundation, demonstrating that responsible AI practices are attainable, and the power of AI can indeed be harnessed by all.
Ziri Issa is an accomplished professional and visionary leader in the field of technology and innovation. With a BA in Information Technology from Maseno University, Ziri currently holds the position of Head of Technology and Innovation at Swahilipot Hub Foundation, where he is dedicated to fostering a culture of innovation in Mombasa, Kenya.