Artificial intelligence has the potential to be a powerful equalizing force, helping people, institutions, and nations make strides forward in ways and at speeds previously unimaginable. We are already seeing AI used to make essential government services more accessible, witnessing major breakthroughs in disease detection and prevention, and leveraging AI to identify and prevent the effects of natural disasters and climate change. All this is happening in the early days of AI being more widely available, especially generative AI.
Yet, as we have witnessed with other major technological breakthroughs, just because AI holds the potential to address some of the world’s greatest challenges, that outcome is by no means assured. We know that who has a voice in developing and advancing the technology matters, and we know that cultural context and respect for local values are critically important as we seek to deploy AI systems that harness the best of people and AI. If we hope to realize the potential of AI, we must be deliberate in our efforts to bring more voices and perspectives into the dialogue around AI policy and governance, and we must do that now, at the beginning.
Last year, Microsoft partnered with the Stimson Center to bring a greater diversity of voices to the conversation on responsible AI. The Global Perspectives Responsible AI Fellowship brings together diverse stakeholders from civil society, academia, and the private sector for substantive discussions on AI, its impact on society, and ways that we can all better incorporate the nuanced social, economic, political, and environmental contexts in which these systems are deployed.
These ideas can help promote AI efforts that are globally inclusive while considering the unique challenges faced by the Global South.
This is vital work. Earlier in 2023, according to UN estimates, the global population exceeded 8 billion people, the vast majority of whom live in the Global South. We’re all encountering a major technological transformation, and we must make it a wave of transformation that benefits the Global Majority. I’m encouraged by the prospect that thoughtful, context-aware uses of AI might help build more vibrant communities around the world and empower nations to leapfrog traditional steps in conventional models for economic development.
By working to understand and incorporate the nuanced social, economic, political, and environmental contexts in which AI systems are deployed, developers can be champions for inclusivity and ensure the benefits of these systems are shared widely. This is particularly important for foundation models, where there is an opportunity for global perspectives to inform and ensure the utility of these systems as they are being developed.
To bring together the necessary perspectives to understand and address these issues, we conducted a comprehensive global search for fellows. We landed on a dynamic class of fellows from a range of countries—Chile, Tajikistan, India, the Dominican Republic, Nigeria, Kenya, Indonesia, Serbia, Mexico, Rwanda, Sri Lanka, Egypt, Turkey, and Kyrgyzstan. During the program, fellows exchanged ideas that can help promote AI efforts that are inclusive and consider the unique challenges and opportunities faced by the Global South. The stories of these fellows, many of which we will share in this series, illustrate the many ways in which AI is having a transformative impact around the world.
In this three-part series, we will present the issues at hand and put forward ideas to harness the benefits of AI applications while mitigating their risks. We will also share key insights about the responsible development and use of AI in the Global South.
For Microsoft, I am most excited about learning from these global insights and applying them to our efforts to advance AI responsibly. In 2023, we worked with the fellows to inform our approach to developing the AI Blueprint for India and to showcase responsible AI case studies which we hope will contribute constructively to broader discussions around AI policy.
Lessons from past tech importation can help guide responsible AI
The absence of a regulatory framework for AI has made integration somewhat challenging for Kyrgyz Republic, but lessons from past tech importation can help guide accountability.
Responsible AI will help enable successful tech importation
Nigeria has adopted laws and policies to help keep its infrastructure safe while importing foreign tech, emphasizing the need for new amendments when importing AI.
Latin America’s rise in AI
GENIA’S LATAM 4.0 project aims to use the region’s diverse data resources to include marginalized perspectives and integrate their democracies into a unified AI ecosystem.
Responsible AI for the next generation
MBA programs can set new standards of business practices for the next generation of responsible AI entrepreneurs by going beyond metrics of economic success.
Harnessing AI for peace
The transformative potential of responsible AI to advance world peace must be considered by peacebuilding organizations when planning preventive efforts.
AI for all: bridging the inclusivity gap
Endeavors from educational initiatives to bridge the knowledge gap by ensuring AI is accessible to all will help move communities closer to achieving fairness.
Advancing digital justice in the Global South
Governments and tech giants have yet to reach consensus on AI regulation. AI ethics should supplement, not supersede, human rights laws when considering imported AI.
Upskilling for responsible computing
As more young Africans become tech developers, the need for strengthening capacity in teaching AI and addressing the ethical implications of these technologies must be considered.
Artistic expression and democracy
Through AI-powered tools, artists can access a variety of media, allowing them to collaborate across geographical barriers and democratize content distribution.
Paving the way for an inclusive future
There can be little doubt that we are in the midst of a new technological revolution with increasing AI capabilities shaping every aspect of our daily lives. From the medical field to transportation and sustainable development, the combination of computing power and big data is no longer just a technological tool, but a catalyst for global change. As we embrace these advancements, we also need to recognize the challenges and risks, particularly in regions like the Global South where the majority of people live and where the impact of AI is profound yet complex. Hence, the urgency to provide an open space for honest conversations and an opportunity to learn from different perspectives cannot be overstated.
The partnership between the Stimson Center and Microsoft is doing exactly that: it ignites essential conversations on the remarkable promise of AI, highlights best practices from around the globe, and creates a community centered around the desire to advance technology development responsibly. Throughout this series, we have heard how AI can be a formidable tool for improving governance, enhancing economic opportunities, and empowering vulnerable populations. These stories highlight AI’s potential to drive significant positive change, underscoring the importance of responsible and thoughtful implementation to reap its benefits globally.
Our commitment at Stimson is to not only highlight these diverse experiences, but also to work towards robust safeguards that ensure AI’s benefits are widely accessible and its deployment is safe and transparent. By fostering a collaborative environment that emphasizes cross-boundary partnerships and inclusive dialogue, we aim to pave the way for a future where AI empowers every community, enhances lives, and promotes sustainable growth worldwide. I am most grateful to my colleague Julian Mueller-Kaler, who has envisioned this partnership with Microsoft and continues to lead the effort at Stimson.
Looking ahead, the Global Perspectives Responsible AI Fellowship will continue to serve as a forward-thinking initiative aiming to shape the future of ethical AI development. It will bring together even more diverse voices from the private sector, governmental bodies, civil society, and the tech community to cultivate AI innovations that are both responsible and transformative. I am delighted with the continuation of this work stream, as well as the growing partnership between the Stimson Center and Microsoft. As we venture into the next phase, I hope that you also stay engaged. Your participation is critical for the exchange of best practices and for collaborating on developing technologies that are equitable and beneficial for all.
Lessons from past tech importation can help guide responsible AI
Emerging economies are distinguished from advanced economies by the absence of a historical technological legacy and by the capacity to innovate in uncharted domains.
In the context of Kyrgyzstan, we are witnessing an interesting blend of innovation and resourcefulness. Instagram and Telegram, which were originally designed for different purposes, have transformed into thriving hubs for e-commerce, evolving into online marketplaces for trading cattle in rural areas, classifieds for used cars, and job boards for low-skilled vacancies. The unconventional use of these digital platforms has exemplified the adaptability of technology in the face of local needs.
At present, the Kyrgyz Republic has basic data privacy legislation, yet we lack the regulatory capacity to enforce it within the digital sphere. The absence of a regulatory framework for artificial intelligence compounds this enforcement challenge. However, the government is actively using technologies such as ChatGPT for writing and proofreading public speeches delivered by the Prime Minister of the Kyrgyz Republic. In another case, members of parliament have requested the Government to block the AI-powered digital platform TikTok because of inappropriate content, or have demanded an investigation into Yandex Go’s algorithmic transparency to ensure fair ride fares.
Meanwhile, the local community is in the nascent stage of developing a national voice recognition system that will help local users communicate with existing IT solutions in the Kyrgyz language. In other domains, such as wedding photography, individuals are now venturing into the realm of AI-generated love stories, crafting fictional narratives to accompany cherished memories.
These examples underscore the exploration of AI’s potential within Kyrgyz society, even in the absence of a clear roadmap and regulatory guidelines for responsible use. In this regard, there exist historical lessons from importing advanced technologies that can guide Kyrgyz society in harnessing the benefits of AI in an equitable and responsible way.
Ethical guidelines are crucial to responsible AI deployment.
Multistakeholder engagement is a cornerstone for successful technology deployment. For example, the introduction of the Estonian X-Road data interoperability platform, thanks to USAID’s support, faced initial inertia. However, through civil society’s consistent engagement with policymakers, Kyrgyzstan succeeded in showcasing the benefits of this imported IT solution. X-Road, locally known as “Tunduk”, proved instrumental in combating corruption, reducing bureaucracy, increasing tax revenues, and enhancing overall efficiency, setting the stage for digital transformation. Notably, as the COVID-19 pandemic struck, the government’s adept use of X-Road showcased the nation’s newfound digital resilience, reinforcing the importance of timely implementation.
Nowadays, the Cabinet invests heavily in information and communication technologies. Building on the success of X-Road, Kyrgyzstan can expand its deployment of AI solutions in public services. This could include AI-driven administrative processes, healthcare diagnostics in rural areas, or agriculture-related solutions for farmers. These initiatives can not only enhance efficiency but also improve the quality of public services for citizens.
Community Engagement: A decade ago, the local community came together to improve Kyrgyz language support in Google Translate. The outcome was a more consistent and user-friendly translation experience, showcasing the potential for AI imports to be adapted to local needs when nurtured by collective action. This example underscores the power of community-driven initiatives. Encouraging active participation and collaboration between community members, technology enthusiasts, and developers can lead to the development of AI solutions that are better adapted to local needs and linguistic diversity.
Strategic partnerships and civil society: International development organizations have played a pivotal role in Kyrgyzstan’s technological evolution. The establishment of the first internet exchange point in 2002, facilitated by the Soros Foundation, revolutionized local connectivity at a time when cross-border connectivity was notably expensive. Local internet service providers offered lightning-fast 100 Mbps connections almost for free, creating an ecosystem ripe for innovation. This conducive environment nurtured local talents, paving the way for the emergence of competitive startups across Central Asian countries.
Ethical Frameworks and Regulation: Developing ethical guidelines and regulations is crucial to ensure that AI deployments align with local values and laws. The case of TikTok and Yandex Go highlights the need for clear policies regarding content filtering and data privacy. Collaborative efforts between government, civil society, and international organizations can lead to the creation of balanced and effective regulations.
Aziz Soltobaev
Responsible AI will help enable successful tech importation
Like many countries in the Global South, Nigeria is lagging behind in the development of cutting-edge AI and other emerging technologies.
The country has adopted laws and policies to transform the importation of foreign technology into opportunities for building local capacity for innovation.
The National Office for Technology Acquisition and Promotion (NOTAP) Act provides a regulatory framework for the importation of foreign technology unavailable in Nigeria, ensuring that it is brought into the country under terms favorable to Nigerian entities and that Nigerians build local capacity over time.
The Revised Guidelines for Registration and Monitoring of Technology Transfer Agreements (TTAs) in Nigeria 2020 require that TTAs incorporate research activities to be carried out in-house and in collaboration with research institutions in Nigeria. Following registration of a TTA, NOTAP officers pay monitoring visits to the companies to ensure that Nigerian personnel are absorbing the technology in compliance with the domestication plan.
Responsible AI necessitates new regulation.
While current Nigerian legislation is relevant to AI technology, it is not broad enough to capture the contemporary issues posed when importing AI into the country. A country importing AI may inadvertently import the exporter’s values along with the technology, to the detriment of the national interest and the well-being of the local population. This may be the case where the parameters for development and the underlying datasets are not representative of the importing country, or where the importing country has failed to articulate legally binding policies or regulations governing how it will use AI technologies.
Nigeria’s intelligence and counter-intelligence agencies have increasingly invested in surveillance technologies. These are mostly shrouded in secrecy, but include some features that are AI-based. For context, Nigeria is ranked the 8th most terrorized country in the world. This demands that measures be put in place to help security agencies proactively tackle incidents of terrorism.
One of the many surveillance tools acquired by intelligence agencies was a passive off-the-air GSM (Global System for Mobile Communications) interception system, which can covertly collect cellular traffic in an area and analyze it to identify suspicious communication patterns using speech recognition, link analysis, and text matching. This is just one example of several privacy-infringing technologies that are deployed in Nigeria and are largely exempt from oversight. Local laws for importing technology do not envisage the unique risks posed by AI. Similarly, Nigeria’s data protection laws extend a wide range of exemptions when data processing is for national security purposes.
There is a need to enact amendments to laws regulating technology importation, taking into consideration the peculiarities of emerging technologies and representation in datasets, and mandating impact assessments, especially for technologies acquired by state actors.
Akintunde Agunbiade
Akintunde Agunbiade is an Associate at AELEX, a leading law firm in Nigeria and Ghana, with his work primarily revolving around the Technology, Media, and Telecommunications Practice Group. Here, he advises clients in the creative space, startups, and tech companies seeking to incorporate AI into their products. He is also an AI Ethics & Governance Researcher and the author of the book Artificial Intelligence & Law: A Nigerian Perspective (2019).
Latin America’s rise in AI
Artificial Intelligence (AI) is arguably the most transformative technology of our time, often heralded as the “biggest commercial opportunity in the world economy.”
The rise of AI has triggered a global technological race, where the nations that lead in AI development will define the rules of the world order in the coming decades. Latin America, however, faces a significant challenge as many Latin American countries find themselves primarily importing and consuming AI technology rather than actively developing it. This essay delves into the consequences of Latin America’s dependence on imported AI, the need to become developers of technology, and GENIA’s LATAM 4.0 Project that aims to shift the region’s trajectory towards becoming an AI superpower.
When Latin America relies on AI systems developed in advanced economies, it exposes itself to significant risks. AI technologies, if not adapted to the nuanced social, economic, and environmental contexts of the region, may not work effectively or could result in AI-related harms.
To leapfrog into a new stage of development, Latin America must transition from being mere consumers to active developers of AI technology. This shift would empower the region to create AI systems that align with its specific needs and priorities, thus reducing the risks associated with imported AI. Developing AI locally can stimulate innovation, drive economic growth, and enhance the region’s global technological standing.
Latin America must transition into being active developers of AI technology.
Recognizing the need for Latin America to take an active role in AI development, GENIA has launched the LATAM 4.0 Project. This multi-stakeholder coalition brings together private sector entities, government bodies, academic institutions, and civil society organizations to implement the first-ever Regional AI Strategy in the Western Hemisphere. The project aims to harness the region’s diverse data resources and integrate marginalized perspectives into AI systems. By doing so, Latin America can bolster its research, development, and deployment efforts, ultimately elevating its global technological leadership.
GENIA’s dedication to regional advancement is evident in its partnership with the Presidency of the Dominican Republic, which seeks to extend the impact of LATAM 4.0 across Latin America and implement a common strategy in the region. The organization is actively working with several governments to expand the project, with the vision of integrating all of Latin America’s democracies into a unified regional AI ecosystem.
However, Latin America cannot achieve technological independence in isolation. To this end, GENIA is working to partner with advanced economies to establish cross-regional cooperation on AI. One notable effort is GENIA’s support for Congressional Resolution H.Res.649, which calls on the United States to “champion a regional artificial intelligence strategy in the Americas.”
AI’s significance in shaping the future cannot be overstated. Latin America’s current role as an AI consumer rather than a developer poses a substantial risk to the region’s economic and social well-being. To address this challenge, initiatives like the LATAM 4.0 Project are pioneering efforts to empower Latin America to actively participate in AI development, aligning AI systems with the region’s unique needs and challenges. By promoting cross-regional cooperation, Latin America can position itself as a significant player in the global AI landscape and influence the global governance of artificial intelligence.
Jean García Periche
Jean García Periche is the co-founder and President of GENIA Latinoamérica, a research and development (R&D) regional platform with the mission of including Latin America into the global development of Artificial Intelligence. Through GENIA, Jean is leading the LATAM 4.0 Coalition, which integrates businesses, startups, universities, NGOs, and governments to implement a Regional AI Strategy in Latin America.
Training the next generation of Responsible AI Entrepreneurs
MBA programs can set new standards of business practices for the next generation of responsible AI entrepreneurs by going beyond metrics of economic success and promoting humanity as a superpower.
The entrepreneurial ecosystem across sectors is boasting new products and services that have integrated Artificial Intelligence (AI) systems as a core element of their value proposition. In Latin America, the number of AI startups doubled from 2018 to 2020, reaching a market size of about $4.2 billion, an estimated 38,000 employees, and a strikingly young median founder age of 29. Globally, aspiring entrepreneurs’ interest in MBA programs that offer an AI-related advantage has also surged, as AI took the top spot for content that prospective students want to see in their degree programs, according to CarringtonCrisp’s Tomorrow’s MBA report.
In the realm of formal education, MBA programs can shape and set new standards of business practices, including the training of the next generation of Responsible AI Entrepreneurs who anchor their innovative business ideas in respect for human dignity, considering risk-based approaches to their often-disruptive business models. Below I outline five recommendations for Deans of Business Schools, and for anyone embarking on the journey to ideate, develop, implement, and follow up throughout the life cycle of AI algorithms through entrepreneurship:
Approaches to AI should go beyond metrics of success.
Program Curricula. Any approach to Artificial Intelligence can and should go beyond metrics of economic success, such as efficiencies and increased margins, to stress the importance of interdisciplinary research and collaboration, as well as notions of effectiveness, accountability, liability, and social impact beyond an economic one. The good news is that there are resources available for Business Schools as well as for entrepreneurs to start their responsible AI journey on the right foot. Examples include UNESCO’s Recommendation on the Ethics of AI as a point of departure for teaching the subject, IDB Lab’s AI Ethics Self-Assessment for Actors of the Entrepreneurial Ecosystem, C Minds and Meta’s AI Ethics Booklet for Entrepreneurs, the WEF’s Responsible Use of Tech series of case studies, and IDB’s MOOC How to Use AI Responsibly, among others.
“Hard” & “Core” skills. While some entrepreneurs see themselves as having business acumen rather than a technical understanding of AI, there is value in teaching leadership skills in technology with the same importance placed on acquiring technical skills. There is an opportunity to highlight the role of social scientists in shaping the responsible AI ecosystem and to reclaim the humanities as a superpower, just as the Virginia Tech Institute for Leadership in Technology is doing by offering the first executive degree in the Humanities. In an era where trust is eroding, newer generations (Gen Z in particular) are looking for products and services that align with their values, are congruent with their business practices, and have an impact on society. Both aspiring entrepreneurs and business schools should keep this in mind.
Female Entrepreneurs. I opened with a paragraph on the exciting exponential growth of the AI ecosystem in Latin America, and I did so deliberately to get to this point: the same study highlights that 92% of these companies were founded by male entrepreneurs, and that only 15% of their employees were female. This is staggering but can and should be addressed. MBAs have a role to play in closing the gender gap in tech entrepreneurship by intentionally and strategically creating incentives for women to join their Schools, as well as by developing Mentorship Programs and Diversity, Equity, and Inclusion (DEI) initiatives, among other ideas.
The GovTech Opportunity. As a service-provision scheme in which entrepreneurs help governments solve the internal digital challenges of public administrations, there is a business case for GovTech as a growing market of more than US$400 million, as well as a force for good to advance the SDGs. To seize this opportunity, Business Schools can team up with Policy Schools to better understand the nuances and challenges of working with government officials, to teach entrepreneurs how to navigate these relationships, and to approach project management with more empathy on both sides.
Lead by Example. By this I mean: use the AI tools at your disposal. I recently attended the AMBA & BGA Latin America Deans and Directors Conference and noticed that a common question among Deans had to do with handling Large Language Models (LLMs), such as ChatGPT, in educational settings. My recommendation to them was to be open to experimenting with these tools, carefully and critically, leveraging guides like UNESCO’s ChatGPT and Artificial Intelligence in Higher Education as well as OpenAI’s Teaching with AI prompting resources.
The key message that I want to convey to education leaders in the business sector is that Silicon Valley’s once-revered culture of “move fast and break things” is long over; the idea of a Minimum Viable Product is being replaced by that of a Minimum Virtuous Product. Today, there is a wave of new voices, organizations, and initiatives, like the Global Perspectives: Responsible AI Fellowship from the Stimson Center and Microsoft, that put the spotlight on Global Majority practitioners who are challenging the status quo and starting debates and conversations from novel angles to approach AI responsibly.
Cristina Martinez Pinto
Cristina Martínez Pinto is the Founder and CEO of the PIT Policy Lab. As a tech policy entrepreneur, she works to advance people-centered technological development and technology governance in Latin America. She has worked as a Digital Development Consultant at the World Bank, led C Minds’ AI for Good Lab, and co-founded Mexico’s National AI Coalition IA2030Mx.
Harnessing AI for peace
The transformative potential of responsible AI to advance world peace must be considered by peacebuilding organizations when planning preventive efforts.
In today’s world, peace is unraveling across many regions — from the escalating conflict in Gaza to the prolonged turmoil in Ukraine and Sudan, the recurring violence in Myanmar, and the surge in gang violence in Ecuador. These conflicts, along with many less visible ones, are further exacerbated by global pandemics, natural disasters, economic downturns, humanitarian emergencies, migration, and the looming threat of climate change. These pressing challenges underscore the imperative to advance the peace agenda.
AI can reveal insights for peacebuilding organizations.
Exploring the untapped potential of AI for peace – examples to chart the course
While applications of AI in warfare have garnered substantial attention, with nations investing heavily in autonomous weapons systems, surveillance, and cyber capabilities, the transformative potential of AI in advancing peace remains largely overlooked. AI can serve as a powerful tool in data analysis, offering new avenues for conflict prevention, peace negotiations, mediation, and human rights protection. Peacebuilders are now exploring innovative strategies, recognizing the potential of AI to navigate the complexities of peacebuilding work.
A promising application of AI in the peacebuilding field involves leveraging data to evaluate conflict early-warning and early-action systems. Predictive analytics solutions, like the Violence & Impacts Early-Warning System (VIEWS) led by Uppsala University and the Peace Research Institute Oslo, analyze extensive datasets to identify early signs of conflicts, providing timely alerts for preventive measures. In instances like the Hala System in Syria, AI-driven early warnings prove lifesaving. Hala’s system enabled rescue teams to prepare as warplanes approached, mitigating harm during airstrikes.
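VIEWS itself combines large datasets with ensembles of statistical and machine-learning models. Purely as a minimal, self-contained sketch of the early-warning idea (the data, window size, and threshold below are all hypothetical, not drawn from VIEWS), an alert rule that flags a week whose event count rises sharply above its trailing baseline might look like:

```python
from statistics import mean, stdev

def escalation_alerts(weekly_events, window=8, z_threshold=2.0):
    """Flag weeks whose event count is far above the trailing baseline.

    weekly_events: list of conflict-event counts, oldest first.
    Returns the indices of weeks that trigger an alert.
    """
    alerts = []
    for i in range(window, len(weekly_events)):
        baseline = weekly_events[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Guard against a perfectly flat baseline (stdev of 0).
        if sigma == 0:
            sigma = 1.0
        if (weekly_events[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Hypothetical data: a quiet period followed by a sudden spike.
counts = [3, 4, 2, 5, 3, 4, 3, 4, 3, 18]
print(escalation_alerts(counts))  # → [9] (the spike in the final week)
```

A real early-warning system would replace this simple z-score rule with forecasting models trained on structured conflict data, but the core pattern, compare recent signals against a baseline and alert on deviation, is the same.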
Likewise, AI can be used in monitoring online hate speech and predicting the likelihood of conflict escalation, revealing valuable insights for peacebuilding organizations to plan preventive efforts. AI can also be beneficial for the work of human rights defenders and activists in collecting evidence of war crimes. For example, VFRAME collaborates with the Yemeni Archive to minimize processing times, enhance analysis, and identify the use of illegal munitions in war zones. Utilizing computer vision technologies, VFRAME develops and deploys neural networks trained on synthetic data to analyze conflict zone media.
AI can also be helpful in streamlining peace negotiations by analyzing extensive data, identifying patterns, and offering valuable insights to enhance decision-making, contributing to more effective conflict resolution. In Libya, AI-facilitated digital dialogues engaged 1,000 citizens, leading to a consensus and establishing an interim government within four months. Moreover, AI-driven sentiment analysis aids in understanding conflict narratives, allowing practitioners to work towards de-escalation and altering negative narratives.
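Production sentiment analysis relies on language models trained on large corpora; as a deliberately toy illustration of what “tracking narrative tone” means (the mini-lexicon and scoring rule below are invented for this sketch), a lexicon-based scorer could be:

```python
# Invented mini-lexicon; real systems use trained sentiment models.
NEGATIVE = {"attack", "threat", "enemy", "traitor", "violence"}
POSITIVE = {"dialogue", "ceasefire", "agreement", "peace", "reconciliation"}

def narrative_tone(text):
    """Score a text from -1 (hostile) to +1 (conciliatory) by word counts."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

print(narrative_tone("They call for dialogue and a ceasefire agreement."))  # → 1.0
print(narrative_tone("The enemy is a threat; expect violence."))            # → -1.0
```

A practitioner would track such scores across media sources over time, treating a sustained shift toward hostile language as one signal, among many, of escalating conflict narratives.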
Paving the path to ethical AI for peace applications
As we navigate the intricate intersection of AI and peace, ethical considerations become paramount. The multifaceted ethical complexities surrounding AI’s role in peace—ranging from biases and privacy issues to transparency and accountability—demand our steadfast attention. At “AI for Peace” we are embedding ethics in the design of AI interventions as a proactive approach to addressing the unintended consequences at the convergence of data and peace. By integrating principles such as “do no harm” and “conflict sensitivity” into the realm of algorithms, we safeguard against potential harms and uphold ethical standards.
By giving priority to participatory and inclusive AI, we involve local communities, stakeholders, and experts in the design, development, and deployment of AI technologies for peace. While AI may not eliminate violent conflict entirely (ultimately only humans can), it undeniably emerges as a valuable tool in the peacebuilder’s toolkit, contributing significantly to the establishment of lasting peace, with ethical considerations serving as the compass guiding our path forward.
Branka Panic
Branka Panic is the AI for Peace Founding Director, a political scientist, and an expert in international security, international development policy, and peacebuilding. She is a CIC Non-Resident Fellow focusing on researching the utilization of data-driven approaches to peacebuilding and prevention, conflict early warning/early action, and designing the pathways to establishing a Peacebuilding Data Hub.
AI for all: bridging the inclusivity gap
Artificial Intelligence (AI) has rapidly evolved in recent years, promising transformative changes across industries and societies. However, realising its full potential requires addressing two salient challenges: translating complex AI concepts and navigating culturally sensitive or taboo topics concerning bias and fairness. Endeavours like the ones from the Swahilipot Hub Foundation are making a difference and moving the community closer to achieving these goals.
AI has long been perceived as a realm shrouded in technical jargon and complexity, alienating those without a background in computer science. Bridging this knowledge gap is crucial to ensuring that AI is not reserved for the technologically elite, but is accessible to all. To this end, educational initiatives are vital. The Swahilipot Hub Foundation is a Mombasa-based NGO aimed at empowering youth in the Technology, Creative and Heritage sectors to grow careers and enhance the economic stability of people living in the coastal region of Kenya. They’re cultivating a data-centric culture by training over 900 youth in Mombasa, equipping them with essential skills in data collection, analysis, and presentation. More significantly, they’ve emphasized data privacy and policies, instilling in these young individuals the importance of ethical data practices. These efforts empower the youth to engage in discussions about data confidently and responsibly, breaking down the barriers of complexity.
Swahilipot Hub makes AI accessible for Mombasa youth
Beyond these educational efforts, Swahilipot Hub Foundation exemplifies data-driven decision-making. Research conducted in 2018 by the Global Opportunity Youth Network, a multi-stakeholder initiative committed to creating place-based systems shifts for youth economic opportunity, shows that there are 562,000 youth in Mombasa, 44% of whom are unemployed and 66% of whom are estimated to be “Opportunity Youth” – young people aged 15-35 who are out of school, unemployed, or working in informal jobs. These numbers have grown even further since the COVID-19 pandemic. With this in mind, Swahilipot Hub has compiled a growing database of over 18,000 youth in Mombasa, meticulously assessing their skills, education levels, and interests. This database has allowed them to link these young people with opportunities such as upskilling training, jobs, and scholarships. What’s truly remarkable is their transition to automation with the “Fursa” platform, driven by machine learning algorithms. This automated system streamlines the process of matching youth with opportunities, not only showcasing the practical benefits of AI but also providing a scalable solution for inclusivity.
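Fursa's actual matching algorithm is not described here; as a purely hypothetical sketch (all names, skills, and scoring are invented for illustration), the core idea of skills-based matching can be as simple as ranking opportunities by skill overlap:

```python
def match_score(youth_skills: set[str], required: set[str]) -> float:
    """Fraction of an opportunity's required skills the candidate has."""
    if not required:
        return 0.0
    return len(youth_skills & required) / len(required)

def best_matches(youth_skills, opportunities, top_n=3):
    """Rank opportunities by skill overlap, highest first."""
    ranked = sorted(
        opportunities.items(),
        key=lambda kv: match_score(youth_skills, kv[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_n]]

# Invented example data
opportunities = {
    "data-entry job": {"data collection", "spreadsheets"},
    "analytics scholarship": {"data analysis", "presentation"},
}
print(best_matches({"data collection", "data analysis"}, opportunities))
```

A production system would layer richer signals (education level, interests, location) and learned weights on top of a baseline like this, but the baseline already shows why a structured skills database makes automated matching scalable.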
Another formidable challenge in the realm of AI is navigating culturally sensitive and taboo topics, particularly those related to bias and fairness. AI systems that perpetuate biases or unfairly discriminate can have detrimental consequences for society. Swahilipot Hub Foundation’s collaboration with the Mozilla Foundation offers an enlightening example of community involvement in addressing this issue. Together, they have organized contribute-a-thons and hackathons for the Common Voice project, where community members actively contribute Swahili sentences and validate them through text and speech. This inclusive approach ensures that AI development is culturally sensitive and respectful, reflecting the diverse voices and perspectives of the community. It not only fosters inclusivity but also demonstrates that AI can be a tool for representing and celebrating culture, not erasing it. Furthermore, through technology conferences like Pwani Innovation Week, Swahilipot Hub has brought conversations around the Fourth Industrial Revolution, AI, and the future of work to the Mombasa community, further emphasizing the importance of addressing these AI-related issues in a culturally sensitive and community-centric manner.
The Jitume Program, spearheaded by state-owned Konza Technopolis, empowers Kenyan youth with digital skills and job opportunities through 117 Jitume Centers equipped with computers and internet access, offering 16+ training programs in collaboration with renowned institutions like Thunderbird School of Global Management and Arizona State University. Their primary goal is to combat Kenya’s high unemployment rate (10.4% as of 2020). Swahilipot Hub Foundation’s partnership with Konza Technopolis reflects their dedication to AI education, introducing a “Jitume” center at Swahilipot Hub to provide Mombasa’s youth with accessible AI education. This initiative will continue to equip youth with the knowledge to engage in AI discussions and contribute to solving Africa’s challenges, bridging the gap between theory and practical applications while demystifying AI’s potential impact on careers and society.
In conclusion, Swahilipot Hub Foundation’s multifaceted approach to addressing the complexities of AI and ensuring inclusivity sets a remarkable example for organizations worldwide. Their educational initiatives empower youth to engage in AI discussions, while community-driven projects guarantee that AI reflects cultural diversity. Automation, government collaboration, and practical training opportunities further underscore their dedication to making AI accessible and beneficial to all. “AI for All: Bridging the Gap Between Complexity and Inclusivity” isn’t just a topic; it’s a vision realized by organizations like Swahilipot Hub Foundation, demonstrating that responsible AI practices are attainable, and the power of AI can indeed be harnessed by all.
Ziri Issa
Ziri Issa is an accomplished professional and visionary leader in the field of technology and innovation. With a BA in Information Technology from Maseno University, Ziri currently holds the position of Head of Technology and Innovation at Swahilipot Hub Foundation, where he is dedicated to fostering a culture of innovation in Mombasa, Kenya.
Advancing digital justice in the Global South
A Human Rights Framework for Governing Imported AI
The central issue in artificial intelligence (AI) governance is the choice between self-governance by technology companies and legal frameworks established by governments. If there is agreement on the latter, the debate extends to whether international standards or national legislation should be prioritised.
A robust foundation for responsible AI
International human rights law (IHRL) provides a universal set of norms and clearly defines prohibited actions, establishing a shared language for addressing human rights concerns raised by imported AI. It also offers well-established tests to assess whether an action constitutes a reasonable restriction or a harmful violation of human rights. This approach can help determine that, within a specific context, an AI system must not be deployed, even if its bias-related issues were fixed, because it still violates other interdependent human rights.
Under IHRL, States are legally obligated to safeguard human rights. IHRL also applies to private actors through the UN Guiding Principles, which require businesses to respect human rights and mitigate adverse impacts on them. A human rights-based framework ensures the different actors involved in the AI lifecycle are held responsible for their actions. IHRL also mandates the adoption of accountability mechanisms and provides guidance on the measures necessary to protect human rights.
Moreover, a human rights-based framework can reinforce this respect by requiring human rights impact assessments to be carried out during all phases of the AI lifecycle and by ensuring that these assessments encompass all human rights that AI systems can adversely affect. This offers, at an early stage, a space to adjust the design or even to halt development if human rights concerns cannot be addressed. These assessments should be accompanied by periodic post-implementation review and the adoption of an AI vigilance system during the deployment phase to detect post-operational risks as early as possible and promptly correct them.
Additionally, external monitoring, through independent, appropriately resourced, and qualified oversight bodies, is central to a human rights-based approach. These bodies can play an essential role in monitoring impact assessments, addressing discovered risks, and investigating AI systems to determine deployment conditions. They also play a crucial role in overseeing compliance by both State and private actors with their human rights obligations.
Finally, IHRL guarantees the right to an effective remedy for human rights violations. Both States and businesses are required to establish redressal channels to ensure access to justice for those affected. In this regard, ensuring access to judicial channels and establishing independent redressal bodies, such as ombudsman services and industry regulatory bodies is essential to attain effective remedy. Technology companies, in particular, should maintain human-in-the-loop remedy mechanisms and avoid automated remedy systems that lack the nuance necessary to comply with human rights standards.
Better-informed individuals can help develop their societies
Toward Responsible Imported AI in the Global South
Technology companies can play a key role in promoting a human rights-based framework for AI governance by advocating for IHRL as a common, universal understanding during discussions with governments. This is particularly crucial when dealing with governments in the Global South, where public interest may not always be a top priority when importing AI, and where regulators can be skeptical of opening space for multidisciplinary discussions during the law-making process.
Building trust with both the public and governments in the Global South is essential. Technology companies can achieve this through open communication and active participation in the law-making process. This could help establish a solid public-private partnership built on mutual trust for a thriving AI industry in the Global South, anchored on the principles of responsible AI.
Technology companies can also offer technical expertise through training and workshops to those involved in the policy and law-making process. They can support countries in the Global South keen on adopting national AI strategies to ensure alignment with international practices and provide technical support on utilising AI systems in implementing digital transformation strategies.
Furthermore, technology companies can contribute to the development of a robust AI governance environment in the Global South by sharing best practices through conferences and expert roundtables and by collaborating on research with relevant national institutions and think tanks. They can also provide access to resources, such as reports and toolkits, on developing and deploying trustworthy and responsible AI. This will allow stakeholders from the Global South to build a better understanding of internationally recognised principles and practices of responsible AI while considering the local context, and to propose reliable and actionable findings and recommendations.
In addition to these efforts, the following recommendations would be valuable:
- Establishing a multi-disciplinary forum of experts to address current and future uses of imported AI, respond to human rights risks, and integrate human rights into the AI lifecycle.
- Conducting public consultations with stakeholders, especially marginalised groups and affected communities, before releasing imported AI systems to the market.
- Implementing information literacy programmes and awareness campaigns to educate the public about AI and its potential impact.
Active engagement by technology companies will not only benefit society but also provide them with a valuable understanding of societal differences and cultural contexts. This, in turn, grants them influence over governments to ensure responsible AI deployment in the public sector, while supporting automated, digitised public services with greater affordability, accessibility, and protection against corruption. Ultimately, a human rights-based framework for AI governance fosters a future where AI is imported and deployed responsibly.
Ibrahim Sabra
Ibrahim Sabra, a Chevening Scholar, is a legal and policy expert on AI governance, internet regulation, and digital rights. He works at the University of Vienna’s Department of Innovation and Digitisation in Law and has diverse experience with leading global institutions, notably Columbia University. Sabra advocates for responsible technology that safeguards human rights and internet freedom, especially in the Global South.
Upskilling for responsible computing
By 2030, young Africans may constitute up to 42% of the world’s youth population [1]. As the youth population increases, so does the proliferation of technological use and innovations. One outcome of technical upskilling among African youth is an increase in African developers creating and hosting code on GitHub, the world’s largest code-hosting platform [2].
Thus, a young population that is upskilling itself means potential long-term consumers and creators of technology live in Africa. However, existing challenges, such as barriers to Internet access, technology-facilitated violence, and digital surveillance, remain a threat to the optimal use of technology. As more young Africans gain the skills to create technology, they need to be equipped with multidisciplinary skills that support the ethical reimagining and design of innovations.
This approach is in line with recent research that underlines the importance of considering the ethical, legal, and socio-cultural impact of Artificial Intelligence (AI), as such technologies continue to increase in demand in Africa [3]. Current efforts toward this goal are evident in the at least 35 African countries implementing Data Protection Legislation [4]. However, significant gaps remain in response to the call by UNESCO, driven by research across 32 African countries, which identified two important educational priorities: strengthening capacity in teaching AI and addressing the ethical implications of these technologies [5].
Computer education can be centered on tech for good
To address this call to action, the Responsible Computing Challenge (RCC) was launched in 2018 by the Mozilla Foundation [6] to explore how Computer Science curricula could be redesigned to include ethical, multidisciplinary, and holistic approaches that empower students to consider the social implications of innovations. RCC was first launched in the USA and then expanded to Kenya in 2023 with an inaugural cohort of eight Kenyan universities.
Across these universities, faculty have redesigned their computing curricula to include ethical approaches in robotics, computer programming, building animations, student teams, and design thinking. Implementing these curricula has impacted over 1,000 students, with at least 45 faculty members taking part in the redesign and teaching of the new curricula. These early results show that when computing curricula are developed with stakeholders, local contexts can be considered when approaching issues of computing and AI.
A second example of responsible computing in Kenya shows how complementing classroom learning for tech students provides an opportunity to inculcate responsible thinking in innovation. KamiLimu [7] is a non-profit organization that fills the skills gap between classroom learning and industry competitiveness by offering an 8-month structured mentorship program to tech students at Kenyan universities. Responsible computing is taught within the innovation track, where beneficiaries learn to use ethical principles and approaches to build human-centered solutions to some of our most pressing socio-economic problems.
For instance, OrganiXpert is a data science innovation built by students in the program, who implemented a data-driven recommendation algorithm to suggest the right amount of organic fertilizer needed to supplement soil nutrients. The students worked with small-scale farmers to understand their yield output, fertilizer use, and the potential implications of using such a predictive model, thus implementing a transparent model with the end users in mind.
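OrganiXpert's actual model is not detailed here; as a hypothetical sketch of the kind of transparent, data-driven recommendation it describes (the nutrient target and compost nitrogen content below are invented for illustration), a simple rule might be:

```python
def fertilizer_kg_per_hectare(soil_n: float, target_n: float = 50.0,
                              n_per_kg_compost: float = 0.015) -> float:
    """Kilograms of compost per hectare to close the nitrogen gap.

    soil_n and target_n are in kg N/ha; the compost is assumed
    (illustratively) to contain 1.5% nitrogen by weight.
    """
    deficit = max(0.0, target_n - soil_n)
    return round(deficit / n_per_kg_compost, 1)
```

Because every step of such a rule can be explained to a farmer in plain terms, it illustrates how designing with end users in mind naturally favors transparent models over black boxes.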
These two examples of responsible computing within the Kenyan context show that it is possible to support the ethical design of technology from a skilling standpoint. Computing education can be offered responsibly by understanding context and needs, using a structured approach that centers on the needs of the end users, using multidisciplinary models, mobilizing human and financial resources, and centering tech for good.
For the foreseeable future, if industry demand for multidisciplinary skills among technologists continues to increase, responsible computing models will be crucial in ensuring that the innovations we build and consume have mitigated their adverse ethical implications.
References
1. ICPD report: https://rutgers.international/wp-content/uploads/2023/07/Sub-Saharan-Africa-Rutgers-ICPD-interactive.pdf
2. GitHub 2022 Report: https://octoverse.github.com/2022/global-tech-talent
3. Introducing Responsible AI in Africa: https://link.springer.com/chapter/10.1007/978-3-031-08215-3_1
4. Data Protection Report: https://assets-global.website-files.com/641a2c1dcea0041f8d407596/644d2c11739815a42ff6bd88_Round-up-of-data-protection-Africa-2022.pdf
5. Artificial Intelligence Needs Assessment Survey in Africa: https://unesdoc.unesco.org/ark:/48223/pf0000375322/PDF/375322eng.pdf.multi
6. Responsible Computing Challenge: https://foundation.mozilla.org/en/responsible-computing-challenge/
7. KamiLimu: http://www.kamilimu.org/
Chao Mbogho
Dr Chao works at the Mozilla Foundation, leading the inaugural Responsible Computing Challenge in Kenya, which supports local universities as they redesign their curricula around ethical AI and innovation. She’s the founder and Program Lead of KamiLimu, a nonprofit organization that upskills tertiary-level tech students. She holds a PhD in Computer Science from the University of Cape Town, an M.Sc in Computer Science from the University of Oxford, and a B.Sc in Mathematics and Computer Science from Kenya Methodist University.
Artistic Expression and Democracy
AI has the potential to redefine the dynamics of artistic expression through democratizing access to artistic tools, becoming a catalyst for positive change.
In the Global South, the intersection of artistic expression and democracy presents a complex scenario with its own challenges and opportunities. Limited access to resources, censorship, political instability, and cultural hegemony are among the factors that suppress creativity and impede the growth of various artistic forms.
Art can change the world through inspiration
Art is a powerful tool that can bring people from all over the world together. It creates a sense of hope and unity. Art can also be a way to express yourself and share your culture with others, helping break down barriers and build bridges. In low-resource environments, where the arts can be of primary importance yet access to artistic resources is limited, AI acts as a democratizing force. Through AI-powered tools, artists gain unprecedented access to a variety of media, allowing them to express their creativity despite the lack of traditional resources and to contribute to cultural narratives that enrich the societal tapestry.
I believe art can change the world. It can inspire people, encourage them to dream big, and help them never lose hope. My dedication is to how the arts can be used to impact the world positively, and I strongly believe AI can be a means toward this end. I look forward to witnessing how artificial intelligence will assist worldwide cooperation.
Artists in the Global South commonly draw on rich heritage, histories full of resistance, and a quest for change. They use art to empower themselves, advocate for change, and help their communities recover from disasters. Furthermore, digital platforms and online media have created new possibilities for artists, allowing them to communicate directly with their communities and to manage their own media platforms independently.
Responsible AI can play a crucial role in addressing the challenges faced by artists in the Global South. AI-powered tools can help amplify voices, facilitate cross-cultural dialogue, and mitigate censorship by providing channels for artistic expression. Responsible AI frameworks can promote transparency, fairness, and inclusivity in the distribution of artistic resources and opportunities.
In the context of democracy, the infusion of AI into the arts has implications that extend beyond the creative process. AI algorithms, when designed responsibly, have the potential to curate and disseminate diverse artistic content, challenging existing norms and creating a more pluralistic cultural discourse. This democratization of content distribution ensures that a broader spectrum of voices is heard, contributing to the democratic principles of free expression and cultural diversity. However, the deployment of generative AI in the arts also raises ethical considerations, necessitating a responsible and culturally sensitive approach. Guiding the implementation of AI in ways that align with the values of the communities it serves is crucial.
Recently, I collaborated with a Boston-based foundation on a project that brings together high school students from the United States and countries in the Global South. Through online platforms, these students engage in discussions surrounding Sustainable Development Goals (SDGs). They exchange ideas, perspectives, and experiences, culminating in the creation of collaborative books centered on a specific SDG. This initiative not only fosters cross-cultural dialogue and understanding, but also empowers the younger generation to address global challenges and contribute to sustainable development efforts through creative expression.
In essence, the convergence of art and AI holds great promise for advancing cultural democracy and promoting cross-cultural dialogue and empathy.
Eren Yagmuroglu
Eren Yagmuroglu is a strategic advisor known for his contributions in creating, leading, and driving organizations, brands, and societies. He has played a role in transforming the music industry as a member of the Recording Academy Grammy Awards since 2015 and the International Academy of Digital Arts & Sciences. He actively participates in the Harvard Business School Online Istanbul Chapter and contributes to the Harvard Business School Online Community.