The new ‘Egyptian Charter for Responsible AI’: between interpretation and enforcement
Last month, Egypt’s National Council for Artificial Intelligence (AI) announced the launch of the “Egyptian Charter for Responsible AI”. Unveiled under the chairmanship of the Minister of Communications and Information Technology, H.E. Dr. Amr Talaat, the charter is the country’s first attempt to articulate the need for the responsible development, deployment, and subsequent management of AI.
The charter sets forth a comprehensive framework to ensure that AI systems are designed and utilized in accordance with the values of fairness, transparency, human rights, and accountability. To that end, it emphasizes six key principles: human rights and dignity; fairness and equity; transparency and explainability; accountability and responsibility; privacy and data protection; and, finally, collaboration and international cooperation.
Interested in learning more about the Egyptian Charter for Responsible AI? You can check it out here.
This is undoubtedly a commendable milestone that demonstrates the country’s dedication to staying ahead of the curve and sets the stage for fully embracing a future of emerging technologies. It also comes amid a global call for the regulation of AI and the development of guardrails to facilitate such regulatory practices.
However, articulating the necessary guidelines is one thing; actualizing and implementing those principles is quite another – an undertaking that requires us to pay attention to certain features characteristic of governance in the region that can make AI regulation especially challenging.
The difficulty of implementing adequate regulatory mechanisms for ethical AI is not lost on policymakers. So, although the charter calls for the establishment of regulatory frameworks, guidelines, and standards that align with the principles outlined in the charter, it is not entirely immune to loopholes and possible implementation gaps.
Regulatory and ethical considerations
In general, guidelines for responsible AI can be difficult to uphold for several reasons. Egypt’s charter is no different: its effectiveness hinges on its interpretation and its subsequent enforcement. These two axes, on which the charter stands, deserve close inspection.
First, different stakeholders in the AI lifecycle – including developers, deployers, and users – may have varying interpretations of the principles, as well as varying degrees of accountability. This problem is compounded by the charter’s lack of specificity, symptomatic of policymakers’ occasionally ambiguous phrasing. Misinterpretation can lead to inconsistencies in responsible deployment, as well as gaps in implementation and enforcement.
For example, the charter emphasizes international cooperation as a key principle, but ethical loopholes may arise if there are disparities in the interpretations of ethical standards and enforcement across different countries.
To that end, consistent and rigorous enforcement mechanisms – regularly reviewed and harmonized – are crucial for effective compliance with regulations. However, it should be noted that Egypt often suffers from a particular lag in executive regulation and enforcement that could make implementing such guidelines difficult.
In recent years, Egypt has seen a rise in the number of laws and regulations pertaining to the tech sector. While that indicates a serious commitment to regulating this fast-paced transformation, following up with effective implementation has been a challenge. As legal expert Mahmoud Shafik notes in an article cataloguing all recent tech regulations in the country, “The complexities surrounding the drafting, lobbying, and ratification of laws and their executive regulations often result in a lag between the creation and implementation of these regulations”.
This is further reflected in literature on regulation in Egypt which identifies a critical gap between the ‘number and quality of legal reforms on the one hand, and the actual enforcement of these reforms on the other’.
The lag in actual enforcement can be attributed to a number of factors. These include long-standing over-centralization, which naturally affects the enforcement of laws, as well as complicated political and socio-economic structures. You can read more about this in a previous article on Wamda, in which we discuss several structural challenges pertaining to AI adoption and regulation in MENA.
The establishment of new government AI regulations marks a significant milestone in the responsible development, deployment, and management of AI. By addressing ethical considerations, promoting accountability and international cooperation, and protecting individual rights, these regulations pave the way for a future in which AI technologies contribute positively to society.
However, it is crucial that regulations are reviewed regularly and remain adaptable, encouraging innovation while ensuring that AI is developed and utilized in a manner that aligns with our collective values and aspirations. It is also important to acknowledge the difficulty governments face in creating adequate regulatory frameworks that keep pace with the technology’s advancement – or, just as importantly, with each other.