By Marina Simonian

Why is AI Literacy Important for Egypt’s Gen Z & Alpha?

Imagine you are the teacher of a classroom of 10 jolly five-year-olds, their eyes wide open and gleaming with curiosity. Now consider that, according to the World Economic Forum, with the rise of Artificial Intelligence (AI) technologies like ChatGPT, up to 6 or 7 of those students will one day hold a job that does not yet exist!


Indeed, it is hard to imagine a job that does not yet exist – which is exactly why we need to be proactive in implementing holistic learning and inclusive STEAM education (Science, Technology, Engineering, the Arts, and Mathematics).


Our next generation of leaders ought to understand what AI is and how it works – this is what we call AI literacy.


In brief, AI refers to computer programs or systems that can perform tasks in ways that make them seem smart – that is, they exhibit human cognitive abilities, or human intelligence. Let’s rewind a step – what does intelligence mean? Stephen Hawking once said that “Intelligence is central to what it means to be human”. Following that logic, intelligence encompasses much of what it means to be human: the ability to understand, learn, adapt, and interact! Now, let’s fast-forward a step – AI, then, is simply a computer system able to perform such ‘intelligent tasks’ in a manner analogous to humans.


AI is not new – it has officially been around for more than 65 years, albeit not at the capability level of today’s systems! The term ‘Artificial Intelligence’ is said to have been coined at a summer conference at Dartmouth College in 1956. However, Asimov’s ‘Three Laws of Robotics’ first appeared more than a decade earlier, in 1942, laying an early framework for the ethical and safe development of robotics technology.


In 1950, Alan Turing proposed the Turing Test to determine whether a machine is intelligent: if a human believes the output of a machine to be human-generated, then that machine is said to be intelligent. Of course, the Turing Test was apropos to the 20th century – most of our modern AI systems, like ChatGPT, can now pass it rather seamlessly.


But it wasn’t until the late 90s that we humans began to believe in the intelligence of the artificial systems we had created. The first event that amazed the world came in 1997, when Deep Blue (IBM’s chess-playing supercomputer) beat world chess champion Garry Kasparov; then in 2011, IBM’s Watson computer system won ‘Jeopardy!’; and in 2016, DeepMind’s AlphaGo beat world champion Lee Sedol at the ancient Chinese game of Go. Humanity froze for a moment, as our intelligence – the very ingredient believed to make us superior – was matched by AI-based computer systems at specific tasks, i.e. at specific games.


In the midst of such rapid development and deployment of ever more affordable and capable AI systems, it is important that we enhance AI literacy and make it accessible to all our youth. AI researchers often refer to AI models as ‘black boxes’. AI literacy aims to Break Open Layers of Technology embedded within such ‘black boxes’ (something I like to refer to as ‘BOLT’), and ultimately see how AI models receive inputs, learn, and produce outputs. Such BOLTs are needed at scale to empower the next generation!
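To make ‘receive inputs, learn, and produce outputs’ concrete, here is a minimal sketch in Python – purely illustrative, a one-parameter toy rather than any particular production model. It receives numeric inputs, adjusts its single parameter from examples, and produces outputs:

```python
# A deliberately tiny "AI model" with one learnable parameter, w.
# It receives inputs (x), learns from examples (x, y), and produces outputs (w * x).

def predict(w, x):
    # Produce an output from an input.
    return w * x

def learn(examples, steps=200, lr=0.01):
    # Adjust w to shrink the gap between predictions and known answers.
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = predict(w, x) - y
            w -= lr * error * x  # gradient step for squared error
    return w

# The training data secretly follows the rule y = 3x.
w = learn([(1, 3), (2, 6), (3, 9)])
print(round(w, 2))            # ~3.0: the model has "learned" the rule
print(round(predict(w, 10)))  # ~30: an output for an unseen input
```

Real models have millions of such parameters, but the loop – input, error, adjustment, output – is the same idea AI literacy aims to expose.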


As a major AI enthusiast and AI literacy advocate, having completed my Master of Science focused on AI & Machine Learning at Oxford University, I founded ‘AiQ Lab’ in 2023 – a social enterprise focused on AI literacy. We are one of the region’s first providers of game-based workshops and camps focused on inclusive AI education for the next generation. Our ethos is to enhance AI literacy globally, with a focus on MENA, and our mission is to nurture the positive use cases of AI (i.e. ethical AI) to enhance human intelligence.


Having also served as an Oxford Mathematics Ambassador, I have led multiple workshops for K-12 students in the UK focused on critical thinking and mathematical problem-solving. As part of AiQ Lab, we are hosting an initial ‘Introduction to Artificial Intelligence for Students’ series at various schools across Egypt. We have designed a game-based workshop that reflects the 5 Big Ideas of AI, as popularised by the AI for K-12 initiative in the US (see graphic from AI4K12).


Public perception in Egypt is changing and evolving, as the younger generation embraces disruptive technologies. The old notion of AI systems as science fiction – “da fel aflam bas” in Arabic (“this is only in movies”) – no longer holds! The older generations are taking note that AI is a reality. I believe the extent to which AI will develop and progress is still underestimated.


Egypt published its National AI Strategy in 2021 to encourage AI research and the adoption of AI technologies. One of its key pillars is human capacity building, which focuses on raising general awareness of AI via formal education and training, including in schools and universities. Our very mission at AiQ Lab is highly intertwined with the latter: we believe we ought to start exposing primary and secondary school students to the very foundations of AI, so they develop an understanding of what AI is, its positive use cases and benefits, as well as its risks and limitations.


It is also vital that we understand the importance of responsible AI and safe machine learning – at the core of which lie algorithmic bias and the robustness of AI systems. Algorithmic bias refers to the biased predictions or outputs generated by AI models, including gender and racial bias and age discrimination. It usually arises because current AI models are fed a myriad of data that has historically been biased – feed BIAS IN, and we produce BIAS OUT!
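A toy example makes ‘BIAS IN, BIAS OUT’ tangible. The sketch below is hypothetical – invented numbers and a deliberately crude ‘model’ – but it shows how a system trained on historically skewed hiring decisions faithfully reproduces the skew:

```python
from collections import Counter

# Hypothetical historical hiring decisions, skewed against group "F".
history = ([("M", True)] * 80 + [("M", False)] * 20 +
           [("F", True)] * 20 + [("F", False)] * 80)

def train(data):
    # A crude stand-in for a real model: learn the most common outcome per group.
    per_group = {}
    for group, hired in data:
        per_group.setdefault(group, Counter())[hired] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in per_group.items()}

model = train(history)
print(model)  # {'M': True, 'F': False} - the bias in the data became the model's rule
```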


For example, if students in the MENA region relied on ChatGPT to describe the 10 most influential figures, they would mostly receive a list of white males. When I prompted the same question, I thankfully got one extremely influential female figure (Marie Curie). There is a lack of diversity in the output generated – which may render it biased. Our students would hear about neither Maryam Mirzakhani (the Iranian mathematician who received the prestigious Fields Medal, the mathematical equivalent of a Nobel Prize), nor Dr. Ahmed Zewail (the Egyptian chemist who won the 1999 Nobel Prize in Chemistry), nor the world-renowned and knighted Prof Sir Magdi Yacoub (Egyptian surgeon), nor Dr. Ali Moustafa Mosharafa (the theoretical physicist often referred to as the ‘Egyptian Einstein’). Dr. Mosharafa, a well-regarded friend of Albert Einstein, contributed to the development of quantum theory and the theory of relativity.


Students should also engage with ‘adversarial attacks’, in which AI models are ‘tricked’ into misclassifying an input – another example that highlights the effect of bias in training data. At the 2018 Computer Vision and Pattern Recognition Conference (CVPR), a paper was published showing how the computer vision in self-driving cars can be adversarially attacked – meaning the AI model is ‘tricked’ into not correctly recognising objects or street signs. The example in the paper used a ‘stop’ sign covered with graffiti (stickers) as the input (see Figure 2 of the ‘Robust Physical-World Attacks on Deep Learning Visual Classification’ paper). Because the AI model deployed in the car has largely been trained on images of clean stop signs (i.e. without graffiti), it fails to categorise the graffitied stop sign as a stop sign and instead outputs ‘Speed Limit 45’. As you can imagine, this is quite dangerous. One mitigation has been to train AI models on diverse data without skewness or bias.
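The stop-sign stickers are a physical-world attack; a classic digital analogue is the ‘fast gradient sign method’ (FGSM), which nudges every input feature a small step in the direction that most changes the model’s score. Here is a minimal sketch on a made-up linear classifier – all weights and inputs are invented for illustration, not taken from the paper:

```python
import numpy as np

# Toy linear classifier: a positive score means "stop sign", otherwise "speed limit".
w = np.array([1.0, -0.5, 0.8])  # pretend-trained weights
classify = lambda x: "stop sign" if w @ x > 0 else "speed limit"

x = np.array([0.5, 0.2, 0.3])   # a clean input, correctly classified
print(classify(x))               # -> "stop sign"

# FGSM-style perturbation: step each feature against the gradient of the score.
# For a linear model, the gradient of (w @ x) with respect to x is simply w.
eps = 0.6
x_adv = x - eps * np.sign(w)
print(classify(x_adv))           # -> "speed limit": a targeted tweak flips the label
```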


We also need to avoid anthropomorphising AI – that is, we should stop using human-like descriptions for the behaviour of AI tools. As Ben Garside (learning manager at the Raspberry Pi Foundation) notes, demystifying tech for the younger generation is crucial to empowering them to make informed choices about how they engage with it – and central to this is avoiding anthropomorphism, which hinders AI education. For example, he suggests that instead of saying an AI system embedded in a smart speaker ‘listens’ and ‘understands’, it is better to describe it as something that ‘receives inputs, processes data, and produces outputs’.
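Garside’s reframing can even be written down as a pipeline. The sketch below is a purely hypothetical stand-in – none of these functions belong to a real speech API – but it shows that the ‘smart’ speaker just receives an input, processes data, and produces an output:

```python
def transcribe(audio: str) -> str:
    # Receives inputs: in reality a speech-to-text model; here, a pass-through.
    return audio.lower()

def match_intent(text: str) -> str:
    # Processes data: map the text to the closest known command by keyword.
    commands = {"weather": "fetch_forecast", "music": "play_playlist"}
    return next((cmd for word, cmd in commands.items() if word in text), "unknown")

def respond(intent: str) -> str:
    # Produces outputs: a canned response per command - no 'understanding' anywhere.
    responses = {"fetch_forecast": "Sunny, 30 degrees.", "play_playlist": "Playing music."}
    return responses.get(intent, "Sorry, I cannot help with that.")

print(respond(match_intent(transcribe("What's the weather today?"))))  # Sunny, 30 degrees.
```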


AI ought to be used to enhance our intelligence, not replace it. For example, as mentioned, ChatGPT cannot yet replace a human in all aspects, especially when we ask more complex logic or problem-solving questions. I tried prompting ChatGPT with one of the American mathematician Martin Gardner’s tricky puzzles or catch problems, which are often solved via an “off-beat angle [or] catch element”. The question, taken from his book ‘Entertaining Mathematical Puzzles’, was: ‘Can you place ten lumps of sugar in three empty cups so that there is an odd number of lumps in each cup?’ Of course, at first glance, 10 cannot be divided into three odd numbers. As humans, however, we have a certain ability to think creatively about the “catch”: divide 10 into two odd numbers (say 7 and 3), then place the cup holding 3 lumps inside the remaining empty cup – that cup then also contains an odd number of lumps. ChatGPT repeatedly struggled with this puzzle and could not get the “trick”.
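The ‘catch’ can even be checked mechanically. In this small sketch (the cup labels and the nesting rule are just an illustrative model of the physical trick), cup B holds 3 lumps and sits inside the otherwise empty cup C, so C also ‘contains’ an odd number:

```python
direct = {"A": 7, "B": 3, "C": 0}  # lumps placed directly in each cup: 10 in total
inside = {"B": "C"}                # cup B is placed inside cup C

def lumps_in(cup):
    # A cup holds its own lumps plus those of any cup nested inside it.
    return direct[cup] + sum(lumps_in(c) for c, outer in inside.items() if outer == cup)

for cup in "ABC":
    print(cup, lumps_in(cup), "odd" if lumps_in(cup) % 2 else "even")
# A 7 odd / B 3 odd / C 3 odd
```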


As many believe we are on the cusp of developing Artificial General Intelligence (AGI) – a more general, stronger form of AI, in which systems are able to perform a multitude of tasks at the same level as humans – making AI safe is of imminent importance. One pillar of AI safety is democratising access to AI literacy – only then will we be able to implement the correct guardrails and create highly ethical AI systems!


Now, reflecting on the rapid progress of AI in recent years … while remembering our classroom of 10 jolly five-year-olds, 6 or 7 of whom will need to learn the skills for a job that does not yet exist, but that will likely be powered by AI technologies …


We can collectively agree that we ought to make AI literacy accessible to all – and as soon as possible. As described earlier, AI literacy is about educating the next generation about AI – understanding AI systems and how they work! Democratising AI literacy across the MENA and Africa region is key – and that starts with rethinking computing education, developing frameworks to engage the minds of our next generation of leaders with computing prowess, and lowering the technological barriers.


AI is not magic – it is built on the foundations of mathematical logic and code! In a nutshell, AI is for everyone. 


References:


Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., & Song, D. (2018). Robust Physical-World Attacks on Deep Learning Visual Classification. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1625–1634. doi:10.1109/CVPR.2018.00175



Bio

Marina Simonian is a major AI enthusiast and AI literacy advocate, and an experienced investment professional with 7+ years of experience across private equity, investment management, and investment banking. She recently completed her Master of Science focused on AI & Machine Learning at Oxford University, where she also served as an Oxford Mathematics Ambassador and an Oxford Women in Business (OxWIB) Ambassador. Marina holds the Student Leadership Accreditation (Gold Award) from the SSAT in the UK, and is certified in Machine Learning Safety by the Center for AI Safety in the US.



