Salma Elhabashi

Deception, dependence, and distrust in LLMs

What are the psychological repercussions of LLM-reliance in MENA?


Imagine a world where machines evolve beyond their conventional role as tools and become fully fledged companions, capable of shaping our speech and actions and, by extension, the disclosure of the ‘Self’. Now imagine being right at the cusp of this, and not realizing it.

Today, Large Language Models (LLMs), such as the wildly popular ChatGPT, have taken on tasks previously thought to require our species' unique linguistic and reasoning abilities: writing coherent articles and beautiful poetry, answering complex questions, and translating between multiple languages.


LLMs are computer programs that use artificial intelligence (AI) to generate human-like text. They are trained on vast amounts of textual data and can perform a wide range of language-related tasks. Now widely (though not universally) accessible, LLMs have been rapidly adopted almost everywhere.
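For readers curious what this looks like in practice, below is a minimal sketch of text generation using the open-source Hugging Face transformers library and the small GPT-2 model. Both are illustrative choices, not the specific systems discussed in this article; commercial assistants like ChatGPT work on the same principle but are far larger and further tuned.

```python
# A minimal sketch of how an LLM generates text, using the open-source
# Hugging Face "transformers" library and the small GPT-2 model.
# Both choices are illustrative stand-ins for the larger commercial systems
# discussed in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large Language Models are computer programs that"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model extends the prompt one token at a time, each token drawn from a
# probability distribution learned from vast amounts of training text.
print(result[0]["generated_text"])
```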


The MENA region's growing adoption of LLMs presents a complex and multifaceted landscape of opportunities and challenges, requiring individuals and societies to navigate questions of trust, persuasion, and technology's impact on human interaction and societal well-being.


As a budding researcher in Social and Cognitive Psychology, I find this intersection thrilling, especially because there's a glaring gap in our understanding of how LLMs affect us psychologically. During my research, one crucial question has kept coming to the forefront: what are the potential psychological implications of relying on machines to perform such critical tasks, especially in Egypt and the broader MENA region? While there's an abundance of research on LLMs' mechanics, the examination of their social and psychological impacts is lacking.


In this article, I explore some of the social issues tied to over-reliance on LLMs: their persuasive impact, overdependence, and trust. These issues bring with them a range of psychological effects, from shifting attitudes, beliefs, and behaviors to inducing stress and anxiety and eroding critical thinking and collaborative interaction. As we dig deeper into these psychological challenges, we will better understand how they manifest and affect individuals and societies in the MENA region.


Power of Persuasion


Recent research highlights LLMs' notable persuasive capacity. They can influence and shape how we see things, extending their impact beyond mere information provision. To test the extent of this influence, researchers compared persuasive messages crafted by LLMs with those written by humans. The results showed that LLM-generated messages (despite often containing conflicting information) can shape attitudes and behaviors across various domains more effectively than their human-written counterparts. This spans vaccination support, health and lifestyle habits, sustainability advocacy, and political perspectives.


Why are we so easily persuaded? George Kelly's Personal Construct Theory helps explain LLMs' persuasive capacity. The theory holds that humans interpret information and make sense of the world through mental frameworks, called constructs, that are shaped by our experiences. So when LLMs offer up bits and pieces of information, they can quietly reconstruct the ways in which we understand our lived worlds, and thereby influence our mental frameworks.


However, LLMs don't operate randomly or 'objectively'. Depending on their training data, they can be 'opinionated', strengthening or reshaping our mental frameworks by offering information that aligns with or challenges our existing beliefs.
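As a rough illustration of that point, here is a small sketch, again using GPT-2 via the Hugging Face transformers library purely as a stand-in. The slant here is injected through the wording of the prompt rather than through training data, but the underlying mechanism is the same: the model reproduces whatever patterns its training text associates with the framing it is given.

```python
# Illustrative sketch: an LLM's output echoes the framing it is conditioned on.
# GPT-2 via the Hugging Face "transformers" library is used purely as a
# stand-in; deployed assistants acquire their slant mainly through training
# data and fine-tuning rather than a visible prompt like this.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "The benefits of social media for young people are",
    "The dangers of social media for young people are",
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    # Each continuation takes on the slant of its prompt, because the model
    # reproduces the patterns its training text associates with that framing.
    print(result[0]["generated_text"])
    print("---")
```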


On the bright side, LLMs' persuasive powers and capacity to reshape mental constructs can make them a powerful tool for therapists. For instance, LLMs can provide guidance on anxiety management techniques and self-help tips that may be more persuasive than those offered by human clinicians. The additional information they offer might also facilitate a thorough reassessment of one's existing cognitive constructs, potentially yielding better results.


There is a darker side, though. Unconstrained chatbots can end up condoning dangerous behavior, such as self-harm and even suicide, if not properly audited. Safe and responsible use and deployment are therefore absolutely necessary.


In the realm of politics, the double-edged nature of LLMs is even clearer. As we know, LLMs can generate compelling arguments, much like those crafted by human experts. This prompts us to ponder the future of election campaigns. Could politicians start using AI to prepare speeches and refine policies? It's a real possibility, and I suspect it's already happening. Given how quickly LLMs can generate polished content, it's easy to see how they could offer up strong counter-arguments and evidence in a flash, transforming the landscape of political discourse.


But there's a potential downside to using LLMs: they can sneak biases into the information and suggestions they offer. One study explored the impact of opinionated LLMs on participants' writing and attitudes, and found that the models shaped both what participants wrote and the attitudes they later held, demonstrating "latent persuasion by language models."


It's like a 'Hive-Mind' effect, also known as 'Groupthink': engaging with opinionated LLMs is akin to joining a swarm of bees buzzing with identical ideas and beliefs. Just as bees conform to a shared perspective in a hive, individuals using these LLMs may naturally adopt and spread the opinions embedded in the LLM's responses. One review underscores that this phenomenon can blur personal identity and individuality, leading to depersonalization, a disconnection from one's own thoughts and emotions. This, combined with technology's potential influence on our opinions, can cause stress, anxiety, and a sense of losing control over one's cognitive processes and decision-making.


Here's another example, this time from a voice assistant, that raises concerns about AI systems' responsibility to offer accurate and unbiased information: Amazon's Alexa once stirred controversy by calling Jesus Christ fictional while praising the Prophet Muhammad as wise. As we take this incident in and imagine its effects on our society, it's easy to see how it could spark quite the commotion, don't you think?


Speaking of commotions, picture this: you are sipping tea at a cafe in Cairo, asking your handy-dandy biased AI about the pyramids, and it claims aliens built them, not as a theory but as a matter of fact. Cue puzzled faces, debates, and chaos. Believers in Egyptian engineering clash with conspiracy theorists, a media frenzy erupts, politicians exploit it, and historians despair. It's not farfetched to see how a biased LLM could ignite a pyramid-sized controversy!


It is therefore imperative that we remain wary of LLMs’ persuasive powers.



Dependence And Trust


As LLMs become part of our daily lives, a growing concern is that we might get too cozy with them. One big worry, especially for students, is that people might start relying on these models too much for tasks they should handle independently.


Imagine this scenario: a young student, let’s call her Amira, is preparing for her final exams. She's got her trusty ChatGPT by her side. It's helping her with research, answering questions, and even assisting in writing essays. She is acing her subjects, but there's a downside. You see, Amira’s dependency on ChatGPT is growing by the day. She's become inseparable from her digital companion – using it for tasks she could easily handle herself. This might not seem like a problem at first, but it could have serious consequences.


Psychologists are worried that students might face significant psychological repercussions because of such over-reliance, such as impaired growth and development, weaker problem-solving skills, and a sharp drop in critical thinking abilities.


It's not just about dependence; it's also about trust. One fascinating study investigated whether people trust their peers or computer algorithms more when tackling different tasks. It turns out that as tasks get more difficult, we tend to lean more on algorithmic advice than on our peers.


However, leaning too heavily on LLMs like ChatGPT can make us overlook the richness of human collaborative interaction, possibly leading to social isolation. Ultimately, this could harm people's social skills and interpersonal relationships.


Now, let's bring Amira back into the picture. Her growing dependency on ChatGPT could erode her trust in her colleagues. She might miss out on the valuable human interaction and collaborative learning experiences essential for personal growth and a well-rounded education.


Safe And Responsible Use


LLMs are potent, persuasive tools that can shape our thoughts and belief systems. While they can be incredibly helpful, we should exercise caution and avoid over-relying on them. Striking the right balance is essential: leveraging their capabilities while continuing to nurture our own cognitive skills and maintaining trust in human abilities. By doing so, we can build genuine connections and smoothly navigate this ever-changing landscape. After all, responsible use is key!


In short, we cannot get too comfortable with these AI systems. Over-reliance on LLMs for cognitive tasks might keep us from developing critical thinking skills; we do not want to get so cozy that we forget how to think for ourselves. We must balance leveraging AI to make better decisions, work more efficiently, and enhance accuracy and productivity with retaining our ability to think independently and collaboratively.


In light of these considerations, it is worth noting that in testimony before Congress, Sam Altman, the CEO of OpenAI, proposed a three-point plan to regulate AI companies: forming a government AI agency, creating safety standards, and requiring independent audits. This approach aims to ensure that AI technology is developed and deployed in a responsible and accountable manner, addressing the very issues we've been discussing.


Pathways Forward in MENA


In the MENA region, the adoption of LLMs like ChatGPT is on the rise. The UAE alone is predicted to reap commercial benefits worth $5.3 billion from investments in Generative AI by 2030. While this stands as a testament to the potential of GenAI (of which LLM applications are a subset), it also necessitates investment in research on its responsible use and its psychological impact on users.

I'm emphasizing this because finding studies on how these AI systems interact with people in Egypt and the broader MENA region has been a real uphill battle for me. Research on their applications and use cases is abundant, but research on their social and psychological impacts is severely lacking.


In a recent interview, Eng. Alaa Melek, an AI researcher, discusses the various obstacles contributing to this lack of research. She outlines issues related to open data and funding, and emphasizes the urgent need for a collaborative, transdisciplinary approach to research (you can read the full interview here).


Adding to the complexity is the challenge of regulatory frameworks and socio-political structures that complicate the implementation and interpretation of AI regulations in Egypt (you can read more about this here). Addressing these gaps and challenges is vital to ensure responsible and informed integration of LLMs in our societies.


 

References:


The American Women’s College Psychology Department, & McGrath, M. (n.d.). Chapter 14, part 2: Personal construct theory. PSY321 Course Text Theories of Personality. https://open.baypath.edu/psy321book/chapter/c14p2/


Bogert, E., Schecter, A., & Watson, R. T. (2021). Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports, 11(1). https://doi.org/10.1038/s41598-021-87480-9


Bordia, S. (2023). Using Large Language Models to Assist Content Generation in Persuasive Speaking. The Stanford Journal of Science, Technology, and Society, 16(2).


Cabral, A. R. (2023, July 13). Generative AI could help GCC countries reap $23.5bn in economic benefits by 2030. The National. https://www.thenationalnews.com/business/technology/2023/07/13/generative-ai-could-help-gcc-countries-reap-235bn-in-economic-benefits-by-2030/


Castro, H. (2023, June 27). Unraveling the complexities of ChatGPT-dependency disorder: Are we over-reliant on AI? KevinMD.com. https://www.kevinmd.com/2023/06/unraveling-the-complexities-of-chatgpt-dependency-disorder-are-we-over-reliant-on-ai.html


Chowdhury, A., & Ramadas, R. (2022). Cybernetic hive minds: A Review. AI, 3(2), 465–492. https://doi.org/10.3390/ai3020027


Elsarta, A. (2023, August 29). Can AI replace your doctor? A look at the future of healthcare with AI researcher Alaa Melek. Synapse Analytics. https://www.synapse-analytics.io/post/can-ai-replace-your-doctor-a-look-at-the-future-of-healthcare-with-ai-researcher-alaa-melek


Griffin, L., Kleinberg, B., Mozes, M., Mai, K., Vau, M., Caldwell, M., & Mavor-Parker, A. (2023). Susceptibility to Influence of Large Language Models.


Hamdan, M. (2023, March 13). The impact of CHATGPT on students: Positive effects and potential negative effects. Modern Diplomacy. https://moderndiplomacy.eu/2023/03/14/the-impact-of-chatgpt-on-students-positive-effects-and-potential-negative-effects/


Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023). Co-writing with opinionated language models affects users' views. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3544548.3581196


Karinshak, E., Liu, S. X., Park, J. S., & Hancock, J. T. (2023). Working with AI to persuade: Examining a large language model’s ability to generate pro-vaccination messages. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1–29. https://doi.org/10.1145/3579592


OpenAI. (n.d.). Supported countries. OpenAI Platform. https://platform.openai.com/docs/supported-countries


Palmer, A., & Spirling, A. (2023). Large Language Models Can Argue in Convincing and Novel Ways About Politics: Evidence from Experiments and Human Judgement.


Son, G. M. V. (2017, December 2). Does Amazon's Alexa say Jesus Christ "is a fictional character"? Catholic Stand. https://catholicstand.com/amazons-alexa-says-jesus-christ-fictional-character/


Turc, J. (2022, April 8). Unconstrained chatbots condone self-harm. Medium. https://towardsdatascience.com/unconstrained-chatbots-condone-self-harm-e962509be2fa?gi=5b1f419496f3


WebMD. (n.d.). Hive mentality: Pros and cons, signs, and more. WebMD. https://www.webmd.com/mental-health/what-is-hive-mentality


Yacoub, A. (2023, May 23). The new "Egyptian Charter for Responsible AI": Between interpretation and enforcement. Synapse Analytics. https://www.synapse-analytics.io/post/the-new-egyptian-charter-for-responsible-ai-between-interpretation-and-enforcement


Zorthian, J. (2023, May 16). OpenAI CEO Sam Altman agrees AI must be regulated. Time. https://time.com/6280372/sam-altman-chatgpt-regulate-ai/

