By Mohamed Ashraf

Will AI Take Your Job?

If you’re reading this, you are probably aware of the transformative nature of Artificial Intelligence (AI); you may even be wondering how redundant your job will become in the next 5 to 10 years. AI has had an impact on many industries, but its effect on the workplace is subtler and brings a more complex set of consequences.


Every general-purpose technology brings a wave of adoption that follows a familiar trend, and we are lucky enough to be witnessing this one from a new perspective. A 2020 World Economic Forum study estimated that by 2025, humans and machines will spend an equal amount of time on workplace tasks, displacing around 85 million jobs worldwide. That number may be daunting at first, but by the end of this article, chances are the feeling of dread will be gone.


Types of AI adopted in the workplace:


First, let’s unpack a few key terms. One of the main things that sets AI apart from past technologies is its ability to recognize patterns and relationships without being given explicit, hand-written rules. Using existing data, we can train an algorithm to surface more connections than we could recognize ourselves. This is a very high-level explanation of AI, but it will do for the purpose of this discussion.
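To make that concrete, here is a minimal sketch in Python (using scikit-learn and entirely made-up data; the feature names are illustrative, not taken from any real study) of a model discovering a relationship we never spelled out:

```python
# A minimal sketch: train a model on past records and let it find the
# relationship itself, rather than hand-coding rules. Data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Hypothetical records: hours of sleep and minutes of commute per day.
sleep = rng.uniform(4, 9, size=200)
commute = rng.uniform(10, 90, size=200)

# The hidden "real world" relationship the model has to discover:
productivity = 8 * sleep - 0.05 * commute + rng.normal(0, 2, size=200)

X = np.column_stack([sleep, commute])
model = LinearRegression().fit(X, productivity)

# The learned coefficients approximate the hidden relationship.
print(model.coef_)  # roughly [8, -0.05]
```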


Moving on to the workplace, the most common types of AI can be categorized by user intent: descriptive, predictive, and prescriptive algorithms. These categories are useful because they are agnostic to the type of data involved; they work for text, images, numbers, and anything in between.


Let’s start with descriptive analytics, generally the most basic use of AI. Here, algorithms identify relationships between certain data and an outcome. Examples include the correlation between how long you take to get ready for work and how productive you are that day, or an analysis of a tweet to assess the general emotions of its author.
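As a toy version of the first example, consider measuring how strongly prep time relates to productivity. The numbers below are invented purely for illustration:

```python
# Descriptive analytics in miniature: quantify the relationship between
# an input (minutes spent getting ready) and an outcome (tasks completed).
import numpy as np

minutes_getting_ready = np.array([20, 35, 15, 50, 40, 25, 60, 30])
tasks_completed       = np.array([ 9,  7, 10,  5,  6,  8,  4,  8])

r = np.corrcoef(minutes_getting_ready, tasks_completed)[0, 1]
print(f"correlation: {r:.2f}")  # a strong negative value for this toy data
```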


Now suppose we take that same tweet, compile it with all the other tweets from that user, and compare them against the price of a certain stock. A general model can be trained and used to predict the next, most probable stock price. We have now left descriptive analytics and moved on to predictive analytics, where an algorithm uses data to forecast what will happen next.
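A hedged sketch of that idea, assuming we already have a daily sentiment score (here simply invented) and using a plain linear model, might look like this:

```python
# Predictive analytics sketch: use aggregate tweet sentiment (a made-up
# score between -1 and 1) to forecast the next price move.
# Purely illustrative; not a trading strategy.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
sentiment = rng.uniform(-1, 1, size=100)                 # daily sentiment scores
price_move = 2.0 * sentiment + rng.normal(0, 0.5, 100)   # hidden relationship

model = LinearRegression().fit(sentiment.reshape(-1, 1), price_move)

tomorrow_sentiment = np.array([[0.6]])        # today's tweets look upbeat
print(model.predict(tomorrow_sentiment))      # forecast the next price move
```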


Moving away from tweets and stocks, we mentioned that the data can also include pictures, which is where things get even more interesting. Healthcare is an industry benefitting from AI right now thanks to prescriptive analytics: images such as microscope slides, X-rays, and other scans can be fed into algorithms that not only classify potential future health conditions but also suggest possible courses of action. This is one of the fastest-growing uses of prescriptive analytics.
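The general pattern is a classifier whose output is mapped to a recommended action. The sketch below shows only the shape of that idea; `classify_scan`, the labels, and the actions are hypothetical placeholders, not a real diagnostic system:

```python
# Prescriptive analytics pattern: a classifier's output is mapped to a
# recommended course of action. `classify_scan` stands in for a trained
# image model; the labels and actions are hypothetical placeholders.
ACTIONS = {
    "normal":     "no follow-up needed",
    "ambiguous":  "schedule a second scan",
    "suspicious": "refer to a specialist",
}

def classify_scan(pixels) -> str:
    # Placeholder for a trained image model's prediction (e.g. a CNN).
    return "ambiguous"

def recommend(pixels) -> str:
    label = classify_scan(pixels)
    return f"{label}: {ACTIONS[label]}"

print(recommend(pixels=None))
```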


Limitations of AI:


Already, we can see that the impact of AI spans many professions and industries. This is not a new fact: a 2019 Gartner study found that the share of US companies deploying AI rose from 25% to 37% in that year alone, a significant growth in production-level adoption, and more than half of the high-level executives in a PwC report stated that implemented AI solutions had directly increased productivity.


The reports and studies all seem to tell the same story: AI solves problems, costs less, and doesn’t need any fringe benefits. To any employer, this is a dream scenario, but caution is always advised in situations such as these. The same PwC report asserted the continued need for explainability, provability, and transparency.


AI doesn’t change the fundamentals of computing; it uses them in a different way. When we say that a model captures the inherent relationships in the data it was trained on, we take for granted that this data will never fully capture the real world. Paradigm shifts will always happen, and AI, as it stands now, will usually fail to pick them up. Shifts in consumer spending, market trends, or policy will often demand retraining, fresh data, or a new algorithm altogether.
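One common mitigation, sketched below with synthetic data and an arbitrary threshold, is to monitor incoming data for distribution shift and flag the model for retraining when the world it was trained on no longer matches the one it sees:

```python
# Drift-monitoring sketch: compare the data the model was trained on with
# the data it now sees, and flag retraining when they diverge. Data and
# threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
training_spend = rng.normal(100, 15, size=5000)  # spending at training time
live_spend = rng.normal(130, 25, size=500)       # spending after a market shift

statistic, p_value = ks_2samp(training_spend, live_spend)
if p_value < 0.01:
    print("distribution shift detected -- retrain the model on fresh data")
```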


Another drawback of AI is the widespread view of it as a master key. The notion held by many that AI is a magical black box is not only unsustainable but dangerous. There are limits to what AI can do, and one of the greatest is embedded in the definition of AI established earlier.


The benefit of AI is its ability to identify patterns and relationships; understanding what those relationships represent, or why they exist at all, is beyond the capabilities of this technology. Algorithms are a set of instructions followed mechanically, and to assume that businesses could trust them blindly, without any contextual feedback, would be naive at best and fatal at worst.
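A tiny, contrived demonstration of why pattern-finding is not understanding: two unrelated series that merely share an upward trend will correlate almost perfectly, even though neither explains the other:

```python
# Correlation without comprehension: two series that only share a trend
# correlate near 1.0, yet neither causes the other. Data is invented.
import numpy as np

years = np.arange(2000, 2020)
ice_cream_sales = 50 + 3 * (years - 2000) + np.random.default_rng(2).normal(0, 2, 20)
software_jobs   = 10 + 5 * (years - 2000) + np.random.default_rng(3).normal(0, 3, 20)

print(np.corrcoef(ice_cream_sales, software_jobs)[0, 1])  # close to 1.0
```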


It really isn’t that bad:


Now that we have an idea of what AI is, we can form a clearer picture of its place in the workplace. We mentioned transformative technologies earlier and their impact on industries; the most recent example was the emergence of the internet and the digital technologies that followed. A 2011 McKinsey report surveying 4,800 SMEs found that for every job lost to technology-related efficiencies, 2.6 were created in the long term, which is what we would expect of any disruptive innovation.


AI has found a way to generate value out of a person’s data at scale, and there has been a non-technological paradigm shift towards a more user-focused sense of data agency, but that’s a topic for another article.


Going back to whether AI will displace your job: the majority of studies say it won’t. Ironically, it will most likely make you more valuable to the workplace. This idea is described thoroughly in a 2020 MIT research brief on the future of work and AI, which frames AI algorithms as a way to offload repetitive, low-level tasks while the cognitively demanding ones are left to humans, calling the combined workforce a “super mind”.


We mentioned earlier that AI is quite effective at recognizing patterns and relationships, but that covers only a fraction of a typical human job. Take a salesman, for example: an algorithm can analyze the tone, content, and speed of a cold call and optimize those parameters for the highest callback rate. This sounds great, but any salesman will tell you that interest and intent are two different things, and AI has a long way to go before it can close major deals; human interaction must step in to convert that initial success into a relationship.
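If you’re curious what such a call-scoring model might look like, here is a hedged sketch with invented features (words per minute and a tone score) and synthetic outcomes; it illustrates the technique, not anyone’s production system:

```python
# Sketch of the cold-call example: learn which combination of call
# features predicts a callback. Features and outcomes are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 300
speed = rng.uniform(100, 200, n)   # words per minute
warmth = rng.uniform(0, 1, n)      # tone score, e.g. from an audio model
# Hidden rule: warm, moderately paced calls get callbacks more often.
p = 1 / (1 + np.exp(-(4 * warmth - 0.02 * np.abs(speed - 150) - 1)))
callback = rng.random(n) < p

model = LogisticRegression().fit(np.column_stack([speed, warmth]), callback)
print(model.predict_proba([[150, 0.9]])[:, 1])  # probability of a callback
```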


How can we brace ourselves?


At the other end of the spectrum, the world may seem bleaker. One obvious area where AI has excelled is virtual assistants, which help customers with general day-to-day issues; another is the reading and processing of legal documents, previously done by paralegals but now executed in a fraction of the time by an NLP algorithm. There is no question that these are the efficiencies one thinks of when recounting the wonders of AI, but they have in fact displaced people from their jobs. That is the hard truth.


The question now is this: how much risk are jobs facing with the onset of AI and what can we do to get ahead of it?

The answer to the first part of that question is packaged quite well in this website, which breaks any profession down into useful metrics to estimate the chances of complete replacement. For most jobs that involve cognitive and creative tasks, the risk of complete replacement is overwhelmingly low. AI has given humanity a chance to excel at one of the major selling points of our species: unlike basic pattern recognition, we can think and form relationships, and offloading the monotony of the workplace frees our minds to do exactly that.


Humanity has the added benefit of retraining and relearning. The same WEF study that mentioned 85 million jobs potentially displaced also estimated that 97 million jobs would be created by the same technology. While 97 million jobs sounds great, there is a significant skill gap between the two sets of occupations, and closing that gap is a responsibility shared by both employer and employee.


For the employer, replacing jobs outright will only get you so far; investing in your employees is the most effective way to integrate AI into the workplace. The place of AI isn’t to devalue the work done by people, but rather to give them the time they need to do what really generates value for the organization: high-level, creative, and challenging tasks.


For the employees: if you can’t beat them, join them. AI is a fact, and it is here to stay. Your job is to adapt, integrate, and find out how you can leverage this tool to your advantage. Learning and training are great if they are offered to you, but the idea that AI is here to take your job is most likely untrue; you, as a human, are valuable and cannot be codified, yet.


Staying in the loop is a great way for everyone to get ahead of the wave. Resources such as LinkedIn Learning, Coursera, and others offer great introductory courses on AI and its implications. Learning from technologically literate colleagues who develop these algorithms at work is also an effective way to get hands-on experience and to find out where you, as an employee, can make a difference that an algorithm can’t.


Reading up on these organizational trends through news feeds, articles, or research papers is a great way to re-skill, empower yourself, and remove the uncertainty of dealing with an unknown. The responsibility of understanding a technology this pervasive is shared by all of us, and this article is just one of many that aim to help demystify the changes that are about to happen.
