Ahmed Abaza

MLOps: Why is it the most important technology in the age of AI?

Updated: May 23, 2022

“Artificial Intelligence is the new electricity” - Andrew Ng

In the last few years, AI has been making its move from research and academic experimentation towards industrial application. Despite all the excitement for AI in industry, it has met a novel challenge: AI adoption.

Researchers can seamlessly, and often successfully, develop AI in self-contained environments, without relying much on chaotic sources that could affect their AI thesis. In industry, it's a totally different story: taken out into real, messy life, the AI system turns out to be brittle in the absence of a controlled environment, and it fails.


In industry, Gartner predicts that your AI investment will most likely fail, with 80% of AI algorithms never moving to production. This leaves the data scientists who built the AI, the business executives who invested in it, and the AI systems themselves frustrated by the lack of results. I will shortly explain how MLOps is the answer to this problem, but first let's break it down.


Why do models fail in production?


AI is basically programs written by data. The computer looks at the data, learns the different patterns, and is magically able to understand the ins and outs of a certain domain. With that in mind, you can imagine how difficult it is for AI to deduce patterns and optimize production in a real-world environment where multi-sourced data is sporadic, inconsistent, and noisy.


Not to mention that an AI application in the same manufacturing environment can be 5% AI and 95% a mixed dish of legacy software, a myriad of Excel sheets, and modern ERP systems, all glued together by some esoteric alchemy of software engineering that most people are oblivious to but thankful for, as long as all the systems stay on. As a result, you can't really blame the AI when it can't produce significant results.


Furthermore, many data scientists who build AI models are not necessarily software engineers. This of course makes it a challenge for them to deploy their models and build all the infrastructure needed to host them. They usually depend on their IT support team, who don't necessarily understand how the AI works. This creates a big diffusion of responsibility between the data scientist and IT for maintaining the AI in production, eventually leading to an AI support vacuum that ends up completely failing the project at hand.


Additionally, AI impact in industry rarely occurs without business decision makers and machines working together in harmony. Business users usually require easy-to-use interfaces to manage the AI and understand why it's predicting this or that (sometimes referred to as model explainability). Data scientists often struggle to provide an adequate interface through which the business user can gauge model performance live, see explanations of the model's output, and adjust control levers, such as the threshold for accepting or rejecting a loan application, so that the business can adapt the model to its operation.
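To make the idea of a control lever concrete, here is a minimal sketch of an adjustable approval threshold sitting on top of a model's predicted probability; the model, the two toy features, and the threshold values are illustrative assumptions, not part of any particular platform. The point is that the business user tunes the threshold without retraining or redeploying the model itself:

```python
# Minimal sketch of a business "control lever": an adjustable approval threshold
# applied to a model's predicted repayment probability. The model, the two toy
# features (income, debt ratio), and the threshold values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [income in 10k units, debt ratio], label 1 = repaid, 0 = defaulted.
X_train = np.array([[5.0, 0.2], [2.0, 0.8], [7.0, 0.1], [3.0, 0.6]])
y_train = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X_train, y_train)

def decide(applicant, approval_threshold=0.5):
    """Approve only if the predicted repayment probability clears the business-set threshold."""
    prob_repay = model.predict_proba([applicant])[0, 1]
    return ("approve" if prob_repay >= approval_threshold else "reject"), prob_repay

# The same model under two different risk appetites; no retraining or redeployment needed.
print(decide([4.0, 0.3], approval_threshold=0.5))
print(decide([4.0, 0.3], approval_threshold=0.9))
```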


All of this creates a crisis when it comes to AI business adoption...

Is there a way around it? Is it a bird, is it a plane? It’s MLOps.


So, what is MLOps?

Any answer to the question "How can we use this model?" can be considered MLOps. MLOps, or machine learning operations, is the practice of deploying and maintaining machine learning models in production reliably and efficiently. Curious about how it works?


At Synapse Analytics, our team has developed a conveniently named framework for using AI in business operations: 'USE AI'. (We really need to slow down on the engineering and focus on the creative side, I know!)


Here’s how USE AI works:


U - Usable: which means that the business users can easily use the AI in their daily decision making. This takes the form of well-designed interfaces that are able to provide the control levers and the necessary analysis for all the stakeholders working on the AI. Usually, these are:

  • The Data Scientist: Their role is to analyze, process, and model the data. They also need to build reproducible, production-ready experiments and be able to monitor and gauge their model's performance.

  • The IT/Software/Data Engineers: They build data environments for experimentation; monitor system load, API requests, and system access; provide the necessary resources for the data scientists; and continuously deliver the AI's predictions (a minimal serving sketch follows this list).

  • The Business User: The people who use the AI and work with it to make highly effective decisions on a daily basis.
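As a sketch of the delivery side mentioned in the engineers' bullet above, here is what a tiny prediction service might look like, assuming a hypothetical pre-trained scikit-learn model saved as loan_model.joblib and a made-up two-feature schema; this is an illustration of the pattern, not how any specific platform serves models:

```python
# serve.py: a minimal prediction endpoint wrapping a pre-trained model.
# The model file name and the Applicant schema are hypothetical.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("loan_model.joblib")  # assumed pre-trained scikit-learn classifier

class Applicant(BaseModel):
    income: float
    debt_ratio: float

@app.post("/predict")
def predict(applicant: Applicant):
    # predict_proba returns [[p_default, p_repay]]; expose the repayment probability
    prob_repay = model.predict_proba([[applicant.income, applicant.debt_ratio]])[0, 1]
    return {"repayment_probability": float(prob_repay)}

# Run locally with: uvicorn serve:app --reload
```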


S - Scalable: where the AI can be easily scaled to accommodate and adapt to changes in demand.

Usually in experimentation the data is controlled, so there is rarely data drift (where the input data distribution changes over time) or concept drift (where the relationship between the inputs and the variable we are trying to predict changes). Furthermore, a base model might be very accurate on a small data sample, but as we scale the algorithm to the whole dataset, the accuracy degrades and the model becomes unusable.
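As a rough illustration of what a data-drift check can look like in practice, here is a sketch that compares the distribution of one feature at training time against live traffic using a two-sample Kolmogorov-Smirnov test; the synthetic data, window size, and p-value cutoff are all assumptions made for the example:

```python
# Minimal sketch of a data-drift check on a single feature using a
# two-sample Kolmogorov-Smirnov test. Data and thresholds are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # distribution seen at training time
live_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)      # distribution seen in production

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"Likely data drift (KS statistic={statistic:.3f}); consider retraining or alerting.")
else:
    print("No significant drift detected on this feature.")
```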



E - Explainable: this is a bit tricky, because sometimes it's a challenge to explain an AI output, but a good model should give a hint about the whys of its predictions, even if it's not the best model in terms of accuracy.


AI adoption in business is a human-machine collaborative effort. Humans usually don't trust what they can't explain. In a business setting, where there are direct reports and accountability for decisions, the user usually requires an explanation of the output before making the decision. For example, a credit officer needs to know why a certain individual was granted a loan by the machine; this puts their heart at ease in approving the AI's prediction. Sometimes it's not the most accurate model that should be deployed, but the most explainable. Hence, some data scientists use challenger models: models that are admittedly less accurate, but highly explainable.
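To illustrate the challenger-model idea, here is a sketch that trains a typically more accurate but opaque gradient-boosting "champion" alongside a logistic-regression "challenger" whose coefficients are directly readable; the dataset is just a stand-in bundled with scikit-learn, and the comparison is illustrative rather than a recipe:

```python
# Minimal sketch of a champion/challenger pair: an opaque but usually stronger
# gradient-boosting model vs. an explainable logistic regression whose
# coefficients hint at why a prediction went up or down.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

champion = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
challenger = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_train, y_train)

print("champion accuracy:  ", champion.score(X_test, y_test))
print("challenger accuracy:", challenger.score(X_test, y_test))

# The challenger's coefficients give the business user a direct hint about
# which features pushed a prediction up or down.
coefs = challenger.named_steps["logisticregression"].coef_[0]
for name, weight in sorted(zip(X.columns, coefs), key=lambda t: abs(t[1]), reverse=True)[:5]:
    print(f"{name}: {weight:+.2f}")
```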



A - Adaptable: where the AI in production should be continuously retrained and monitored for data drift (where the data distribution changes over time) and concept drift (where the relationship between the inputs and the variable we are trying to predict changes).


Data scientists should be able to have their models adapt to changing trends. Usually this is a tedious, time-consuming task: they have to constantly test the model, take it out of production, retrain it, and then redeploy it, all manually. The model has to adapt quickly and always be monitored for errors and performance, especially in time-sensitive applications.
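Here is a sketch of what automating that loop could look like: check each monitored feature for drift, and only retrain and redeploy when something has actually shifted. The drift test, the deploy callback, and the assumption that reference and live data arrive as pandas DataFrames are illustrative choices, not a prescription:

```python
# Minimal sketch of a drift-triggered retrain-and-redeploy step. Assumes X_ref and
# X_live are pandas DataFrames with the same columns; deploy() is a hypothetical
# callback that pushes the new model version behind the serving API.
from scipy.stats import ks_2samp
from sklearn.base import clone

def monitor_and_retrain(model, X_ref, X_live, y_live, deploy, p_threshold=0.01):
    """Retrain and redeploy only if at least one monitored feature has drifted."""
    drifted = [
        col for col in X_ref.columns
        if ks_2samp(X_ref[col], X_live[col]).pvalue < p_threshold
    ]
    if not drifted:
        print("No drift detected; keeping the current model in production.")
        return model
    print(f"Drift detected on {drifted}; retraining on freshly labelled data.")
    new_model = clone(model).fit(X_live, y_live)
    deploy(new_model)
    return new_model
```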



I - Impact: in business contexts, the AI is only as useful as the impact it creates.

Businesses invest in AI to grow, save costs, or provide a differentiated experience or product to their customers. This is why impact is one of the most important metrics to monitor. We usually gauge impact against seven main points:

  • Responsibility

  • Cash

  • Labor

  • Time

  • Profit / Cost

  • Growth

  • Utilization of Assets



In most business AI projects, significant impact in any of these domains will make the AI investment worthwhile. Our impact officers work very closely with our clients to consistently measure impact against the areas of focus and ensure that the AI is really transforming the business. Of course, for AI to be truly impactful one should always be careful not to infringe on customers' privacy or data rights. This is why responsibility is a major point when it comes to impact monitoring.


MLOps can stave off a third 'AI Winter'


According to Nick Bostrom, in his book Superintelligence, there were two AI winters, in the 80s and 90s. If a third AI winter is looming, it will be because of the lack of AI adoption in industry.


When it comes to AI in industry, deploying and operationalizing impactful AI is the whole point. If you are not one of the Googles, Facebooks, and Spotifys of the world, you are most probably working with dispersed data environments that are challenging to tame, which makes it hard to build effective AI models that scale and get adopted. With many stakeholders working together, each with their own agenda, a diffusion of responsibility in maintaining the AI occurs, killing any hope of AI success.


Without a clear way to measure the impact of the model, it’s very difficult to keep investing in AI and producing results out of it.


Here comes KONAN...


This is why we have built KONAN: an MLOps platform designed using the USE AI framework to enable organizations, businesses, and governments to adopt AI and bridge the gap between AI piloting and AI production.


While many platforms try to replace the data scientist, we believe that data science is an art that will only get better if we enable the artists to do their best work. Giving data scientists incredible powers to deploy, maintain, monitor, and scale their projects, while helping them with all the non-data-science tasks, ensures powerful data-centric deployments that create impact and produce significant results.


Our design helps navigate the new age of human-machine collaboration. If you are working with AI or planning to adopt it in your business, sign up for our MLOps platform KONAN and become one of our early adopters.


You can reach us on Facebook, LinkedIn, or Instagram.

