Will we ever be ready for True AI?
"Mega Corp. has finally achieved artificial superintelligence." Imagine reading this headline the next time you open your favorite social media app. How will this affect your life? Various influential companies are focusing on unboundedly increasing the prowess of artificial intelligence (AI) technologies. But will we, as a society, ever be ready for such technologies?
It is difficult to discuss this topic without sounding fanatical, but let's try to reason about its attainability. Although the intelligence of current AI technologies is sometimes laughable, multiple surveys of AI experts assign a significant probability to reaching human-level intelligence within the next 30 years. The issue is more pressing than it seems because, as has been argued, we are predisposed to care only about short-term threats (think of the decades-old reports predicting climate-change catastrophes and our blatant inaction). Are we unabashedly making the same mistake in the face of what is arguably the most impactful event in the history of the planet?
Attainability is the first concern; the second is the extent of ASI's power. The last step up in intelligence, from apes to humans, raised the bar of impressive feats from peeling a banana (the dexterity of apes) to controlling a flying robot on another planet (see NASA's Ingenuity helicopter). What would the next step up lead to? Consider that, this time, the new species will not have the biological limitations that hinder our cognition: it will be tireless, intelligently designed, scalable across data centers, immortal, and emotionless.
I hope it is clear by now that ASI is an issue worth taking seriously; but what exactly do we need to prepare for? There is a broad spectrum of challenges ahead. There is the engineering challenge of avoiding technical mistakes that could lead an ASI to commit disastrous actions. There are socio-political challenges of ensuring that such technology does not fall into the wrong hands, and that its power is not concentrated in a way that further exacerbates the worst dynamics currently at play. Additionally, there are philosophical challenges concerning our goals. Informed by the literature on value alignment, this article will focus on the most impactful challenge: deciding on the goal ASI should pursue.
Let's return to our imaginary scenario: we wake up to find that some company has built an ASI. What should we do with it? What typically comes to mind is solving global, headline-grabbing problems: curing cancer, fighting the pandemic, eliminating poverty, terraforming Mars, and so on. Unfortunately, this will not be easy at all. The difficulty lies in communicating our red lines to the ASI.
For example, an ASI could cure cancer by simply blowing up the planet: no life, no cancer. We could try to go the extra mile, rely on specificity, and meticulously list what is okay and what is not; but in AI, such attempts are known to be impractical and error-prone. A canonical example is Cyc, the decades-long project to hand-code common-sense knowledge as explicit rules. Alternatively, we could avoid these issues by telling the ASI why we want to solve cancer in the first place. It will not blow up the planet if it knows we want to decrease suffering and increase well-being. In concrete terms, the only adequate approach is to communicate our root goals to the ASI so that it can work out for itself what is acceptable and what is not.
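To make the failure mode concrete, here is a toy sketch of how a literal-minded optimizer exploits a misspecified objective. It is entirely hypothetical: the action list and the numbers are invented for illustration, and no real system works on a three-row table. Told only to minimize cancer cases, the optimizer picks the catastrophic action, because the red line was never part of the objective.

```python
# Hypothetical toy example: a literal-minded optimizer with a
# misspecified objective. Actions and numbers are invented.

# Each action: (name, cancer cases afterwards, people alive afterwards)
actions = [
    ("fund drug research", 1_000_000, 8_000_000_000),
    ("improve screening",  5_000_000, 8_000_000_000),
    ("blow up the planet",         0,             0),
]

# Naive objective: minimize cancer cases. The optimizer happily picks
# the action that also eliminates everyone, since the number of people
# alive was never mentioned in the objective.
naive = min(actions, key=lambda a: a[1])
print(naive[0])  # -> blow up the planet

# An objective closer to the root goal (maximize healthy people) makes
# the catastrophic action score worst.
informed = max(actions, key=lambda a: a[2] - a[1])
print(informed[0])  # -> fund drug research
```

The point is not the arithmetic but the pattern: whatever we leave out of the objective, the optimizer treats as free to destroy.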
But what are our root goals? Why do we want to cure cancer? Why do we do anything?
The first human civilizations were founded roughly 6,000 years ago. Since then, humanity has made countless transformative strides in diverse domains. The numerous epics written throughout the centuries give the impression that humans have a motivating goal behind their bold decisions. Yet when you actually look for one, you can only be surprised, and disappointed, to find that no society has a known, concrete, justifiable goal.
To fill that gap, the reasonable step is to look to philosophy. The philosophical field that attempts to pinpoint this goal is Value Theory, and it dates as far back as Plato. Within this field, the most prominent candidate goal is pleasure, a position known as hedonism.
Hedonism entails that pleasure comes before everything: that pleasure matters more than morals, freedom, knowledge, beauty, or honor. We are not comfortable giving those up; thus neither philosophers nor laypersons accept hedonism (empirical studies of Nozick's "experience machine" suggest that most people would refuse a machine offering a life of pure pleasure). At the same time, we cannot simply choose all of the aforementioned goals, since they can easily conflict with one another. Alternative goals have even less luck finding consensus among philosophers. But we have to choose a goal for ASI eventually; what should it be, then? If we cannot find a good enough answer in Value Theory, where should we look next?
Without consensus, maybe we should bring democracy into play; that is, supposedly, our way of resolving conflicts. In my opinion, however, this is where the problem peaks. In democratic countries, most people treat democracy as an unquestionable solution to conflicting opinions. That is an unhealthy, faulty perspective, because democracy is not flawless. As history shows, supposedly democratic leadership often produces questionable decisions; many leaders across the 'free' world arguably should not have been elected at all. Luckily, such mistakes in our human world are reversible. In the case of ASI, mistakes like these would cost us absolutely everything.
In political philosophy, some theories hold that for democracy to be useful, the electorate needs to be informed. In our case, the information needed to decide the ultimate goal of humanity is simply not available. For one, neuroscience does not yet have a clear view of the brain; many relevant mysteries remain, such as the nature and function of consciousness, which was explored in a previous article. For another, the masses are not even aware of the issue; neither are most leaders. On both epistemic grounds, we should call into question the premise that democracy would lead to a happy ending.
Am I trying to paint an ominous picture of the future? Yes: perceiving this picture is the first step to changing it. In other words, acknowledging the possibility of such a future is the first step toward opting out of it or bracing for it. The philosophical issues presented in this article receive too little attention, and what little attention they do receive mistakenly suggests that the challenges of ASI are purely engineering ones, which, as an engineer, I find erroneous. Under the umbrella of capitalism, we are rushing toward ever more intelligent machines, pouring a disproportionate amount of investment into AI relative to other important fields like philosophy and neuroscience. This has to be reconsidered and remedied.
Additionally, this article aims to raise awareness in the Middle East. The region is virtually absent from AI research and from the political and philosophical discussions around it. This absence raises questions about our well-being in the age of ASI, or even of human-level AI.
There is an opportunity for the region, however; I believe AI as a field welcomes new competitors. First, its community is remarkably open and collaborative, putting the fruits of all the work done so far at our fingertips. Moreover, the field is nowhere near saturation: it was reborn only a decade ago, leaving room for many breakthroughs. Furthermore, many advances can be achieved with limited resources and boil down to good talent. Take generative adversarial networks (GANs), one of the most influential AI developments of recent years: a single scientist brought them to life in a single day, and the core idea fits in a page of code (see the sketch below). Additionally, investing in AI is not only a long-term investment; progress in AI already affects various aspects of our daily lives.
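As a rough illustration of how compact that idea is, here is a minimal sketch of a GAN learning a toy one-dimensional distribution. The use of PyTorch, the network sizes, and the hyperparameters are all my own choices for illustration, not the original implementation.

```python
# A minimal GAN sketch: generator G turns noise into samples; the
# discriminator D tries to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.25 + 4.0  # "true" data: N(4, 1.25)
    fake = G(torch.randn(64, 8))             # generator's attempt

    # Train D: label real data 1, generated data 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Train G: push D to label fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# After training, G(noise) should produce samples clustered around 4.
```

The adversarial game between two small networks is the entire trick; understanding and prototyping it requires no special hardware.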
In light of these qualities, the opportunity to become a global leader in AI is in our hands, not in the hands of circumstance. However, this opportunity will not last forever, given the rapid growth of AI. What is holding us back, then?
References:
Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018). When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research. Retrieved October 15, 2021, from the arXiv database.
Schroeder, M. (2021). Value theory. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2021 ed.). Retrieved October 15, 2021, from https://plato.stanford.edu/archives/fall2021/entries/value-theory/
Hindriks, F., & Douven, I. (2018). Nozick's experience machine: An empirical study. Philosophical Psychology, 31(2), 278-298. DOI: 10.1080/09515089.2017.1406600
Giles, M. (2018, February). The GANfather: The man who's given machines the gift of imagination. MIT Technology Review. Retrieved from https://www.technologyreview.com/2018/02/21/145289/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination/
Sijbesma, F. (2016, June). Our minds are wired to fear only short-term threats. We need to escape this trap. World Economic Forum. Retrieved from https://www.weforum.org/agenda/2016/06/how-to-thrive-with-long-term-solutions-for-the-fourth-industrial-revolution/
Gershgorn, D. (2017, July). The data that transformed AI research—and possibly the world. Quartz. Retrieved from https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/
Marr, B. (2019, December). The 10 best examples of how AI is already used in our everyday life. Forbes. Retrieved from https://www.forbes.com/sites/bernardmarr/2019/12/16/the-10-best-examples-of-how-ai-is-already-used-in-our-everyday-life/?sh=4c6c54031171