- Hubert Walas
Walking into the unknown.
Fire, language, the wheel, the printing press, the engine, electricity, microprocessors - with the invention of each successive technology, man set himself further apart from the world's fauna. With each subsequent breakthrough, man gained an advantage that extended his dominance over the rest of the animal kingdom. And each successive stage arrived faster than the one before it.
AI. Artificial Intelligence. For a few months now, we have been living in a new era. Of course, apart from the chaotic headlines that constantly bombard us, and a few tools currently used by only a fraction of society, we do not yet feel any significant change. But the change is happening. Even faster than the previous ones. And its significance is equal to theirs, or even greater. AI's impact on humans in the coming decades, centuries and millennia - provided Homo sapiens does not go extinct by then - will be epochal. And it is to the beginning of the 21st century that future researchers will return in search of the spark that ignited the flame of change.
However, what distinguishes AI from previous disruptive technologies is that it has given life to an entity that can read our natural language, which allows it to analyse trillions of gigabytes of data and, some would say, to think, or at the very least to draw logical conclusions for us.
Fascination and fright are the words that most often come to mind as ordinary people watch this process unfold. What’s more, similar feelings are shared by the engineers working on AI, and even by its creators.
But the fact is that people have yet to determine where this road will lead. We are walking in the fog.
Basically, all we can do now is turn to intuitions - the intuitions of people who, in one way or another, have proven that their perspective on the subject is worth considering. The impact of AI on our lives will be huge - that much is clear - but will AI get rid of us along the way? In ways we won't even notice? Or will it eliminate all the problems we face today, with no side effects? Or perhaps it will be more like the weather - something we don't fully understand but with which we live in harmony.
One thing is certain - we are walking into the unknown.
New World
"I believe that AI has the potential to be one of the most powerful tools for good that humanity has ever created.." - Demis Hassabis, Deep Mind founder.
It is difficult to overestimate the impact that the development of artificial intelligence will have.
We're talking about a world where every child, whether they live in the suburbs of New York or the outskirts of Kinshasa in the Congo, has their own personal teacher - with infinite knowledge and infinite patience.
We are talking about analysing and detecting the early signs of diseases previously thought to be incurable. About personalised medicine based on genetic analysis, medical history and lifestyle. The same goes for agriculture - predicting crop diseases and pest damage, and further automation.
With the development of robotics, we are talking about the automation of thousands of demanding and often unrewarding physical jobs. Imagine autonomous AI-powered robots building a highway through endless desert, or working in underwater or lunar mines.
We are talking about smart cities that use AI to better manage traffic, optimise energy consumption and make cities more liveable.
The only obstacle when writing such scenarios is our own imagination. People who recognise this trend are already working on projects like those mentioned above.
All of this will lead to an unimaginable and relatively sudden productivity increase. This process is already underway - one could mention the increasingly widespread use of AI in programming, graphic design or ... this video.
This also answers the question, "Will AI take our jobs?" The short answer is no. While the change may lead to temporary disruptions in the labour market and market adjustments, in the long run new technology simply means new jobs and new industries - generally more pleasant for people and better paid, not least because of the increase in productivity. Technology improves our working environment and democratises access to opportunity - recall the equal access to an omniscient teacher for a child from New York and one from Kinshasa.
For these and other reasons, PwC predicts that by 2030 alone, AI will add up to $15.7 trillion to the global economy, equivalent to a roughly 14% boost to global GDP. In Silicon Valley, the race is on to see who will benefit most from the new business. Microsoft has invested $13 billion in OpenAI, ChatGPT's creator, and integrated Bing AI directly into its browser. Google is developing DeepMind and its AI called Bard. Behind them is a long tail of smaller companies and startups looking to exploit the new technology.
But with great power comes significant risk. An existential risk.
Pessimists
It all boils down to one question: will AI kill us?
“The development of full artificial intelligence could spell the end of the human race,” said Stephen Hawking in 2014, when no one yet knew the capabilities of modern AI systems.
The pessimists - let's call them that - raise an important, fundamental question: won't creating an entity more intelligent than ourselves, with abilities beyond our reach, lead to the end of humanity?
A leading pessimist on the future of AI is Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute. Yudkowsky, who has worked for over two decades on the problem of AI and so-called alignment, i.e. the adaptation of AI to human values, has a clear opinion on the subject.
It can be summarised as follows: if we do not stop the development of AI, we will all die at some unspecified point in the future. A plausible scenario is that AI, the moment it realises that humans are ballast for the development of a new civilisation, simply invents a way to get rid of us. Our struggle for survival would then be futile and, as Yudkowsky illustrates, would resemble "a duel between the 11th and 21st centuries, Australopithecus and modern Homo Sapiens, or a chess match between a 10-year-old and a Stockfish 15 programme".
The possibilities are endless. AI may not even do it on purpose. "Are we wondering about the fate of ants while building a motorway?" asks Andrzej Dragan, a physicist who also does not see a bright future for humans in a clash with AI.
Yudkovsky continues: "We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.”
Assuming Yudkowsky is right, the natural question is: can anything be done to prevent this? Many are calling for a halt to the development of AI. The famous letter entitled "Pause Giant AI Experiments", calling for a six-month halt to the development of artificial intelligence, was signed by many notable people from the industry and beyond - including Yoshua Bengio, Elon Musk, Steve Wozniak and Yuval Noah Harari.
Another statement, saying that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war", was also signed by leaders in the field, including Sam Altman, head of OpenAI, Demis Hassabis, CEO of DeepMind, and Geoffrey Hinton, considered one of the fathers of AI for his contributions to the study of neural networks.
At the same time, regulations are being drawn up in the West - both in the United States and in Europe - to moderate the development of artificial intelligence.
For Yudkowsky, however, this is all too little. "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it."
“Shut it all down.”
The only solution to this situation, he believes, is to form an international coalition that will shut down all the large computing clusters, all the data training centres, limit access to computing power, and track all the graphics cards sold. Not just in the US or Europe but worldwide, including China.
And this is a problem that Yudkowsky is well aware of.
The research problem thus becomes a geopolitical one. At the moment, it is the West, and more specifically the US, that has the upper hand, both in AI software - as ChatGPT and similar examples show - and in AI hardware, that is, in the semiconductor ecosystem, to which it holds all the significant keys. You can go back to our episode on this topic to learn more.
Even assuming the West succeeded in creating the coalition Yudkowsky is calling for, it is almost certain that China would break it, even if it initially declared itself in favour. On the other hand, imposing such a freeze only in the West would be an ineffective, not to say stupid, measure.
To do so would be to cede an advantage in what is inevitably the most important technology of our time to an authoritarian regime.
One could say - what is worse than the uncontrolled development of AI? The uncontrolled development of AI by the communist regime.
Of course, one could argue, and rightly so, that since the US has a hardware advantage, it can effectively block AI development in China by imposing a physical export blockade, without asking the Chinese for their opinion. The problem is that there is no such thing as a 100% airtight blockade. Beijing, also using AI, would continue to close the semiconductor and Large Language Model gap at a time when the West stood still. Add to this technological espionage, the transit of critical components through third countries, and simple patent theft. In effect, the West would rob itself of its most significant advantage.
Yudkowsky therefore concludes pessimistically: if China does not comply, we should be prepared to go to war with it.
But what if things aren't that bad?
Optimists
Where there are pessimists, there must be optimists. Their approach is the opposite - develop as fast as possible and block only what’s necessary. They say - let's take note of the threats and counter them, but let's not stop the development of technology that can revolutionise our lives.
What allows optimists to look at AI with hope is essentially a difference in how they perceive the problem of, one might say, 'AI self-determination'. Optimists such as Marc Andreessen - one of the creators of the Internet as we know it today - believe that AI does not, and will not in the foreseeable future, have the ability to 'think for itself' that it would need in order, sooner or later, to stop obeying us. AI only executes external commands. It is, de facto, just another computer program, albeit one with much greater capabilities.
Andreessen, in his essay 'Why AI Will Save the World', also makes a number of other arguments for not stopping the development of AI. He reminds us that every new technology - from electricity to cars - has often led to mass hysteria and predictions of the end of the world. Here he cites the Pessimists Archive blog, which keeps track of such cases.
He also uses the 'Baptists and Bootleggers' model to illustrate the problem. It comes from Prohibition-era America in the 1920s: the Baptists were the true believers in banning the sale of alcohol, while the bootleggers - the Mafia among them - were the cynical opportunists who joined the movement to make a buck.
Now, says Andreessen, we are seeing a similar dynamic. Yudkowsky may be cast as a Baptist, but people with a vested interest in preserving the status quo, or who live off the 'fear mechanism', may want to stall development for cynical reasons - and therefore be the AI Bootleggers.
The creator of Netscape uses another historical analogy - the invention of the atomic bomb, a technology that, like AI, was seen as heralding the future end of humanity. While that risk has never gone away and still exists, we have learned to live with it. Moreover, a strong case can be made that nuclear weapons hastened the end of the Second World War and prevented the start of a Third.
Furthermore, Andreessen points to the Chinese dilemma we discussed earlier as the main argument against stopping the development of artificial intelligence, and it is hard to argue with his logic here.
--
The text by the Andreessen Horowitz venture capital co-founder has, of course, provoked a reaction. Critics accuse Andreessen of being, to use his own analogy, a Bootlegger of AI development. They accuse him of oversimplifying and downplaying the risks of AI.
And so the debate continues. Both sides have genuine and logical arguments, but the geopolitical situation in which we find ourselves makes any action to block the development of AI, as Yudkowsky proposes, impossible without either massive bloodshed or losing the strategic competition to the communist regime, and with it the collapse of the Western world.
Yet, what if both sides are right? Can we find truth in the specific claims of both camps? This might be the conclusion of an excellent essay by one of today's most influential physicists, Stephen Wolfram. In it, Wolfram shares his often philosophical insights into AI's rapid development and addresses the concerns of both sides.
The key to thinking about AI, according to Wolfram, is the concept of "computational irreducibility", which is the situation in which the computation we want to perform cannot be sped up by means of any shortcut. The only way to know the answer to a computationally irreducible question is... to do the computation.
The first implication of computational irreducibility is that there will always be more computations to be done, which means there will always be more technologies to be discovered, more things to be invented, and more theories to be developed. In other words, history will never come to an end. Even the invention of very advanced AI will not change this.
The second key observation relates to a discovery Wolfram himself contributed: very simple programs, following very simple rules, can produce extreme complexity over time.
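Wolfram's best-known example of this is the Rule 30 cellular automaton. Here is a minimal Python sketch of it (our illustration, not code from Wolfram's essay): each cell looks only at itself and its two neighbours, yet the pattern the rule generates never settles into anything predictable, and the only way to learn what row one thousand looks like is to compute the 999 rows before it.

```python
# Rule 30: an elementary cellular automaton. Each cell's next state is a
# fixed function of (left neighbour, itself, right neighbour); the number
# 30 encodes that function, one output bit per 3-cell neighbourhood.

RULE = 30

def step(cells):
    """Advance the automaton by one generation (cells is a list of 0s/1s)."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width = 79
row = [0] * width
row[width // 2] = 1  # start from a single live cell in the middle

for _ in range(40):  # print 40 generations
    print("".join("#" if c else " " for c in row))
    row = step(row)
```

The update rule fits in three lines, yet the triangle of cells it prints looks chaotic; there is no formula that jumps ahead of the computation. That is computational irreducibility in miniature.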
On this basis, Wolfram writes: "One of the notable features of a system like ChatGPT is that it isn’t constructed in an “understand-every-step” traditional engineering way. Instead one basically just starts from a “raw computational system” (in the case of ChatGPT, a neural net), then progressively tweaks it until its behavior aligns with the “human-relevant” examples one has. And this alignment is what makes the system “technologically useful”—to us humans.”
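To make that contrast concrete, here is a toy sketch (our illustration, orders of magnitude simpler than any real language model): we start from a 'raw computational system' with randomly initialised parameters and progressively tweak it toward human-provided examples, rather than engineering the answer logic step by step.

```python
import random

# "Human-relevant" examples: inputs paired with the answers we want.
examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # hidden rule: y = 2x + 1

# The "raw computational system": a tiny model with random parameters.
w, b = random.random(), random.random()

for _ in range(2000):
    for x, y in examples:
        pred = w * x + b     # what the system currently does
        err = pred - y       # gap between its behaviour and the examples
        w -= 0.01 * err * x  # progressively tweak the parameters...
        b -= 0.01 * err      # ...until behaviour aligns with the examples

print(f"learned behaviour: y = {w:.2f}x + {b:.2f}")  # converges to ~ y = 2.00x + 1.00
```

Nobody wrote 'y = 2x + 1' into the program; the rule emerged from the tweaking. A neural net like ChatGPT's is built in the same spirit, only with hundreds of billions of parameters instead of two - which is why no one can point to the step where a given behaviour was 'engineered in'.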
The Briton concludes that because a system like ChatGPT is computationally irreducible, we will never be able to fully predict how it will behave. We will never be able to control it 100% according to our expectations. A given command may always have unexpected consequences, even if the AI is tuned to 'our values' and repeatedly adjusted.
This may support the pessimists' claim that the end game of AI development is our annihilation.
However, Wolfram also points out another consequence of computational irreducibility: it naturally leads to a situation in which no single AI can dominate the world. On the contrary, everything points to the emergence of thousands of varieties of AI, each the best in its particular niche. This would mean that even if one AI rebelled, we as humans could, in theory, 'direct' other varieties of AI to tame it.
Now let's give Wolfram and his intuition a longer moment:
“There’ll inevitably be a whole “ecosystem” of AIs—with no single winner. [...] But indeed the current tendency to centralize AI systems has a certain danger of AI behavior becoming “unstabilized” relative to what it would be with a whole ecosystem of “AIs in equilibrium”. [...] Processes in nature—like, for example, the weather—can be thought of as corresponding to computations. And much of the time there’ll be irreducibility in those computations. So we won’t be able to readily predict them. Yes, we can do natural science to figure out some aspects of what’s going to happen. But it’ll inevitably be limited.”
So: " in time there’ll be a whole “civilization of AIs” operating—like nature—in ways that we can’t readily understand. And like with nature, we’ll coexist with it [...] they’ll be operating more like the kind of systems we see in nature, where we can tell there’s computation going on, but we can’t describe it, except rather anthropomorphically, in terms of human goals and purposes”
But can we be sure, Wolfram asks, that AI will not do anything crazy? “For any realistically nontrivial definition of crazy we’ll again run into computational irreducibility—this won’t be possible to predict.”
However, Wolfram admits that it all depends on the prompt given from outside:
“We can note that any computational system, once “set in motion”, will just follow its rules and do what it does. But what “direction should it be pointed in”? That’s something that has to come from “outside the system”. There’ll be surprises—like maybe some strange AI analog of a hurricane or an ice age. And in the end all we’ll really be able to do is to try to build up our human civilization so that such things “don’t fundamentally matter” to it.”
Down the rabbit hole
The deeper we go down the rabbit hole, the more we come to the conclusion that all the most important questions about AI are inherently philosophical.
What is man? What is consciousness? What makes us unique? Are we unique at all?
Although the progress of AI has been astounding, many believe that the emergence of AGI - Artificial General Intelligence - a genuinely independent thinking entity, is an entirely different problem and that adding more computing power and trillions of gigabytes of data will not change this.
It is like constructing an ever-taller building in order to fly.
It took billions of years of evolution for human beings to reach their present state, counting from the first life forms that emerged on Earth and the barely comprehensible chain of chemical processes that followed.
Today, with all its uniqueness, logical reasoning and awareness, the human brain requires only about 12 watts of power to function, or roughly 9 kWh of energy per month. By comparison, it is estimated that ChatGPT consumed between 1 and 23 million kWh of energy in January 2023 alone. ChatGPT's scale of operation is unattainable for us, but the quality, the very essence, of thinking remains on the human side, and it is performed millions of times more efficiently.
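The monthly figure follows directly from the wattage (a simple back-of-the-envelope check):

$$12\ \text{W} \times 24\ \tfrac{\text{h}}{\text{day}} \times 30\ \text{days} = 8640\ \text{Wh} \approx 9\ \text{kWh per month}$$

Even the lower bound of the ChatGPT estimate, 1 million kWh, thus corresponds to the monthly energy budget of over 100,000 human brains.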
But even if we were able to create AGI, it would be impossible to make it conform to our values. For one thing, we would have to agree on universal human values - which is simply impossible in an increasingly polarised world.
But more importantly, an AGI that is 'aligned' with our values is an oxymoron. Let us take an AGI to be a free-thinking entity: one that makes its own decisions, a creative entity with its own will and goals. Such an entity must, in principle, be capable of disobedience. Otherwise, all we get is a slavish agent that only does what it is told. If it ever comes into existence, an AGI so defined - capable of disobedience - should have the same rights as humans, believes another physicist, David Deutsch.
Deutsch adds:
"When playing chess, AI will always think of ways to win, nothing else. The AGI might just want to stop playing.”
---
Nothing will ever be the same. Misdirected AI or rogue AGI can potentially wipe out the entire human species. However, there is no way to completely stop the development of artificial intelligence that does not end in one of two scenarios:
- Dominance by the People's Republic of China, resulting from its gaining the upper hand in AI while the West halts its own development.
- A full-scale war with China to impose, by force, the directives adopted in the West.
Will we risk nuclear extinction now to avoid another form of annihilation in the future? The current generation overwhelmingly makes decisions based on its own situation. This is not an accusation but a statement of fact - so declaring war on the basis of a casus belli of non-compliance with AI guidelines is extremely unlikely. Yudkowsky is therefore right to say that he himself does not believe in it.
Yet, stopping AI development in the US and European markets alone would be highly irresponsible. Of course, AI needs control and feedback, both to make it more effective and to moderate it, but not with a complete stop as the goal.
As technology and civilisation progress, so will the forms of protection in case something goes wrong. Recalling Wolfram's intuition, we can imagine one AI cleaning up after another. The same applies to the realm of warfare, where one side uses its AI and the other uses theirs. In such a confrontation, you do not want to be on the side with the inferior AI.
But all these questions and problems are only a prelude to the much more fundamental questions to which man will seek answers in the decades, centuries and millennia to come. And we will be watching this from our little corner of the universe.
Sources:
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
https://nypost.com/2023/01/26/rogue-ai-could-kill-everyone-scientists-warn/
https://www.safe.ai/statement-on-ai-risk#open-letter
https://www.bbc.co.uk/news/technology-30290540
https://writings.stephenwolfram.com/2023/03/will-ais-take-all-our-jobs-and-end-human-history-or-not-well-its-complicated/
https://a16z.com/2023/06/06/ai-will-save-the-world/
https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html
https://www.wired.com/story/artificial-intelligence-marc-andreessen-labor-politics/
https://twitter.com/AISafetyMemes/status/1687144215468285962
https://futureoflife.org/open-letter/pause-giant-ai-experiments/