On 14 February 2023, Microsoft’s Bing chatbot, built on the same OpenAI technology that powers ChatGPT, made a passionate declaration of love to New York Times journalist Kevin Roose, who had decided to spend Valentine’s Day exploring the capabilities of the latest version of this generative artificial intelligence.
Since that day, there have been many exchanges between Bing and humans, thousands of emails and essays written by high school students with its help, and, most recently, a psychodrama within OpenAI itself, the company behind the underlying technology. In November 2023, in the space of five days, OpenAI came close to losing 90% of its workforce, and with it its very viability, before its ousted CEO was reinstated, ending a standoff with a board deeply concerned about the future consequences of AI research and development.
Artificial intelligence research is not new. It began in the 1950s with the first work on simulating cognitive processes and on machine learning. However, its spectacular rise in 2023, its potential for self-acceleration, and the competition between the industry’s major players are causes for great concern, raising the question of the limits of unregulated artificial intelligence. GPT-4 cannot think; it relies on an algorithm that generates answers from available data, with an estimated error rate of around 36%. Yet we know that the acceleration of its development, and that of other forms of AI, will have tangible consequences for jobs and skills management. According to initial estimates, this development will lead to the automation of around 50% of existing jobs in the coming decades, while 85% of the jobs that will exist in 2030 have yet to be invented (IFTF, 2017).
Diverging opinions on AI regulation
In this context of uncertainty about our future, and in the wake of the open letter signed on 28 March 2023 by world-renowned figures and key players in the AI industry officially calling for a moratorium on AI research, numerous debates have emerged between advocates of the benefits of AI and experts warning of the risk that its uncontrolled development would pose to the survival of humanity.
The recent conflict at OpenAI is the latest arena for this debate and, because of the organisation in which it took place and the expertise of the parties involved, it only accentuates the sense of uncertainty, unpredictability and lack of control surrounding the consequences of accelerating, unregulated AI development.
How else are we to interpret the fact that three members of this company’s board of directors, all public advocates of the open, humanistic development of AI, allied to oust its CEO, who, having founded the company on that same humanistic vision, had decided on a commercial shift in strategy? And how are we to understand the opposite fact that 90% of the company’s employees threatened to resign in response? Were they thinking abstractly, about the limits this would place on future innovation, or concretely, about their own finances? Short-sightedly, or with the long term in mind?
If this is a difficult question to answer, it’s because the benefits of AI are tangible, promising significant efficiency gains in key areas to improve our lives and the future of our planet. However, it is also undeniable that many experts share the same uncertainty about the future of this development. If we can’t stop this acceleration, we need to regulate it, which means opening a series of negotiations at the highest level.
The complexities behind AI regulation
What Muzafer Sherif achieved in his social psychology laboratory in 1935 is not directly transferable to our problem. However, this experimental social psychologist discovered how a group of individuals creates a norm when faced with an unprecedented situation, i.e. one without reference to anything known or experienced before. To do so, he used the autokinetic effect, an optical illusion in which a stationary point of light in a dark room appears to move, and asked his subjects to estimate the distance travelled by a light that was in fact completely motionless. After numerous repetitions in which they were required to state their estimates aloud, the subjects ended up seeing the same thing in terms of the direction and distance travelled by the motionless light. Sherif concluded that the group, through a process of mutual influence, had tacitly established a common norm that enabled it to make sense of this nonsensical situation: the light wasn’t moving, but from then on they all saw it moving in the same way, without any prior consultation, discussion or debate, simply by saying their personal estimates aloud and listening to one another.
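To make this convergence mechanism concrete, here is a minimal simulation sketch, with the caveat that the averaging rule, the number of subjects and rounds, and the influence parameter are illustrative assumptions of mine, not Sherif’s actual protocol. Each simulated subject announces a private estimate, then shifts it partway toward the group’s average; within a few rounds, all estimates collapse onto a shared norm.

```python
import random

def simulate_norm_formation(n_subjects=5, n_rounds=30, influence=0.3):
    """Toy model of Sherif-style norm formation (illustrative, not his protocol)."""
    # Each subject starts with a private estimate (in cm) of how far
    # the motionless light appeared to move.
    estimates = [random.uniform(1.0, 20.0) for _ in range(n_subjects)]
    history = [estimates[:]]
    for _ in range(n_rounds):
        group_mean = sum(estimates) / n_subjects
        # After hearing the others' estimates, each subject shifts
        # partway toward the group's average judgement.
        estimates = [e + influence * (group_mean - e) for e in estimates]
        history.append(estimates[:])
    return history

history = simulate_norm_formation()
print("initial estimates:", [round(e, 1) for e in history[0]])
print("final estimates:  ", [round(e, 1) for e in history[-1]])
```

Note that no subject is ever instructed to agree; the common norm emerges solely from each individual weighting what the others say aloud, which is precisely what Sherif observed.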
While the two situations are similar in that both call for regulation to make sense of an uncertain situation, the development of AI is far more complex: it depends on numerous factors that lie beyond the will of technology leaders, politicians, intellectuals and scientists to regulate it for the good of humanity.
The inherent complexities of AI can be grouped into six factors:
- The first factor, at a general level, is the differing perception of AI-related issues among the parties who will be involved in future negotiations on its regulation: job losses, manipulation of public opinion, invasion of privacy and financial gain will not be given the same priority by every regulator.
- The second factor is the technical complexity of the many different types of AI, each of which requires advanced expertise that regulators must be able to draw on.
- The third factor is the different applications of AI, such as health, the environment, communications, education, transport, or finance, each of which poses specific regulatory challenges.
- Fourth, the acceleration of innovation, fuelled by funding from increasingly diverse sources, is rendering existing regulations obsolete and imposing a pace that future regulatory consultations cannot keep up with.
- The fifth factor is the ethical issue inherent in the development of AI. Transparency, respect for privacy, responsibility, inclusion, and accessibility for all are values with complex ethical implications that are not shared or prioritized by all parties involved in regulatory negotiations.
- The sixth factor is the geopolitical dimension: AI is a global field, and international competition to master it makes it difficult to achieve the cooperation needed to regulate it.
Despite these complexities, there is an urgent need for regulation to ensure that AI is used ethically and for the benefit of humanity. This requires a shared diagnosis of the urgency of the situation and the emergence of cooperation between governments, researchers, intellectuals, civil society and companies to jointly establish an ethical framework for AI’s development.
A message of hope from Europe
On 8 December 2023, the European Union sent a message of hope to the world by reaching a provisional agreement on the AI Act, a regulation governing the development of artificial intelligence. Thierry Breton, the European Commissioner for the Internal Market, who initiated these negotiations, declared:
This is historic, … the European Union is the first continent to establish clear rules for the use of AI. The AI Act is much more than a set of rules; it’s a launching pad for European start-ups and researchers in the global race for artificial intelligence.
However, on 29 November, the negotiations seemed deadlocked on key points such as the regulation of facial recognition and of generative artificial intelligence (e.g. ChatGPT), whose regulation European companies considered premature and harmful to innovation. These disagreements were mainly linked to the reluctance of certain countries, including France, to adopt strong regulations for fear of losing ground to China and the United States.
If, nine days later, the AI Act could nonetheless boast the endorsement of all the parties involved in the negotiations, it is because the existence of such an agreement has strong symbolic value on two levels: that of international competition, since Europe, though not a forerunner in the field of AI, can now position itself as its regulator; and that of European identity, since the AI Act is tangible proof of the EU’s motto of unity in diversity.
Even if it is only a compromise for the time being, a first step in the search for meaning and predictability in our future, the existence of such a regulation shows that a shared identity can become a source of motivation to overcome divergent perceptions of, and positions on, the stakes and urgency of the situation.
What remains to be done is to build such a common perception and identification gradually and decisively on a planetary scale.