
Generative artificial intelligence: how do we mitigate the extreme risks?

©Josef Kubes/Shutterstock. Image of a vintage tin robot

Earlier this year, a group of artificial intelligence (AI) pioneers delivered a concise 22-word statement, cautioning that the threat to humanity posed by the rapid advancement of AI rivals that of pandemics and nuclear war. The statement, published by the Center for AI Safety, a non-profit organisation, was signed by more than 350 AI executives, researchers and engineers. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it read.

The statement came amid growing calls for regulation of the AI sector, amplified by the launch of ground-breaking AI products by Big Tech companies, including ChatGPT from Microsoft-backed OpenAI. Such developments have drawn increased attention to the potential flaws of AI, particularly in perpetuating societal biases and displacing workers.

They also serve as a wake-up call for businesses and policymakers to address the challenges and extreme risks posed by AI. To explore effective measures, we spoke to Howard Zhong, assistant professor of information and operations management at ESCP.

Many AI systems, such as deep learning algorithms, operate as black boxes, making it difficult to understand how they arrive at their decisions.

Prof. Howard Zhong

Upskilling and reskilling: a path to prevent job displacement

Among the myriad concerns surrounding AI, employment stands at the forefront. Recent AI advancements have sparked fears of job automation and workforce displacement, potentially leading to economic instability across industries. According to Goldman Sachs, generative AI could affect approximately 300 million jobs across major economies. Recently, the UK telecoms group BT announced plans to cut around 10,000 roles through increased digitisation and automation.

To tackle this issue, Zhong advocates investing in upskilling and reskilling initiatives to bridge the gap between existing skills and those required by emerging technologies. These programmes would offer a lifeline to workers, and recent examples stand out. Amazon has launched an upskilling programme called “Upskilling 2025” and committed $1.2 billion to give 300,000 employees access to education and skills training, including AI-related courses. AT&T’s “Future Ready” programme provides employees with online courses and certifications, helping them stay up to date with emerging trends.

Our expert further suggests encouraging collaboration between humans and AI to create new job opportunities or enhance existing work — something that BT highlighted. To achieve this, Zhong says fostering a culture of innovation — and providing platforms for workers to contribute ideas and solutions that integrate AI to improve productivity — would be instrumental.

Lastly, he says businesses and educational institutions should encourage the development of entrepreneurial skills and provide support for individuals to start their own businesses. “This allows workers to create their own job opportunities and adapt to the changing job market.”  

Promoting ethical data usage and transparency

As AI technologies become integral to industries such as telecoms, issues related to data collection, privacy and transparency demand attention. In particular, the use of large data sets to train AI systems can introduce biases that subsequently influence decision-making, Zhong warns:

“If these data sets contain inherent biases, the result can be the perpetuation of discrimination against specific demographic groups. Consequently, social inequalities can be exacerbated, and biases may persist in fields such as hiring, lending and criminal justice.”

Furthermore, as AI-powered technologies continue to expand, concerns about the gathering and potential misuse of personal data grow. This, Zhong says, “may pose potential threats to individuals’ privacy and enable surveillance at an unprecedented scale”.

Hence, it becomes crucial to promote responsible data collection and usage practices to protect individual privacy, prevent biases and avoid discriminatory outcomes. “Businesses should establish robust data governance practices and obtain informed consent when collecting personal data,” he says.

Furthermore, promoting transparency is vital for organisations in building trust with employees and the public, as he explains: “Many AI systems, such as deep learning algorithms, operate as black boxes, making it difficult to understand how they arrive at their decisions. This lack of transparency can impede the establishment of trust and hinder efforts to hold AI accountable for any unintended or detrimental consequences.”

The role of government and AI guardrails

While businesses play a pivotal role in navigating AI challenges, the responsibility also lies with governments to implement effective AI policies. Adapting regulations to keep pace with AI’s rapid development is a formidable task. 

To address this concern, Zhong proposes establishing regulatory sandboxes or pilot programmes to better understand emerging AI technologies. “These programmes allow limited testing of new AI systems in a controlled environment, enabling regulators to observe their functioning, assess risks and refine regulations accordingly,” he explains.

Providing grants and funding opportunities for AI research helps create a level playing field and encourages innovation from diverse sources.

Howard Zhong

Preventing monopolistic control

Another significant concern revolves around the concentration of AI power within a few dominant companies, such as OpenAI or Google, which has developed a conversational AI chatbot called Bard. To foster competition and prevent such control, our expert suggests several approaches.

First, promoting the use of open standards and ensuring compatibility between different AI systems can foster greater competition. “This approach prevents one company from having exclusive control over a particular technology or dataset, and enables users to switch between different AI services more easily,” says Zhong.  

Additionally, he recommends investing in research and development (R&D) efforts to support smaller companies. “Providing grants and funding opportunities for AI research helps create a level playing field and encourages innovation from diverse sources.” 

Furthermore, he says regulators should closely scrutinise mergers and acquisitions in the AI industry, especially those involving dominant players: “This helps prevent the consolidation of power and encourages healthy competition.”

The importance of international collaboration

Given the global nature of AI challenges, international collaboration and agreements are crucial to create a unified approach to managing AI risks. To achieve this, our professor suggests encouraging the United Nations to create a specialised agency or commission responsible for overseeing and regulating AI technologies worldwide. “This agency could facilitate discussions, establish ethical principles, and foster collaboration among nations to address AI risks.” 

He says lessons can be drawn from other technologies, such as nuclear power. International treaties, robust safety regulations and technical standards have helped mitigate the risks, resulting in safer use of nuclear power globally. Likewise, robust regulatory frameworks, biosafety protocols and guidelines for ethical conduct have played a crucial role in minimising the risks associated with biotechnology.

Moreover, promoting responsible data sharing for AI research through international agreements is essential, our expert points out: “This could include mechanisms to enable the sharing of anonymised datasets while safeguarding individual privacy and sensitive information.”

The increasing prominence of AI in our lives demands proactive measures to address its potential risks and ensure responsible development, as the statement by the Center for AI Safety reminds us. By focusing on upskilling, responsible data usage, transparency and international cooperation, it is possible to steer AI’s path towards a brighter and safer future for humanity.
