Earlier this year, a group of artificial intelligence (AI) pioneers delivered a concise 22-word statement cautioning that the threat to humanity posed by the rapid advancement of AI rivals that of pandemics and nuclear war. The statement, published by the Center for AI Safety, a non-profit organisation, was signed by more than 350 AI executives, researchers and engineers. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it read.
The statement was prompted by calls for regulation within the AI sector, amplified by the introduction of ground-breaking AI initiatives from Big Tech companies, including ChatGPT, launched by Microsoft-backed OpenAI. Such developments have drawn increased attention to the potential flaws of AI, particularly in perpetuating societal biases and displacing workers.
They also serve as a wake-up call for businesses and policymakers to take action to address the challenges and extreme risks posed by AI. To explore effective measures, we spoke to Howard Zhong, assistant professor of information and operations management at ESCP.
“Many AI systems, such as deep learning algorithms, operate as black boxes, making it difficult to understand how they arrive at their decisions.”
Prof. Howard Zhong
Upskilling and reskilling: a path to prevent job displacement
Among the myriad concerns surrounding AI, employment stands at the forefront. Recent AI advancements have sparked fears of job automation and workforce displacement, potentially leading to economic instability across various industries. According to Goldman Sachs, generative AI has the potential to impact approximately 300 million jobs across major economies. Recently, the UK telecoms group BT announced plans to reduce its workforce by 10,000 roles through the implementation of increased digitisation and automation.
To tackle this issue, Zhong advocates investing in upskilling and reskilling initiatives to bridge the gap between existing skills and those required by emerging technologies. Such programmes offer workers a lifeline, and recent examples stand out. Amazon has launched an upskilling programme, “Upskilling 2025”, committing $1.2 billion to give 300,000 employees access to education and skills training, including AI-related courses. AT&T’s “Future Ready” programme provides employees with online courses and certifications, helping them stay up to date with emerging trends.
Our expert further suggests encouraging collaboration between humans and AI to create new job opportunities or enhance existing work — something that BT highlighted. To achieve this, Zhong says fostering a culture of innovation — and providing platforms for workers to contribute ideas and solutions that integrate AI to improve productivity — would be instrumental.
Lastly, he says businesses and educational institutions should encourage the development of entrepreneurial skills and provide support for individuals to start their own businesses. “This allows workers to create their own job opportunities and adapt to the changing job market.”
Promoting ethical data usage and transparency
As AI technologies become integral to various industries such as telecoms, issues related to data collection, privacy and transparency demand attention. In particular, the use of large data sets to train AI systems can introduce biases that subsequently influence decision-making, Zhong warns:
“If these data sets contain inherent biases, the result can be the perpetuation of discrimination against specific demographic groups. Consequently, social inequalities can be exacerbated, and biases may persist in fields such as hiring, lending and criminal justice.”
Furthermore, as AI-powered technologies continue to expand, concerns about the gathering and potential misuse of personal data grow. This, Zhong says, “may pose potential threats to individuals’ privacy and enable surveillance at an unprecedented scale”.
Hence, it becomes crucial to promote responsible data collection and usage practices to protect individual privacy, prevent biases and avoid discriminatory outcomes. “Businesses should establish robust data governance practices and obtain informed consent when collecting personal data,” he says.
Furthermore, promoting transparency is vital for organisations in building trust with employees and the public, as he explains: “Many AI systems, such as deep learning algorithms, operate as black boxes, making it difficult to understand how they arrive at their decisions. This lack of transparency can impede the establishment of trust and hinder efforts to hold AI accountable for any unintended or detrimental consequences.”
The role of government and AI guardrails
While businesses play a pivotal role in navigating AI challenges, the responsibility also lies with governments to implement effective AI policies. Adapting regulations to keep pace with AI’s rapid development is a formidable task.
To address this concern, Zhong proposes establishing regulatory sandboxes or pilot programmes to better understand emerging AI technologies. “These programmes allow limited testing of new AI systems in a controlled environment, enabling regulators to observe their functioning, assess risks and refine regulations accordingly,” he explains.
“Providing grants and funding opportunities for AI research helps create a level playing field and encourages innovation from diverse sources.”
Howard Zhong
Preventing monopolistic control
Another significant concern is the concentration of AI power within a few dominant companies, such as OpenAI and Google, which has developed the conversational AI chatbot Bard. To foster competition and prevent such control, our expert suggests several approaches.
First, promoting the use of open standards and ensuring compatibility between different AI systems can foster greater competition. “This approach prevents one company from having exclusive control over a particular technology or dataset, and enables users to switch between different AI services more easily,” says Zhong.
Additionally, he recommends investing in research and development (R&D) efforts to support smaller companies. “Providing grants and funding opportunities for AI research helps create a level playing field and encourages innovation from diverse sources.”
Furthermore, he says regulators should carefully review and scrutinise mergers and acquisitions in the AI industry, especially when they involve dominant players: “This helps prevent the consolidation of power and encourages healthy competition.”
The importance of international collaboration
Given the global nature of AI challenges, international collaboration and agreements are crucial to create a unified approach to managing AI risks. To achieve this, our professor suggests encouraging the United Nations to create a specialised agency or commission responsible for overseeing and regulating AI technologies worldwide. “This agency could facilitate discussions, establish ethical principles, and foster collaboration among nations to address AI risks.”
He says lessons can be drawn from other technologies, such as nuclear power. Efforts such as the establishment of international treaties, robust safety regulations, and technical standards have helped mitigate the risks, resulting in safer nuclear power usage globally. Likewise, the establishment of robust regulatory frameworks, biosafety protocols, and guidelines for ethical conduct have played crucial roles in minimising the risks associated with the use of biotechnology.
Moreover, promoting responsible data sharing for AI research through international agreements is essential, our expert points out: “This could include mechanisms to enable the sharing of anonymized datasets while safeguarding individual privacy and sensitive information.”
The increasing prominence of AI in our lives demands proactive measures to address its potential risks and ensure responsible development, as the statement by the Center for AI Safety reminds us. By focusing on upskilling, responsible data usage, transparency and international cooperation, it is possible to steer AI’s path towards a brighter and safer future for humanity.
License and Republishing
The Choice - Republishing rules
We publish under a Creative Commons license with the following characteristics: Attribution/ShareAlike.
- You may not make any changes to the articles published on our site, except for dates, locations (according to the news, if necessary), and your editorial policy. The content must be reproduced and represented by the licensee as published by The Choice, without any cuts, additions, insertions, reductions, alterations or any other modifications. If changes to the text are planned, they must be made in agreement with the author before publication.
- Please make sure to cite the authors of the articles, ideally at the beginning of your republication.
- It is mandatory to cite The Choice and include a link to its homepage or the URL of the article. Insertion of The Choice’s logo is highly recommended.
- The sale of our articles, in their entirety or in extracts, is not allowed, but you can publish them on pages that include advertisements.
- Please request permission before republishing any of the images or pictures contained in our articles. Some of them are not available for republishing without authorization and payment. Please check the terms available in the image caption. However, it is possible to remove images or pictures used by The Choice or replace them with your own.
- Systematic and/or complete republication of the articles and content available on The Choice is prohibited.
- Republishing The Choice articles on a site whose access is entirely available by payment or by subscription is prohibited.
- For websites where access to digital content is restricted by a paywall, republication of The Choice articles, in their entirety, must be on the open access portion of those sites.
- The Choice reserves the right to enter into separate written agreements for the republication of its articles, under the non-exclusive Creative Commons licenses and with the permission of the authors. Please contact The Choice if you are interested at email@example.com.
Extracts: It is recommended that after republishing the first few lines or a paragraph of an article, you indicate "The entire article is available on ESCP’s media, The Choice" with a link to the article.
Citations: Citations of articles written by authors from The Choice should include a link to the URL of the authors’ article.
Translations: Translations may be considered modifications under The Choice's Creative Commons license, therefore these are not permitted without the approval of the article's author.
Modifications: Modifications are not permitted under the Creative Commons license of The Choice. However, authors may be contacted for authorization, prior to any publication, where a modification is planned. Without express consent, The Choice is not bound by any changes made to its content when republished.
Authorized connections / copyright assignment forms: Their use is not necessary as long as the republishing rules of this article are respected.
Print: The Choice articles can be republished according to the rules mentioned above, without the need to include the view counter and links in a printed version.
If you choose this option, please send an image of the republished article to The Choice team so that the author can review it.
Podcasts and videos: Videos and podcasts whose copyrights belong to The Choice are also under a Creative Commons license. Therefore, the same republishing rules apply to them.