
Will the latest staggering AI advancement turn the world upside down?

Fantastic China | 2023-05-09


"As an artificial intelligence (AI) language model, my purpose is to assist and augment human capabilities rather than replace them. While I am capable of processing and generating human-like language, my abilities are still limited and guided by the data that I was trained on," ChatGPT answered when asked whether it will surpass or replace humans. "It is unlikely that ChatGPT or any other AI language model will surpass or replace humans."

However, concerns, or even fears, from many have been hard to quell. In March, nearly four months after ChatGPT, the AI-based chatbot developed by San Francisco-based startup OpenAI, stepped into the spotlight, the nonprofit Future of Life Institute released an open letter, calling for a six-month pause in the training of AI models more powerful than GPT-4, the latest language model released by OpenAI, in order to develop new safety protocols for AI design.

The mission of the U.S.-headquartered institute, as stated on its website, is to "steer transformative technologies away from extreme, large-scale risks and towards benefiting life." The letter has so far collected more than 30,000 signatures, including those of business magnates Elon Musk and Steve Wozniak, researchers at Meta, Google and various universities, and executives of companies developing their own AI systems.

"Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control," the letter reads. "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal." 

Undoubtedly, the latest AI developments have aroused wide discussion and reflection, not just on technology, but also on human civilization and development. Humanity seems to be experiencing a mix of excitement, bewilderment, hope, uncertainty, panic and much more.

An 'iPhone moment'? 

Since GPT-3.5 first went live in November 2022, many companies have been racing to introduce their own AI tools, contradicting the view of pessimists that "the AI field has become a bubble." Tech giants like Google, Microsoft and Amazon have launched their own generative artificial intelligence (GAI) tools. In March, China's largest online search engine Baidu launched ERNIE Bot, referred to by some as ChatGPT's counterpart in China. In April, Wang Xiaochuan, founder and former CEO of Sogou Inc., one of China's leading Internet product and service providers, announced his newly established company Baichuan Intelligence, which aspires to become China's OpenAI.

GAI is a broad label that's used to describe any type of AI that can be used to create new text, images, video, audio, computer code or synthetic data on its own. ChatGPT falls into this category.

"We are at the iPhone moment of AI," Jensen Huang, founder and CEO of Nvidia, a leading AI computing and semiconductor company based in California, the U.S., declared as he introduced two new chips to power large language models like ChatGPT at the Nvidia's AI developer conference in March.

However, Li Di, CEO of Xiaobing, also known as Xiaoice, which spun off from Microsoft in July 2020 to become a separate company headquartered in Beijing, takes a cooler view of large AI models. He believes the new technological progress has eliminated several bottlenecks in the development of natural language processing, such as machines' inability to understand people's intentions. But those who overstate its significance, he told China Newsweek magazine, do so "more or less out of consideration for the development of their respective businesses or careers."

Liu Wei, head of the Human Machine Interaction and Cognitive Engineering Laboratory at Beijing University of Posts and Telecommunications, believes that despite AI's advantages, such as huge information storage capacity and high-speed processing, its flaws are obvious, such as equating mathematics with logic and failing to understand or convey people's emotions.

"AI is only a programmable part of human intelligence and human intelligence is the product of the interaction between humans, machines and environmental systems," Liu told Beijing Review.


Will it take away jobs? 

A report released by Goldman Sachs in March predicted that 300 million jobs could be affected by GAI, stating, "If GAI delivers on its promised capabilities, the labor market could face significant disruption."

The report said one quarter of all tasks performed in the U.S. and Europe could be automated by AI. In the U.S., office and administrative support positions are at the greatest risk of task replacement (46 percent), followed by legal positions (44 percent) and architecture and engineering jobs (37 percent).

ResumeBuilder.com, a resource for resume templates and career advice, surveyed 1,000 U.S. business leaders to see how many companies currently use or plan to use ChatGPT. The resulting report, released in February, found that nearly half of the surveyed companies were already using ChatGPT and that 93 percent of those planned to expand their use of the chatbot. These companies use it to facilitate hiring, write code, handle copywriting and content creation, provide customer support and summarize meetings or documents. According to the report, 48 percent of companies have replaced workers with ChatGPT since it became available last November.

The development of AI will inevitably lead to the disappearance of some positions and the emergence of others. Mo Yu, who is in charge of the AI department at Liepin.com, a Chinese Internet recruitment company, said ChatGPT and related natural language processing technologies will further improve the ability of computers to listen, speak, read and write. As a result, professions such as customer service, telesales, basic translation and editing, courier services and security guarding will be greatly impacted.

Mo added that AI is not suited to complex environments; major decision-making work (such as that of judges), work that involves communicating with people and attending to their emotional needs (such as the care of children and seniors), and innovative work (such as that undertaken by scientists) cannot be easily replaced.

Deng Yaxi, a technical writer based in Shanghai, has been working with ChatGPT for a few months. She told Beijing Review that ChatGPT can be a very good teacher and consultant. "Questions that I previously needed to ask my programmer coworkers can now be solved by ChatGPT. It's timesaving," she said. "But I highly doubt that it will replace my position for now. Its answers are not ideal; I need to further process them to make them useful. But it's a good writing copilot."

Ge Jianqiao, a lecturer in neuroscience at the Academy for Advanced Interdisciplinary Studies at Peking University, told Beijing Review that the fear of AI replacing human beings is "very much like people's concerns over machines replacing labor during the Industrial Revolution. It's unnecessary."

Ge has been researching the brain and intelligence in its different forms. She said AI can free people from repetitive labor, giving them more time to create, experience and perceive, and to devote themselves to other things that are more likely to be meaningful to humanity. "It provides us with more of an opportunity than a challenge," she added.

In the computer and video gaming industry, for example, GAI has generated new opportunities for many startups in China by helping them save time and improve the quality of artwork such as the characters and backgrounds featured in their games. Liu Jun, an executive director and screenwriter of animated productions, said he regards AI's assistance as "a shortcut for content creators to realize what they have in their mind," which he believes will bring about an explosion in content production.

He Ting, an animator who has been in the industry for a decade, said the advancement of AI is a motivator for practitioners in the industry to hone their skills. "If we don't do it, and do it quickly, we will be replaced," she told Beijing Review. "But it's a good thing for the industry to upgrade."


Preventing risks 

In addition to concerns about job replacement, ChatGPT has evoked deeper worries. The open letter released by the Future of Life Institute represents the views of those worrying about the possible out-of-control negative impacts of new AI breakthroughs. The letter raises questions such as: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the ones employees find fulfilling? Should we develop nonhuman minds that might eventually outnumber, outsmart and replace us? Should we risk losing control of our civilization?

Countries and companies are proceeding cautiously and preparing. The Italian Government, as well as organizations and businesses in the United States, Germany and Japan, have already imposed bans or limitations on the use of ChatGPT.

In early May, technology giant Samsung banned its employees from using ChatGPT and other GAI tools in the workplace, following a serious leak of confidential data through use of the technology.

On April 11, the Cyberspace Administration of China, the country's top cyberspace regulator, issued a draft policy on the management of GAI services, soliciting feedback from the public. The draft proposes measures for supervising the technology to prevent its abuse, with the ultimate goal of ensuring it develops in a healthy way and benefits human society. A final version of the document is expected to be formally released as early as the end of this year.

OpenAI states in its charter that it will actively cooperate with other research and policy institutions, and "seek to create a global community working together to address artificial general intelligence (AGI)'s global challenges." The company also says that while it currently publishes most of its AI research, it expects safety and security concerns to reduce such publishing in the future.

Zeng Yi, a researcher with the Institute of Automation under the Chinese Academy of Sciences, signed the open letter. He shared with China Newsweek the two issues that concern him most: first, human society might not be ready for the potential impact the technology brings; second, some of the content generated by large AI models still contains biases and other harms, and many large AI models are developed without ethical and security frameworks.

Nevertheless, Zeng stressed that "AI eschatology," or the expectation of the end of human history, is not the focus of concern at this moment. "When AGI comes, humanity will lose control over civilization," he said.

According to U.S. tech giant IBM, AGI, or strong AI, describes a type of machine intelligence that could rival human intelligence, with a self-aware consciousness able to solve problems, learn and plan for the future. The term is believed to have been first coined by U.S. physicist Mark Gubrud in 1997. AGI could raise more serious ethical problems and other risks.

Ge said some people in the AI industry regard AI as humanity's child. "Humans brought AI into the world. But these intelligences have their own growth routes, which we need to respect," she said. "ChatGPT may not yet have reached this degree of development, but we need to dig deeper into what really constitutes AGI, AI and intelligence."

In explaining the path he believes AI developers should take going forward, Liu Wei said the focus should not be placed on developing AI's overall strength but rather on developing its ability to interact with humans and the systems that facilitate that interaction. "We should focus on how machines and humans can perform their own duties and promote each other. Which machine can be considered 'the best' depends on who is using it," Liu concluded.

