China looks to expand number and range of jobs for graduates
China is looking to create more job opportunities for the country’s graduates across a wide range of sectors, the government has said.
The move is expected to create more employment opportunities for the 11.79 million graduates preparing to embark on their careers this year, according to the Ministry of Human Resources and Social Security.
Nineteen new professions, including roles such as e-commerce live-streamers and intelligent connected vehicle testers, are on the horizon, according to a recent plan published by the Ministry. It said these new professions and types of work are an indication of the demand for talent and the shortage of labour.
Graduates, particularly those from the post-1995 and post-2000 generations, are drawn to these emerging sectors, which require tailored and supportive job-seeking services from Chinese universities.
However, despite the emergence of new professions and rapid development of new technology, challenges remain in matching talent with industry demands, the ministry has stated.
It estimates that the manufacturing sector faces a talent shortfall of 30 million, while the demand for elderly care nursing exceeds 10 million, with only around 300,000 positions currently filled.
In a pioneering move, two Chinese universities are introducing undergraduate programmes in elderly care service management, with 96 undergraduates set to graduate soon. The government said these graduates are highly sought-after by major elderly care institutions.
The Ministry said that efforts to create new job opportunities are vital for ensuring high-quality employment for graduates.
As of May 29, a total of 2,524 universities nationwide have launched initiatives to visit and connect with companies to expand job opportunities, resulting in nearly 3.76 million new jobs.
China publishes draft regulations on AI
China is looking to balance the economic benefits of generative AI with the need to ensure it is used legally and responsibly.
To this end, the National Information Security Standardization Technical Committee (NISSTC) has issued draft regulations outlining security measures for generative AI service providers.
The NISSTC’s new draft regulations, entitled ‘Cybersecurity Technology – Basic Security Requirements for Generative Artificial Intelligence (AI) Service’, outline critical security requirements for generative AI services, encompassing:
- Training data security: Ensuring the safety and integrity of data utilized to train AI models.
- Security measures: Specifying essential security measures to be implemented to effectively mitigate risks.
NISSTC said it is essential that collected data is analysed after collection, and that if more than 5% of it is illegal or harmful information, it must not be used for training purposes.
The draft categorises ‘harmful data’ into the following risk areas:
- Extremism: Any content promoting terrorism, extremism or ethnic hatred, in any form, is particularly dangerous and must be excluded from training data.
- Obscenity and violence: Any data advocating violence, obscenity or pornography is deemed harmful and should not be included.
- Illegal content: Any content that is prohibited by law or regulation is inherently harmful and should be carefully screened out of AI training datasets.
- Discriminatory content: This refers to content encompassing any form of discrimination, including ethnic discrimination against specific groups, religious discrimination based on beliefs, nationality discrimination, regional discrimination based on geographic origin, and gender, age, occupational and health discrimination.
- Commercial violations: This refers to content that encompasses infringements on intellectual property rights, breaches of business ethics, and the disclosure of commercial secrets.
- Infringement of legal rights: Harmful data also includes actions that infringe upon individuals’ rights and well-being. This encompasses harming the physical or mental health of others and violating portrait rights.
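The screening rule described above can be sketched in code. The following is a minimal, illustrative sketch of how a provider might apply the draft’s 5% threshold to a collected dataset; the `is_harmful` classifier here is a hypothetical placeholder, not part of the draft regulations, and a real provider would use a vetted content-moderation model covering the risk areas listed.

```python
def passes_screening(samples, is_harmful, threshold=0.05):
    """Return True if the dataset is usable for training under the
    draft's 5% rule: reject when more than 5% of samples are flagged
    as illegal or harmful."""
    if not samples:
        return False  # nothing to assess, nothing to train on
    flagged = sum(1 for s in samples if is_harmful(s))
    return flagged / len(samples) <= threshold

# Toy keyword-based classifier, purely for illustration.
BANNED = {"extremism", "violence"}

def toy_classifier(text):
    return any(word in text.lower() for word in BANNED)

data = ["cooking tips", "promotes extremism"] + ["daily news"] * 18
print(passes_screening(data, toy_classifier))  # 1/20 = 5% flagged -> True
```

Note that exactly 5% flagged content still passes under this reading, since the draft rejects only datasets containing *more than* 5% illegal or harmful information.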
NISSTC commented: “Overall, while the new security requirements may pose initial challenges for generative AI service providers, they present greater opportunities for differentiation, innovation, and enhanced user trust in the long run.
“By embracing these regulations as a framework for responsible AI development, providers can position themselves for long-term success in an increasingly regulated market.”