Jiang Xiaojuan: AI's Logic Must Not Prevail
Former Deputy Secretary-General of the State Council lays out a framework for AI governance that puts social consent rather than technological momentum at its center
AI is accelerating its penetration into every sector of the economy, inevitably disrupting established industry structures. At the same time, a growing number of Chinese scholars and former officials have turned their attention to two pressing questions: how to cushion AI’s short-term displacement of workers, and how governments should balance governance with technological development. Jiang Xiaojuan, former Deputy Secretary-General of the State Council, is one such voice. Rather than embracing uncritical technological optimism, she has placed greater emphasis on the negative shocks AI may bring in the near term.
Jiang served as Deputy Secretary-General of the Chinese State Council between 2011 and 2018, one of the most senior posts in China's policy-drafting apparatus. In this role, she was directly involved in formulating and implementing major economic policies. Prior to her government service, she established herself as a leading academic voice on industrial economics and development policy at the Chinese Academy of Social Sciences. After leaving the State Council in 2018, she returned to academia as Dean of the School of Public Policy and Management at Tsinghua University, a position she held until 2022. She is now a Professor at the University of Chinese Academy of Social Sciences.
At this week’s Boao Forum, she publicly called for caution toward AI applications designed purely to replace human labor. As she pointed out: “In the past, technological progress typically created far more new jobs than it eliminated. But since the 1980s, that trend has been clearly slowing down.”
She also stressed that what is technically “rational” (合理) should not be equated with what is socially “desirable” (合意). Using human emotion as an example, she asked: “We experience a full range of emotions — joy, anger, sorrow, happiness. Technology could make you feel only happiness. But is that really what we want? Science used to be about discovering the laws of nature. Now we are creating technologies that do not exist in nature, technologies that can alter how we live, how we perceive and reproduce, and even the very structure of our society — do we truly consent to this?” She argued that genuine social consent can only be established through thorough public debate. When technology begins to threaten public safety and personal privacy, the government must intervene decisively rather than leaving the outcome to market forces alone.
She urged the government to pay greater attention to “the people being replaced.” She cited the example of a city that had attempted to develop a device to replace workers performing the most basic manual labor. The initiative not only required significant R&D investment but also subsidies for institutions to purchase the machines, making the overall cost of using the machines higher than employing human workers. Moreover, the quality of work performed by the machines was far inferior, yielding no real practical benefit. The government had supported the project simply because it fell under the banner of “new technology” and “new industry.” Yet those being displaced were precisely the lowest-income workers in the city — people whose basic livelihoods would be severely threatened if they lost their jobs. She therefore emphasized that AI deployment cannot be left entirely to the market. While promoting technological development, the government must give full consideration to its impact on vulnerable groups.
She recently delivered a speech at Southwest University of Political Science and Law, arguing that the answer cannot be left to tech companies alone and calling on the social sciences to define the benchmarks for AI's benefits and harms. Thanks to her kind authorization, I can present the English transcript below.
Jiang Xiaojuan on AI for Good: What Is “Good,” How to Achieve It, and Who Should Act
I am very glad to be here at Southwest University of Political Science and Law to exchange ideas with all of you. I believe everyone can appreciate that there is already a high level of attention and consensus on digital development issues, and that governance challenges have become increasingly prominent. Today, I would like to share some preliminary thoughts on the theme: AI for Good: What Is “Good,” How to Achieve It, and Who Should Act.
What Is “Good”: A Social Science Perspective
For a long time, there have been extensive discussions about AI for good, and there is a fairly high degree of consensus at the conceptual level. For instance, from UNESCO’s Preliminary Draft Report of COMEST on Robotics Ethics in 2016 to the Paris AI Action Summit in 2025, there has been strong consensus on AI governance principles. Concepts such as safety, transparency, non-discrimination, explainability, traceability, fairness and justice, inclusiveness and openness, respect for privacy, benefit-sharing, human-centeredness, and human control have been repeatedly discussed. However, discussions on how to realize these ideals and who should implement them to put “good” into practice have been relatively insufficient. These discussions have mainly been carried out by the companies involved and related technical communities within the framework of “alignment”—a perspective that is one-sided, frequently shifting, and lacking in generality and stability. I feel it is necessary to situate this issue within the knowledge system of the social sciences for discussion and analysis, as “good” in its broadest sense is precisely the purpose and theme of much social science research. Whether technology serves the good fundamentally depends on whether it promotes economic development, social progress, and people’s well-being—that is, whether it advances human welfare. The social sciences are capable not only of proposing conceptual pathways toward good, but also of establishing evaluation criteria, implementation pathways, and identifying responsible actors within a universal knowledge framework, drawing on deep academic foundations and theoretical capacity.
1. Rationality Is Good: Efficient Resource Allocation, Increased Social Welfare, and Fair Distribution
“Rationality” is a core concept in economics. Economics defines “rationality” as improved resource allocation efficiency, increased social welfare, and relatively fair distribution. Under this objective, economics provides clear evaluation criteria and indicators: improving total factor productivity (TFP), enhancing input-output ratios, income growth, and promoting innovation investment are all measures of resource allocation efficiency; improvements in education and healthcare, as well as the strengthening of social security systems, are measures of increased social welfare. Measured by these indicators, AI has made notable contributions to improving TFP and social welfare growth—the goodness of technology is indeed significant.
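(Editor's note: total factor productivity is typically backed out as a Solow residual. A minimal sketch, assuming a standard Cobb-Douglas production function with a hypothetical capital share; all figures are made up for illustration.)

```python
# With Y = A * K**alpha * L**(1 - alpha), TFP is the residual:
# A = Y / (K**alpha * L**(1 - alpha)).
def tfp(output: float, capital: float, labor: float, alpha: float = 0.3) -> float:
    """Back out total factor productivity from output, capital, and labor."""
    return output / (capital ** alpha * labor ** (1 - alpha))

# Hypothetical economy in two periods: output grows much faster than
# inputs, so the residual (TFP) rises.
a_before = tfp(output=100.0, capital=300.0, labor=50.0)
a_after = tfp(output=115.0, capital=306.0, labor=50.0)
tfp_growth = a_after / a_before - 1
print(f"TFP growth: {tfp_growth:.1%}")
```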
As for how to achieve “rational good,” economics offers both implementation pathways and identifies responsible actors. For example, allowing the market to play a decisive role in resource allocation in AI-related fields is an implementation pathway, which necessarily requires enterprises to be the primary actors. Of course, the market involves not only enterprises but also a sound “market environment,” including fair competition and equitable market access, which in turn requires well-developed market regulation. Measured by fair distribution, however, AI cannot yet be called “good.” The Gini coefficient, income gaps, and similar metrics are all indicators used in economics to assess whether the fruits of development are distributed relatively fairly. By these measures, AI’s impact is currently predominantly negative—that is, there exists an influence of “not-good.” On one hand, wealth is increasingly concentrated in the hands of the very few who succeed in innovation; on the other hand, AI’s displacement effects primarily affect middle- and low-income groups, and there is as yet no sign that continued AI development will improve or reverse this trend. Drawing on the experience of past technological progress, addressing this problem requires effort from AI companies themselves, as well as a greater role for government—maintaining a necessary balance between AI applications whose primary effect is labor substitution and new employment opportunities created by AI, and better fulfilling government responsibilities in improving long-term social security systems.
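(Editor's note: the Gini coefficient mentioned above can be computed directly from income data via the mean-absolute-difference formula. A minimal sketch with made-up incomes, showing how gains concentrated at the top raise the coefficient.)

```python
def gini(incomes: list[float]) -> float:
    """Gini coefficient: G = sum_ij |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return total_diff / (2 * n * n * mean)

# Hypothetical: innovation gains accrue only to the top earner,
# so measured inequality rises even though no one is worse off.
before = [30, 40, 50, 60, 70]
after = [30, 40, 50, 60, 120]
print(gini(before), gini(after))  # the second value is larger
```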
2. Consumer Benefit Is Good: Gains Beyond the GDP Boundary
Some gains from technological progress cannot be measured by standard GDP growth, yet they generate substantial consumer surplus—or what we might call “use-value benefit” (yongyi). In plain terms, this means bringing convenience, happiness, well-being, and a sense of fulfillment to the people. AI’s impact in this regard is particularly remarkable.
AI brings the good of convenience. The convenience brought by AI is extremely significant, yet a considerable portion of it is not reflected in GDP. For example, consumers’ extensive use of self-service tools based on networks, AI models, and intelligent agents brings great convenience, yet does not generate economic activity that counts toward GDP. On the contrary, it replaces services that were previously counted in GDP—such as self-service ticket booking replacing ticketing services, free online information replacing newspaper subscriptions, email replacing postal mail, and a host of other free services. The cultural industry is the most representative case: entertainment platforms and generative models allow everyone to enjoy more music, books, videos, and richer cultural products, vastly increasing cultural consumption. Yet at the same time, the market size of cultural products measured by GDP has not grown correspondingly. For example, data from the Recording Industry Association of America shows that U.S. music industry revenue fell from $14.6 billion in 1999 to $7.5 billion in 2016—the many benefits that digital music brought to consumers cannot be measured by GDP. While platforms offering various free services generate GDP through advertising pushed to consumers, many studies have found that this is far less than the GDP scale of the replaced services and newly created welfare. Clearly, AI has brought the good of consumer benefit.
AI brings the good of equal access. AI has brought massive numbers of ordinary consumers into domains of consumption and creativity that were previously accessible mainly to high-income and highly educated groups. For example, in the field of cultural consumption, consumers with poor reading ability can choose to have AI provide or generate culturally rich products in formats such as images and video; lower-income consumers can use free platform services to enjoy expensive cultural products and services that would be inaccessible to them offline (such as performances at high-end theaters). Furthermore, in the field of cultural creativity, ordinary people who lack “professional” creative skills can now transform highly creative ideas into cultural products of their own making and share them with others. Influencers on social networks not only sell their products and services but also share lifestyles, emotions, fashion, sentiments, and dreams with their followers, providing consumers with greater satisfaction of spiritual and psychological needs.
The good of consumer benefit manifests through free services, self-entertainment, mutual assistance, and similar means, and cannot be measured by GDP growth or income increases. However, it can be measured using the contingent valuation method (CVM) or willingness-to-pay assessment. Consumers can be asked how much they would be willing to pay if these benefits required purchase, or how much compensation they would need to give up certain benefits that are currently free. For example, how much compensation would make them willing to stop using “Xiaohongshu”-type apps or free large language models? From such data, the total social use-value benefit can be calculated. Research has shown that the ratio of use-value benefit to monetary income is significantly higher for low-income earners than for high-income earners, indicating that AI does indeed have the good of promoting equality and improving the welfare of the low-income population.
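(Editor's note: the survey logic described above can be sketched as follows. Each respondent states the monthly compensation they would require to give up a free service, and the per-capita figure is scaled to the user population. All numbers here are hypothetical.)

```python
# Hypothetical willingness-to-accept survey: compensation (yuan/month)
# each respondent would require to give up a free app or model.
responses = [0, 15, 30, 30, 50, 80, 120, 200]

mean_wta = sum(responses) / len(responses)   # per-user monthly valuation
user_base = 300_000_000                      # assumed user population
total_benefit = mean_wta * user_base * 12    # annual use-value benefit, yuan

print(f"mean WTA: {mean_wta:.2f} yuan/month")
print(f"annual use-value benefit: {total_benefit / 1e9:.1f} billion yuan")
```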
Use-value benefit also has its “not-good.” Some consumption that brings momentary psychological pleasure can cause deep, long-term harm to body and mind. For example, addiction to online games, or the narrowing of perception caused by thick information cocoons—the harmfulness of these problems enjoys high social consensus, and those affected suffer greatly yet cannot extricate themselves. Technology holders and users have a responsibility to exercise restraint and self-discipline. If there are no countermeasures, they should refrain from such harmful acts; if there are adverse consequences, they should use technological means to constrain and limit them—just as product manufacturers bear responsibility for product quality and must not sell products that endanger health or life. At the same time, government and society must collaborate in responding. For those “evils” on which there is society-wide consensus—such as challenging the baseline of human values, violating personal privacy, promoting terrorism, and similar speech and conduct—public authorities must take forceful action.
3. Consensus Is Good: Social Agreement on the Long-Term Consequences of Technology
Multiple social science disciplines study “consensus” (heyi). For example, social consensus as studied in sociology represents a relatively high degree of social agreement. This article defines “consensus” as the social agreement that commands the broadest common ground and the social solidarity it determines, and uses the concept of consensus to discuss the ethical issues of science and technology in the AI era.
Ethical issues in science and technology are nothing new, but they have become particularly prominent in the AI era, and their nature has fundamentally changed. In the past, we spoke of science as “discovering the laws of nature”—these were laws inherent in the natural order, formed through the interplay and evolution of various forces over billions of years. Now, AI strives to construct conditions that do not exist in the evolution of either nature or human society, creating new orders, with many explorations aimed at altering the human condition or the state of human society. For instance, in the life sciences—where “AI for Science” applications are most concentrated—much scientific research attempts to alter our physiology, reproduction, cognitive structures, and even intervene in the formation of consciousness, thereby changing humanity’s agency and control in consciousness formation and related behavior. Some efforts aim to construct new life forms whose long-term consequences are unknown. What consequences will arise from the creation of these new entities? Perhaps even the scientists who invent them cannot say for certain. On closer reflection, this is quite different from past scientific discoveries.
Under such circumstances, whether humanity agrees with a certain direction of scientific development becomes very important—this is what this article calls “consensus.” I once told a scientist I greatly admire that, regarding a certain research project of his, I—as an ignorant technology enthusiast—was very curious and eager; as an economist, I could not immediately judge; but returning to the natural identity of being “human,” I wanted to say that his research was entirely “lacking in consensus.” When scientists attempt to alter human characteristics and natural laws that have evolved over tens of millions of years, this has already become a matter of great significance to every person. The public must be informed, must participate, and must express whether they agree. This kind of heavily scientific discussion may be difficult to advance using methods like contingent valuation; instead, it requires public, transparent, and open “collective deliberation.” Scientists have a responsibility to explain to the public all possible consequences—not merely the benefits—while allowing society-wide, thorough discussion to form social consensus commanding the broadest common ground. Only through full expression and sustained negotiation among all parties can an approximation of “consensus” and a realistic position be found. The logic of technology must not be allowed to become the dominant factor; even more importantly, we must guard against irreversible and inappropriate “innovations” hastily carried out by a few technology experts who lack a strong sense of responsibility or sufficient foresight. In short, for these kinds of AI-for-good questions, the requirement of consensus must be present.
Exploring Mechanisms: Multi-Party Collaboration to Promote AI for Good
Let us now consider the mechanisms for achieving good. Apart from the “good of consumer benefit,” which is a natural result of technology itself, “rational good” and especially “consensus good” do not occur naturally. Where, then, do the incentives for good come from? How should corresponding mechanisms be designed? Practice has shown that incentive mechanisms compatible with “good” and factors leading to “not-good” coexist at multiple levels. In the AI era, the forces behind both “not-good” and “good” differ from before, and “good” requires both self-restraint and social restraint.
First, AI innovators and producers have significant and effective incentives toward “good.” An important reason is that AI requires very large-scale adoption; if its “good” does not gain social consensus, it cannot be well and sustainably applied. Society’s high level of attention to AI safety and ethical issues exerts pervasive, powerful, and sustained pressure and value orientation on enterprises and entrepreneurs. Maintaining reputation requires producers to “do good,” and when society perceives them as “not good,” they must respond and adjust quickly. In 2023, OpenAI faced widespread criticism for using sensitive user data in its training, and promptly pledged not to do so again. Several leading domestic AI companies have also had commendable responsive cases. From this perspective, the incentive mechanism for “good” is more pervasive and powerful in this era.
Second, distributed governance is a distinctive feature of AI-for-good governance. The most important difference between AI and data-driven industries and past industries is the scenario-based nature of their applications. In the past, market resource allocation was one-to-one, but in the AI era, resource allocation is cluster-based and scenario-specific. For digital government, smart cities, intelligent transportation, smart healthcare, and the low-altitude economy to be effective, groups upon groups of transacting parties must allocate resources—what we define as distributed resource allocation. In distributed resource allocation, stakeholders with related interests and values form communities of various sizes around specific scenarios, with market and social actors autonomously choosing specific transaction and cooperation partners. Each scenario has its own rules—for example, platforms have their own transaction rules, return policies, and penalties for violations—which define what is “good and not-good” in that scenario, that is, what participants may or may not do. Participants follow these rules, and thus these communities also take on governance functions, which can be called distributed governance.
Third, the governance role of public authority is indispensable. Some seriously consequential “not-good” cannot be left to market and social negotiation; rather, there must be a clear negative list of things that “must not be done”—that is, acts of “evil.” For example, invading users’ privacy without their consent, publishing false information, terrorism, hate speech, and so on. Furthermore, for market and social governance to be effective, the government’s most important function is to mandate openness and transparency. Enterprises must enable consumers to quickly and clearly see their user agreements; transparency in the details of these agreements is extremely important. And as discussed earlier, for innovations related to humanity itself and human society, providers must clearly explain to society and the public what they are doing and what the consequences may be.
Finally, government signaling is also particularly important. Laws need to be relatively stable and cannot easily keep pace with events, nor is it necessary to rush legislation before the situation has relatively stabilized. But there is much that government can do: issuing guidelines and best-practice cases, criticizing improper practices, summoning relevant enterprises for supervisory talks (约谈) — all of these have significant guiding effects on AI for good.
To return to the central thesis of this article: the social sciences must play an important role in promoting AI for good. The social sciences have deep disciplinary foundations that give us greater capacity to judge the good and evil of AI. In terms of resource allocation efficiency, social welfare gains and losses, fair distribution of wealth, assessment of public sentiment and willingness, and the maintenance of social harmony, the social sciences have made outstanding contributions. In the AI era, we must redouble our efforts, shoulder our responsibilities, and stand at the center and the frontier in the discussion, practice, and theoretical construction of AI for good.
More to read:
Jiang Xiaojuan on China's Economic Strategy: Next Stage of Reform, and US Relations