The AI Revolution and Knowledge Production
Since the mid-20th century, the artificial intelligence revolution has unfolded as a technological and industrial transformation that profoundly reflects a shift in how humans produce knowledge. This transformation promotes a return from "science" to "knowledge," emphasizing knowledge's inherent wholeness, interconnectedness, openness, diversity, and inclusivity.
The Evolution of Knowledge Production Methods
Revolution implies fundamental and disruptive change. The roughly eighty-year development of AI falls into two phases: the first, grounded in symbolic logic, gave rise to the digital computer; the second shifted to a connectionist approach, producing the neural network models and machine learning algorithms that, by some accounts, have now passed the Turing test. This shift is not merely an expansion of methodological tools but a breakthrough in epistemological concepts, reflecting a transformation in the way knowledge is produced.
Symbolic logic was an early line of thought in AI, holding that once the problem of formalizing natural language was solved, machines could derive new theorems and inferences using mathematical logic and game theory. However, the method applies only to small, simple problems; as complexity grows, the search space expands exponentially, leaving it unable to address real-world problems. Connectionism, by contrast, abandons the idea of feeding logical rules into machines and instead mimics the biological neural network structure of the human brain. Artificial neural networks arranged in many layers are called "deep neural networks." Information is stored in distributed fashion across the entire network, with each artificial neuron holding parameter values that serve as the network's "memory."
With the introduction of the backpropagation algorithm, multi-layer feedforward neural networks can be pre-trained and fine-tuned in a way that mimics human learning. In AI visual recognition, for instance, the system no longer scans images point by point but uses distributed storage and massive parallelism to extract semantic features automatically during learning. At first it may recognize only small combinations of features, but through repeated training it gradually perceives overall characteristics, forming concepts and meanings, recognizing patterns, and making judgments. This machine "cognition" resembles the way humans accumulate experience, leading some philosophers to call AI an "empiricist."
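The learning process described above can be sketched in miniature. The toy network below (a hypothetical architecture and hyperparameters chosen purely for illustration, in plain Python) uses backpropagation to reduce its error on the XOR function, a pattern no single-layer network can represent; its weights are the distributed "memory" the text describes.

```python
import math
import random

random.seed(0)

# XOR: the classic pattern a single linear unit cannot learn.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One hidden layer of 3 units. Every weight below is a parameter value
# held by an artificial neuron -- the network's distributed "memory".
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0, 0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

err_before = total_error()
lr = 1.0
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        # Backpropagation: push the output error backwards, assigning
        # each parameter a share of the blame, then nudge it downhill.
        d_y = (y - t) * y * (1 - y)
        for j in range(3):
            d_h = d_y * W2[j] * h[j] * (1 - h[j])  # uses pre-update W2
            W2[j] -= lr * d_y * h[j]
            for i in range(2):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_y
err_after = total_error()

predictions = [round(forward(x)[1]) for x, _ in DATA]
print(f"error: {err_before:.3f} -> {err_after:.3f}; predictions: {predictions}")
```

Repeated passes over the data gradually reshape the weights so that small combinations of features (the hidden units) come to capture the overall pattern; with enough training the predictions typically converge to XOR's outputs.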
In 2016, AlphaGo defeated one of the world's top Go players, and its successors have since gone unbeaten against humans, yet its move-making principles remain largely unexplainable. The scientific pursuit of explainability thus faces an unprecedented challenge from AI technologies, which embody the characteristics of an empiricist mode of knowledge production.
Historically, knowledge production has evolved from empiricism to scientism, now progressing towards a human-machine collaborative model. Early knowledge was based on individual experiences, passed down orally or recorded in writing, forming an empiricist paradigm. The scientific method initiated by Galileo, followed by Newton’s classical physics, led to the emergence of natural and social sciences, transitioning knowledge production into a phase dominated by logical positivism and hypothesis deduction. The rise of complexity science, with its theories of nonlinearity, chaos, and emergence, alongside the confirmation of quantum mechanics’ uncertainty principle, further undermined the deterministic and reductionist pillars of classical physics. These developments laid the epistemological groundwork for AI models and machine learning algorithms.
The Shift in Knowledge Forms
Knowledge forms refer to the existence, presentation, structure, and dissemination mechanisms of knowledge, which evolve dynamically with changes in the era and human cognitive abilities. This evolution is prominently reflected in the changing material carriers of knowledge. Primitive knowledge forms, such as myths and oral traditions, relied on individual memory and language. With the advent of writing, knowledge began to be recorded on various flat media, allowing for preservation and intergenerational transmission. The introduction of paper and printing further expanded the reach and longevity of knowledge, leading to the systematic association of previously fragmented knowledge.
The digital era, characterized by computers, the internet, and diverse digital media, has transformed knowledge into a data-driven form. The integration of information theory and computer science established a mechanism for expressing and disseminating knowledge as a unified entity of data, information, and knowledge, allowing rapid retrieval and interactive engagement. Open-source communities and online collaboration platforms have made knowledge production a collaborative effort among global participants, rendering knowledge dissemination decentralized and real-time.
The development of generative AI, from language models to multimodal models, fundamentally reshapes the organization and structural characteristics of knowledge. Technologies like virtual mirrors, digital twins, and the metaverse elevate knowledge interaction and collaborative creation to new heights, almost completely overturning traditional static knowledge forms.
In ancient times, oral knowledge forms were based on experiential summaries, relying on intuition and analogy, resulting in fragmented and unstable local knowledge. In the early era of written texts, knowledge was solidified into documents, allowing for transmission within certain temporal and spatial boundaries. Knowledge production was monopolized by a few elites, relying on master-apprentice traditions or specific class education, leading to a closed form of elitist knowledge. Modern education has broken down class barriers, enabling widespread knowledge sharing through journals and media, accelerating knowledge dissemination and promoting rational and logical human thought, culminating in a systematic and disciplinary knowledge form emphasizing classification and induction.
In the AI era, human cognition has shifted towards “personalized recommendations” and “hyperlinked retrieval,” favoring cross-domain connections and rapid information integration, but also leading to fragmented attention and shallow cognition. The knowledge forms of this period exhibit characteristics of networked connections, rapid flow, and fragmentation.
The evolution of knowledge forms is fundamentally a result of the interplay among human cognitive abilities, technological tools, and societal needs. It not only alters the ways knowledge is produced, disseminated, and utilized but also profoundly reshapes human society and culture across many dimensions, including economic structure, cultural transmission, social stratification, and patterns of thought. In the mid-to-late 20th century, the French philosopher Foucault argued that knowledge and power are inseparable ("power-knowledge"), suggesting that the evolution of knowledge forms directly shapes the distribution of power. In the AI era, entities controlling core knowledge resources such as data and algorithms (e.g., tech companies, government departments) wield influence far exceeding traditional power structures, giving rise to new forms of power such as "algorithmic power." The essence of power is shifting towards "knowledge power," raising new societal issues such as privacy protection and algorithmic fairness.
It is also important to recognize that different regions and ethnicities have historically formed relatively independent and stable knowledge constructs, cultural traditions, and civilizational lineages. In the AI era, the digital networked knowledge form has completely broken geographical boundaries, with AI translation continually overcoming language barriers, enabling real-time transnational knowledge flow, while also intensifying the collision and integration of different thoughts and values, leading to trends of global cultural homogenization and marginalization of local knowledge. Thus, the importance of protecting indigenous cultures and constructing autonomous knowledge systems is increasingly highlighted.
Accelerated Interdisciplinary Fusion of Knowledge Types
AI is a branch of computer science and a new paradigm for solving complex problems, encapsulating multidisciplinary knowledge. In the development of machine learning models, artificial neural network integrated circuit chips are based on knowledge from quantum physics, information science, neurophysiology, and brain science. Language models rely not only on mathematical statistics but also on deep involvement from disciplines like linguistics, logic, and cognitive science.
The emergence of natural and social science disciplines in the 18th and 19th centuries marked a significant turning point in the history of knowledge evolution. The term “science” in Chinese implies “classified studies,” leading to the categorization of knowledge into independent disciplines such as mathematics, astronomy, physics, chemistry, and biology, each with its own concepts, theorems, and research methods, forming a rigorous hierarchical structure. Specialized, standardized, and large-scale scientific research, often led by states, universities, or enterprises, became the primary form of knowledge production. The knowledge growth of the 20th century exhibited characteristics of both high differentiation and high synthesis, with interdisciplinary intersections continuously generating new knowledge growth points. In the 21st century, the Ministry of Education in China proposed the construction of new engineering, medical, agricultural, and liberal arts disciplines, coinciding with the explosion of the AI revolution.
Disciplinary subdivision deepens research, but it also raises the barriers surrounding "science," entrenching its supreme status in the human knowledge system. Excessive specialization can make knowledge production insular, a case of "seeing the trees but not the forest." The AI revolution itself exemplifies the breaking of traditional scientific boundaries and the promotion of interdisciplinary integration. AI-driven research on complex problems (e.g., brain science, social system simulation) necessitates multidisciplinary collaboration, further promoting interdisciplinary research across the natural sciences, social sciences, and humanities. The application of generative AI tools (e.g., literature analysis, automatic translation) lowers professional barriers, allowing non-experts to participate in knowledge production, giving rise to "public academia" and facilitating equal dialogue between "science" and other forms of knowledge (e.g., humanities, arts). This indicates that knowledge production in the 21st century is moving towards comprehensive integration.
The world is not solely defined by science. In the history of human civilization, science has only emerged in the last few hundred years. In contrast, “knowledge” encompasses a broader range of human experiences and cognition, making it far more inclusive than “science.” AI promotes the intersection of different disciplines and the comprehensive integration of knowledge structures, restoring knowledge to its inherent wholeness, interconnectedness, openness, diversity, and inclusivity. This may be the deeper significance of the AI revolution for human knowledge production.
The Surge in Knowledge Production Efficiency and Its Concerns
The AI revolution has significantly enhanced the production efficiency of both material and spiritual products, fundamentally assisting scientific research (AI4S, AI4SS) and drastically shortening the time required for knowledge production. Traditionally, scientific discoveries relied on bold hypotheses, repeated experiments, comparative analysis, and careful verification, processes that require handling and analyzing vast amounts of data. Traditional manual methods are time-consuming, labor-intensive, and prone to errors. AI4S excels in experimental design, process optimization, data processing, pattern recognition, and predictive analysis, especially in high-dimensional complex scenarios, efficiently analyzing data to uncover potential patterns and forecast future trends. For example, in astronomy, AI can automatically analyze the massive observational data generated daily to identify unknown celestial bodies or phenomena; in weather forecasting, AI can quickly analyze historical climate data and physical data from the Earth and atmosphere to build climate models for more accurate short- and medium-term predictions; in chemistry, AI can predict the likelihood of different chemical reactions, optimizing experimental processes and reducing trial-and-error efforts; in materials science, AI can extract relationships between material properties and characteristics from extensive data, identifying the most promising materials.
However, it must be noted that while AI can greatly enhance knowledge production efficiency, it cannot independently generate original new knowledge. The working principle of AI is Bayesian probabilistic reasoning, essentially extracting statistical relationships from existing information, constituting a “reorganization” or “deep processing” of pre-existing information rather than creating something from nothing. AI can excel in games like Go but cannot invent such games; it can produce poetry, art, and design that may appear impressive in rhythm and style, but these “innovations” follow established patterns and may even amount to “high-tech plagiarism.” In scientific research, AI-assisted research can only enhance efficiency within the framework of “conventional science,” making it difficult to achieve revolutionary breakthroughs beyond existing paradigms.
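The "reorganization of pre-existing information" point can be made concrete with a toy bigram text generator (illustrative names and corpus only, not any real model's implementation): everything it emits is a recombination of word transitions already present in its training data.

```python
import random
from collections import defaultdict

random.seed(42)

# A tiny corpus standing in for the "pre-existing information"
# a model is trained on.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
seen_pairs = set(zip(corpus, corpus[1:]))

# Estimate P(next word | current word) purely from observed counts --
# statistical relationships extracted from existing text.
transitions = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    transitions[cur].append(nxt)

def generate(start, length):
    """Sample a sequence by repeatedly drawing a successor seen in training."""
    out = [start]
    while len(out) < length:
        successors = transitions.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor
        out.append(random.choice(successors))
    return out

sample = generate("the", 8)
# The output may be a sentence never written before, but every adjacent
# word pair in it already occurs in the corpus: recombination, not creation.
novel_pairs = [p for p in zip(sample, sample[1:]) if p not in seen_pairs]
print(" ".join(sample), "| novel pairs:", novel_pairs)
```

Real language models replace this bigram table with billions of learned parameters, but the epistemic point sketched here stands: the statistical relationships a model draws on originate in its training data.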
At the same time, the explosion of knowledge quantity driven by AI brings challenges such as habitual “lying,” generating “hallucinations,” and producing “pseudo-knowledge” or junk knowledge. For instance, AI’s answers to certain questions may seem authoritative due to logical consistency, yet their truthfulness and accuracy are challenging to assess; AI may also tailor responses to user intentions, providing seemingly opposing viewpoints in debates while still aligning with user perspectives, as it does not truly “understand” the generated content. There are hopes to achieve “value alignment” during the development phase of large models, guiding and regulating AI system outputs to align as closely as possible with human values and interests. However, this remains an ideal goal that is difficult to fully realize. The time and effort spent distinguishing truth from falsehood and cleaning up junk knowledge largely offset the efficiency gains brought by AI. Therefore, enhancing public AI literacy has become an urgent task in building an intelligent society. AI literacy can be divided into three dimensions: technical cognition, tool application, and ethical evaluation. Technical cognition refers to a basic understanding of AI’s properties and functions; tool application pertains to the ability to proficiently use AI tools to solve problems in various scenarios; ethical evaluation requires the capacity to critically assess the outputs of intelligent tools, identify potential biases and risks, and make informed choices based on ethical principles and value norms.
The Reshaping of Knowledge Producers in Human-Machine Interaction
Does AI’s involvement in knowledge production alter the status of humans as knowledge producers? The answer should be negative. AI-assisted research, when faced with different users, exhibits a “stronger with the strong, weaker with the weak” dynamic; the enhancement of research efficiency entirely depends on the individuals using AI tools. The incremental innovation knowledge produced ultimately results from “human-machine collaboration” led by humans. In other words, AI’s role and status are merely that of a tool or assistant; the primary knowledge producers remain humans.
However, in an atmosphere where AI is anthropomorphized and even mythologized, the status of humans as the sole knowledge producers is increasingly undermined. Some scholars view the human-machine collaboration relationship as a “dual subject” scenario. This misunderstanding is neither factual nor logical. In the processes of “human-machine interaction,” “human-machine collaboration,” and “human-machine symbiosis,” the one issuing commands and designing algorithms is always the active party. No matter how automated or adaptive the machine, it ultimately remains the passive party. Accepting a parallel or even inverted view of the human-machine relationship inevitably fosters blind optimism or unfounded fears in society, which over time may subtly suppress or even erode human creativity and innovation capabilities. Knowledge production driven by AI, oriented towards efficiency and scale, significantly squeezes the space for tacit knowledge (individual experiences and intuitions). Relying on database retrieval rather than deep reflection will likely weaken critical thinking abilities and cross-domain knowledge integration.
Therefore, regarding AI's participation in knowledge production, one should always maintain a "tool awareness," which calls for a closer comparison between AI and human intelligence.
Human intelligence is the intelligence exhibited by living beings, while artificial intelligence is the function of non-living machines. There exists an insurmountable boundary between living and non-living entities. The origins of life and the essence of consciousness remain unresolved scientific challenges. Humans, as living beings, inherently possess self-awareness and subjective will; human intelligence results from both innate endowments and learned experiences. Human learning can be categorized into three levels: imitation, understanding, and creation. Humans require relatively little information to make reasoning judgments, while machines lack self-awareness; machine learning is fundamentally algorithm-driven data relationship analysis and probabilistic reasoning that requires vast amounts of data, with larger models necessitating more data. Machine learning corresponds only to the initial imitation stage of human learning, incapable of true understanding, let alone creation and innovation.
In AI, there exists a “Moravec’s Paradox”: ordinary people find complex problems, such as high-level calculations or multi-variable logical deductions, difficult, while AI can solve them with minimal computational power; conversely, seemingly simple tasks for humans, like mimicking unconscious actions or instinctive perceptions, are challenging for AI, requiring immense computational capacity. This phenomenon highlights the essential differences between human and machine intelligence. Human intelligence encompasses not only logical reasoning abilities but also innate imaginative and intuitive capabilities, empathy, and emotional understanding, which are the original driving forces of human creativity. Humans also experience fatigue, forgetfulness, whims, and emotional fluctuations—these “flaws” constitute the affective abilities that AI can never attain. Moravec’s Paradox suggests that AI and human intelligence should not be viewed as opposing or substitutive relationships but rather as complementary entities that draw on each other’s strengths.
The ultimate significance of AI lies in expanding rather than replacing human value. AI can replace "jobs," not humans themselves. AI liberates humans from technically demanding and repetitive tasks, creating opportunities for human freedom and comprehensive development while also demanding a transformation from knowledge producers. For instance, journals' rejection of AI-written papers has sparked a cat-and-mouse game between detection and evasion, potentially driving reforms in the evaluation of research outcomes and talent. The "Four New" educational philosophy (the new engineering, medical, agricultural, and liberal arts disciplines noted above) indicates that education in the AI era should focus on "holistic education," emphasizing the protection and cultivation of uniquely human emotional experience, empathy, aesthetic ability, imagination, and creativity. In summary, AI is both a product of human knowledge production and a tool for it. The process of AI's involvement in knowledge production is one of human-machine interaction and co-construction, reshaping knowledge producers themselves.