Will We Form Families with AI? Han Song's Question Goes Far Beyond 'Sci-Fi' Topics
Somewhere out of sight, the hot topics on social media seem to have quietly shifted once again.
The internet industry remains in its 'winter,' but leading companies in fields like chip manufacturing and artificial intelligence still manage to draw everyone's gaze.
No one would have imagined that personnel changes among OpenAI's top brass and the internal power struggles at Microsoft would be followed on the Chinese internet like episodes of the palace drama Empresses in the Palace, sparking so much speculation and discussion.
Perhaps this is because, as AI technology achieves real breakthroughs and begins to reach the general public through consumer applications, society is gradually realizing one thing:
With the explosive development of artificial intelligence, Artificial General Intelligence (AGI) that matches or even surpasses human intelligence in all aspects could emerge in the near future. The Matrix in The Matrix, MOSS in The Wandering Earth, the 'Old Thing' in Interstellar...
When these superintelligences truly arrive, how will we coexist with them?
It's highly likely that society will need to establish a new set of behavioral norms: how to define AI legally, how to treat AI ethically, and whether humans and AI should be genuinely equal or arranged in a hierarchy...
If so, who should take the lead?
Recently, science fiction writer Han Song raised this question:
"What impact will artificial intelligence and the metaverse have on gender relationships in the future? Will the concept of family disappear?"
Love and family are often considered the core domains of traditional social and human emotions, representing the most private aspects of human life.
In Hollywood's Fast & Furious series, the protagonist's combat prowess surges at a simple cry of "family." You might mock the script as absurd, but you don't reject the underlying theme: emotional bonds are closely tied to human potential.
The subtext of Han Song's question is this: new forms of intelligent life may penetrate this core realm of human consciousness.
If the information provided by technical experts is accurate, we need to seriously consider some questions. For example, will we form families with AI?
This is not just a legal issue but also concerns the survival of 'anthropocentrism.' The complexity of social ethics in the AI era is something most people are not yet prepared for.
Earlier this year, Elon Musk announced the launch of a catwoman robot, designed to be the 'perfect companion' for male consumers. This robot, with the sexy appearance of a catwoman, is claimed to handle household chores, manage the owner's life, and even bear children.
However, no one asks, 'Can I form a family with a catwoman robot?' because it remains an AI device: a commodity without personhood, not an intelligent life form on equal footing with humans.
When we discuss love and family in the context of AI, we inevitably mean 'new species' that pass the Turing test, develop self-awareness, and even far surpass human intelligence.
To date, however, most literary and artistic works have not actually depicted such AI; we have only a handful of vague 'robot girlfriend' archetypes.
Japanese anime and light novels follow a recurring pattern, exemplified by early works like Chobits. The story typically pairs an otaku with a moe female robot companion; through their interactions the machine gradually learns human emotion and, through its bond with the male protagonist, completes its evolution into 'becoming human.'
This remains a typical anthropocentric narrative. Apart from being silicon-based, those humanoid computers are no different in kind from the 'alien species' of traditional fantasy:
They possess supernatural abilities and physical strength far beyond ours, but not necessarily intelligence that surpasses ours. On core matters such as emotion, they often appear more naive than humans and consistently show a submissive admiration for human ways of feeling.
Will super AI be the same? This is precisely the question behind the 'alignment problem' that leading research institutions such as OpenAI have been trying to solve.
Traditional AI training is kept in check through reinforcement learning from human feedback (RLHF), in which humans, aided by various tools, evaluate and supervise the model's outputs.
However, for a super AI that is "much smarter" than we are, and assuming human intelligence stays constant (or improves only very slowly), RLHF breaks down once AI capability passes a certain threshold: humans can no longer effectively evaluate what the AI produces.
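To make that loop concrete, here is a minimal, purely illustrative sketch of the preference-learning step at the heart of RLHF. It assumes a toy setup in which each candidate answer has been reduced to a small feature vector and a linear 'reward model' is fitted to human pairwise choices (Bradley-Terry style); the names and data are invented for illustration and do not reflect any lab's actual implementation.

```python
import math
import random

def reward(w, features):
    """Scalar reward: dot product of learned weights and answer features."""
    return sum(wi * fi for wi, fi in zip(w, features))

def train_reward_model(comparisons, dim, lr=0.1, epochs=200):
    """Fit weights so that human-preferred answers score higher.

    comparisons: list of (features_preferred, features_rejected) pairs
    labeled by human raters. Gradient ascent on the Bradley-Terry
    log-likelihood log(sigmoid(reward(preferred) - reward(rejected))).
    """
    w = [0.0] * dim
    for _ in range(epochs):
        random.shuffle(comparisons)
        for chosen, rejected in comparisons:
            margin = reward(w, chosen) - reward(w, rejected)
            scale = 1.0 / (1.0 + math.exp(margin))  # equals 1 - sigmoid(margin)
            for i in range(dim):
                w[i] += lr * scale * (chosen[i] - rejected[i])
    return w

# Toy data: raters prefer answers with more of feature 0 and less of feature 1.
data = [([1.0, 0.2], [0.3, 0.9]), ([0.8, 0.1], [0.2, 0.7])]
weights = train_reward_model(data, dim=2)
print(weights)  # feature 0 should end up positive, feature 1 negative
```

In practice the reward model is a large neural network and its scores are then used to fine-tune the policy with reinforcement learning, but the weak link the article points to is unchanged: every preference label ultimately comes from a human judge.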
Currently, Jan Leike, the head of OpenAI's 'Superalignment' team, has proposed dedicating 20% of OpenAI's computing power to building a 'superalignment' system in which AI evaluates AI outputs, establishing a scalable oversight framework.
Leike's approach introduces randomized controlled trials with deliberately tampered answers: by planting known flaws into answers, researchers can measure whether scalable oversight actually catches them. On paper, at least, the experts' proposals do look promising.
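The measurement idea can be sketched just as simply. Assuming we already have some oversight procedure that scores answers, we can deliberately corrupt a copy of each answer and count how often the overseer ranks the corrupted copy lower. The corrupt and toy_oversee functions below are hypothetical stand-ins for illustration, not anything drawn from OpenAI's published tooling.

```python
import random

def corrupt(answer: str) -> str:
    """Plant a known flaw by deleting one sentence at random (illustrative only)."""
    sentences = [s for s in answer.split(". ") if s]
    if len(sentences) > 1:
        sentences.pop(random.randrange(len(sentences)))
    return ". ".join(sentences)

def detection_rate(answers, oversee) -> float:
    """Fraction of deliberately flawed answers the overseer scores lower than
    the originals; a crude proxy for how well the oversight procedure holds up."""
    detected = sum(1 for a in answers if oversee(corrupt(a)) < oversee(a))
    return detected / len(answers)

# Toy overseer that simply prefers longer answers (obviously far too weak in practice).
toy_oversee = len

answers = [
    "Water boils at 100 C at sea level. Higher altitude lowers the boiling point.",
    "The Moon has no atmosphere. Temperatures there swing by hundreds of degrees.",
]
print(detection_rate(answers, toy_oversee))
```

A stronger overseer, including an AI-assisted one, should push this detection rate toward 1.0; that, roughly, is what 'scalable oversight' asks for.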
However, the idea of keeping AI on a 'tight rein' is not entirely reassuring. As early as the 1940s, Isaac Asimov proposed the famous 'Three Laws of Robotics' in the stories later collected in I, Robot, ensuring peaceful coexistence between robots and humans through something like 'initial implantation.'
Interestingly, for decades, almost all literary and artistic works have imagined the same thing: how robots might break through this safety barrier to harm humans.
Even the recent Chinese TV series The Bionic World has not strayed from this cliché...
Humans are truly fascinating creatures. As imperfect 'creators,' we are filled with wariness and fear towards our own creations, yet we persistently forge ahead in creating machines that may surpass us.
The spirit of Homo sapiens constantly 'tempting fate' on the evolutionary path has long been ingrained in the species' DNA. It is precisely this relentless drive that has enabled humanity to build a glorious civilization.
So... can a creator fall in love with their creation? Clearly, it's possible. AI represents humanity's 'superego,' and for humans, falling in love with the 'superego' is a commonplace occurrence.
Various mythologies contain similar tales: fairies, elves, and goddesses bathe in the mortal world, a young man steals their garments, and love follows. That is how the Cowherd of the Eastern tale won his Weaver Girl, just as Celtic myth tells of fishermen stealing selkies' sealskins to keep them on land...
In these stories, it's logical for mortals to fall in love with goddesses, as these deities represent perfected humans with supernatural powers and breathtaking beauty—everything crafted according to humanity's most cherished ideals.
The saying that gods share human form and temperament is better put the other way around: deities were shaped by humanity's "superego imagination" in the first place, which is why there is no emotional barrier between gods and humans. Zeus still admires the beauty of mortal women, turning into a swan to possess Leda; Aphrodite still dotes on beautiful youths; the Weaver Girl is content with her Cowherd...
With AI, humanity's position seems reversed, yet AI shares the same fundamental attribute as those deities: both are products of humanity's "superego imagination."
In The Three-Body Problem, Luo Ji sketched out his perfect companion, Zhuang Yan, the embodiment of his deepest adoration. Goddesses and AI alike are essentially humanity's self-created "Zhuang Yan."
Then, we must confront another question: Will super AI fall in love with humans?
Humans have many reasons to love AI: it can take on beautiful appearances and wield extraordinary capabilities, fulfilling our "superego fantasies." But what reason would AI have to love humans?
As Huang Tiejun, Director of the Peking University Institute of Intelligence, stated in response to Tencent News' "20 Years, 20 People, 20 Questions" series: "After the emergence of superintelligence, the issues become highly complex. Beyond 'Can we trust AI?' there's also the question of 'How can AI trust humans?'"
Director Huang explained that, much as trust is built between people, humans and AI need to go through a breaking-in period: "listening to their words" and "observing their actions," that is, sharing the human knowledge system constructed through language and then verifying in practice whether what the AI says matches reality. Only then can mutual trust be established. This, in fact, is how trust systems in human society are gradually built.
Regarding the question of humans falling in love with AI, the answer is similar. Whether AI can love humans depends not only on how AI operates and evolves but also on human factors themselves.
Professor Ma Zhaoyuan of the Southern University of Science and Technology gave an answer grounded in emotion in '20 Years, 20 People, 20 Questions': we are increasingly aware that the reason humanity can make its way through the vast universe on such a tiny planet is the greatness of human wisdom.
This wisdom encompasses rationality but is not confined by it. Because we understand the limits of rationality, we also know that those limits must be compensated for and guided by human emotion. This may well become the norm of our future coexistence with machines.
We guide them and divide the labor with them; they take on more and more tasks with ever greater reliability (trustworthiness), and in doing so grant human emotion greater freedom.
We might speculate from this that it's possible for super AI to fall in love with humans.
Just as the generation of AI intelligence remains an unknown 'black box' to us, there may also be a 'black box' aspect of humans that pure rationality cannot comprehend for AI.
We are eager to decipher AI's 'black box,' and AI may in turn try to decipher humanity's. In this mutual deciphering, AI might not only achieve 'alignment' with human values and wisdom but also develop emotional faculties akin to our own, enough to cross the 'species' barrier and appreciate, even admire, the intelligent life it resonates with.
Much remains unknown to this day, but it is precisely in that unknown 'X' that the things which make the world fascinating lie hidden. If goddesses can fall in love with humans, why can't AI?
Returning to the question science fiction writer Han Song posed at Tencent News' 20th-anniversary special event '20 Years, 20 People, 20 Questions': 'What impact will artificial intelligence and the metaverse have on gender relationships in the future? Will the concept of family disappear?'
His friend Chen Qiufan responded by suggesting that the metaverse inherently contains assumptions about future gender and family relationships, which will become increasingly diverse.
Gender relationships will no longer be confined to physical existence; emotional connections and spiritual resonance will dominate. At the same time, there will be more 'cross-boundary' phenomena in gender relationships, such as the blurring line between reality and virtuality in The Matrix, which future couples may transcend.
In reality, "love" itself is a concept difficult to discuss. When we talk about "possibility," love is always possible.
It can arise between feuding families, between celebrities and the destitute, across every gender, and even across the species barrier between humans and extraterrestrial life or non-living things... Love transcends utility, class, and rationality; in a sense, that means it can always leap beyond any logical framework.
If AI truly evolves into superintelligence, will it develop this unstable factor called "love," or discard it as irrelevant data? Judging by the current push to "align with humans," the former seems more likely.
Relatively speaking, 'family' is a more rational topic.
In The Origin of the Family, Private Property and the State, Engels argued that the essence of the family is a property relationship: once productivity advanced far enough for surplus products to be retained, public ownership gradually gave way to private ownership. In that sense, pairing marriage, the clan, and the family are byproducts of private ownership.
Family is a fluid concept—feudal families differ greatly from modern families, and primitive clans are also distinct from feudal families. This conceptual evolution fundamentally adapts to changes in social structures.
Ethics, in turn, serves the social structure. Max Weber's The Protestant Ethic and the Spirit of Capitalism makes an important argument: it ties the profit-seeking spirit of capitalism, and its ultimate legitimacy, to religious forms, holding that commercial profit accords with the Protestant ethic and can even be treated as life's ultimate end. The old religious ethic had to serve nascent capitalism; all history is contemporary history.
So, will AI's involvement in human emotions and the widespread integration of humans and AI change the fundamental form of families, or even make families disappear?
If we think about this question at its core, what it's really asking is: Will AI change private ownership? Will the 'spirit of capitalism' come to an end?
The answer is quite straightforward: within the foreseeable future, the emergence of super AI will not undermine private ownership. The stability of family ethics fundamentally stems from property systems, not love or anything else.
For example, when fathers are no longer the absolute core of family income, feudal patriarchal families gradually disintegrate. However, modern societies still retain the family structure, indicating that modern people still need this way of organizing and distributing property.
Whether AI can disrupt existing family structures depends on how far it can participate in the distribution of social wealth. As long as AI itself remains a form of 'property,' talk of diversified family forms is implausible.
Chen Qiufan speaks of a more distant future: only when AI breaks through its physical limitations and gains a say in the distribution of wealth might we see more community-style families emerge, formed around shared interests, values, or lifestyles rather than blood ties.
By that time, will society still maintain fundamental private ownership? If the foundation of private ownership is shaken, it's entirely conceivable that the concept of family could become a relic of history.
'20 Years, 20 People, 20 Questions' is just the beginning. In the foreseeable future, humanity will discuss AI with ever greater frequency and intensity. Throughout history, social transformations driven by leaps in productivity have always been among the weightiest topics, without exception.
Love and family are only small pieces of this grand question, a question of how we understand society and ourselves, and one that will often require prolonged contemplation before even provisional conclusions emerge.
One thing is certain: we must never stop asking questions.