What Has DeepMind Done This Time to Make AI Think Like Humans?

baoshi.rao wrote:
    The road is long and arduous, and the dawn of AGI (Artificial General Intelligence) will not arrive soon. But it is precisely the efforts of AI research institutions like DeepMind that keep the flame of hope alive for explorers in the dark night.

    On the path to AGI, there is always a vast chasm standing before researchers: AI's understanding of causality. Inferring causality is an extremely complex problem even for humans themselves.

    Whether through deductive or inductive reasoning, humans can always construct intricate causal inferences from complex relationships. Regardless of correctness, this ability sets humans apart, making us the solitary star at the top of the biological chain.

    Imagine taking your daughter to a summer camp, where you meet an adult woman with a young girl. You might conclude that the woman is the girl's mother. A few weeks later, you see the same girl again at a coffee shop near your home, but this time she is with an adult man.

    Based on these two observations, you can infer that the man and woman are somehow related. However, determining whether they are spouses or have another relationship would require more information, such as their ages, clothing styles, and the level of intimacy with the child.

    In this scenario, we can make causal inferences based on factual relationships observed across extended periods and locations. We call this type of reasoning in complex relationships "long-range reasoning."

    For current AI technologies, recognizing a face against a database is straightforward, and if provided with a knowledge graph of the people's identities, AI can identify the relationships between them as well. But given only the scenario above (presented to the AI as separate images), with no additional information, can AI establish causal reasoning from the observed facts?

    This time, DeepMind's latest research proposes a solution for AI to perform "long-range reasoning."

    Recently, DeepMind published a paper submitted to the ICLR 2020 conference titled "MEMO: A Deep Network for Flexible Combination of Episodic Memories." The paper introduces a new architecture—MEMO—that enhances the reasoning capabilities of existing deep neural networks.

    MEMO possesses the ability for long-range reasoning, meaning it can identify distant relationships among multiple facts in memory.

    So, how does MEMO perform in practice? What does this new deep neural network mean for AI development? These questions still require our reflection and answers.

    Why is long-range reasoning so important?

    To discuss the importance of long-range reasoning, we first need to understand the meanings and relationships of perception, memory, naming, facts, judgment, reasoning, and action.

    Historian Yuval Harari argued in Sapiens that humans' capacity for fiction (the ability to imagine things that do not exist) was the decisive factor in Homo sapiens' victory over other species, though this explanation rather oversimplifies a complex matter. If you examine your daily life carefully, you'll find that we almost always think and act through exactly these cognitive abilities.

    We perceive the external world through our senses, forming perceptions that are merely sensory elements in time and space. Our brains then distinguish and name the elements we pay special attention to (while other information becomes background), forming facts. Through the brain's connective abilities, these named elements are linked with logical words to form judgments. Then, by summarizing past experiences and imagining the future, we form reasoning, which leads to plans, actionable steps, and finally, action.

    Of course, this entire process happens almost instantaneously, leading many to overlook the complexity of cognition.

    Here's a vivid example. While writing this article, my two-and-a-half-year-old daughter was in the living room holding a plastic spray bottle and shovel, pretending to scoop and spray "water" into the air while saying words like "water," "wet," and "wipe."

    My daughter, like humanity in its childhood, has learned to observe and distinguish objects around her, then name them (cup, shoe, bottle), and even understand relationships between objects to make causal inferences (a bottle can hold water, a shovel can move things).

    Most impressively, she can imagine things that don't exist, like pretending to scoop "water" with the shovel and pour it from the spray bottle, "wetting" the floor or her shoes. She even "conditionally" remembers the adults' repeated admonition to "wipe up wet things" and tries to find something to clean up the non-existent "water."

    While many animals can make and use tools more skillfully than a two-year-old, they still cannot match a human child's ability to imagine, reason, and plan for things that haven't actually happened. This uniquely human talent is truly astonishing and something to be proud of.

    Drawing on Turing Award winner Judea Pearl's distinction in The Book of Why: The New Science of Cause and Effect between three levels of human cognitive abilities—seeing, doing, and imagining—let's delve deeper into what current AI can and cannot do.

    The first level is observation: observing facts A and B, then forming judgments X and Y about them, which together lead to conclusion Z.

    For example, the classic philosophical syllogism: we observe a person (fact A) called Socrates (fact B), leading to "Socrates is a man" (judgment X). We also know the irrefutable truth: "All men are mortal" (judgment Y). Finally, we conclude: "Socrates is mortal" (conclusion Z).
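
    To make this level-one machinery concrete, here is a toy sketch in Python, entirely an illustration of the syllogism rather than anything from Pearl or DeepMind, treating deduction as mechanical rule application over stored facts:

    ```python
    # Toy illustration of level-one reasoning: conclusions fall out of
    # mechanically applying known rules to observed facts.
    # All names here are hypothetical.

    facts = {("is_a", "Socrates", "man")}        # facts A and B, as one triple

    # Judgment Y, "All men are mortal", encoded as a rule.
    def all_men_are_mortal(fact):
        pred, subj, obj = fact
        if pred == "is_a" and obj == "man":
            return ("is_mortal", subj)
        return None

    def forward_chain(facts, rules):
        """Apply every rule to every fact until no new facts appear."""
        derived = set(facts)
        grew = True
        while grew:
            grew = False
            for rule in rules:
                for fact in list(derived):
                    new = rule(fact)
                    if new is not None and new not in derived:
                        derived.add(new)
                        grew = True
        return derived

    print(forward_chain(facts, [all_men_are_mortal]))
    # {('is_a', 'Socrates', 'man'), ('is_mortal', 'Socrates')} <- conclusion Z
    ```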

    Don't underestimate this ability. It is through such powerful judgment that humans form experiences, enabling us to triumph in the harsh process of natural selection.

    The second level is intervention: intervening on judgment X or Y in order to change result Z. Continuing the example, suppose we intervene in "Socrates is a man" by deifying him, as happened with Jesus. Then even if he drank poison and died, because he "became a god," we could conclude that "Socrates did not die."

    This hypothetical scenario may seem absurd, but it is precisely these abilities that allow us to engage in breeding, agriculture, mining, and the establishment of religions, city-states, and empires. Human civilization's interventions in nature over a few hundred years have surpassed the impact of millions of prior years.

    The third level is counterfactual reasoning, which involves human imagination and reflection: given that judgments X and Y lead to conclusion Z, we ask how Z would change had X or Y not held.

    Suppose humans invent a time machine and an immortality drug. We travel back to ancient Athens, replace the poison with the immortality drug, and give it to Socrates. Judgment Y is overturned, and conclusion Z changes.

    It is these exaggerated imaginations that enable humans to propose scientific hypotheses, construct knowledge systems like relativity and quantum mechanics, and create literature and art.

    So, at which level is current AI in mimicking human intelligence? The more optimistic you are about AI, the more disappointing the answer.

    Even the most successful deep learning algorithms remain at the first level of these three cognitive abilities, with intelligence comparable to an owl observing whether a mouse is present.

    Although machine learning, especially deep learning, surpasses humans in areas like image recognition, speech recognition, autonomous driving, and board games, its model is still "driven by a series of observations, aiming to fit a function... Deep neural networks merely add more layers to the complexity of fitting functions, but the fitting process is still driven by raw data... Any system operating on the ladder of causality inevitably lacks this flexibility and adaptability."

    This means that machine learning and deep neural network algorithms only fit correlations in input data without understanding causality. Thus, AI cannot ascend from the first level of cognition to the second, unable to answer questions about interventions.
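
    A small worked example makes this concrete. The sketch below, an illustration of Pearl's point rather than anything from the MEMO paper, builds a tiny structural causal model in which a hidden common cause Z drives both X and Y. Fitting the observational correlation P(Y|X) then gives a different, inflated answer compared with the interventional quantity P(Y|do(X)):

    ```python
    # A minimal structural causal model: Z -> X, Z -> Y, X -> Y.
    # Illustrative only; the probabilities are arbitrary choices.
    import random

    random.seed(0)

    def sample(do_x=None):
        z = random.random() < 0.5                        # hidden common cause
        x = do_x if do_x is not None else (random.random() < (0.9 if z else 0.1))
        y = random.random() < (0.2 + 0.3 * x + 0.4 * z)  # X and Z both raise Y
        return z, x, y

    n = 100_000
    obs = [sample() for _ in range(n)]

    # Level one: condition on X=1 in observational data (picks up Z's effect).
    p_y_given_x = sum(y for z, x, y in obs if x) / sum(1 for z, x, y in obs if x)

    # Level two: intervene, forcing X=1 regardless of Z.
    intv = [sample(do_x=True) for _ in range(n)]
    p_y_do_x = sum(y for z, x, y in intv) / n

    print(f"P(Y=1 | X=1)     ~ {p_y_given_x:.2f}")   # about 0.86 here
    print(f"P(Y=1 | do(X=1)) ~ {p_y_do_x:.2f}")      # about 0.70 here
    ```

    A function fitted purely to the observational data would report the first number; answering an intervention question requires the second, which no amount of curve fitting on observations alone will produce.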

    While the background above may seem lengthy, it shows where MEMO's long-range reasoning capability sits among the three cognitive levels. MEMO represents a successful attempt to give deep neural networks long-range causal reasoning, potentially serving as a better stepping stone for AI's transition from the first cognitive level to the second.

    First, MEMO draws inspiration from the 'associative inference' capability in neuroscience, particularly from recent studies on the hippocampus. The hippocampus stores memories independently through a process called 'pattern separation' to minimize interference between memories. Recent research also shows that these independently stored memories are retrieved and integrated via a recurrent mechanism, enabling flexible combinations of individual experiences to infer unobserved relationships—ultimately forming reasoning.

    DeepMind researchers state that they were inspired by this neuroscientific model to study and enhance reasoning in machine learning models. Compared to prior reasoning systems, MEMO introduces two new components:

    1. First component: MEMO adopts the basic structure of End-to-End Memory Networks (EMN) for representing external memory, but incorporates a novel task, Paired Associative Inference (PAI), inspired by the hippocampal mechanisms above. Individual memory elements can then be flexibly weighted, enhancing reasoning.

    2. Second component: To address excessive computation time, MEMO introduces a "REMERGE" (re-emergence) model inspired by human associative memory. Retrieved memory content is recycled as a new query, and the difference between retrievals at successive time steps determines whether the network has settled to a fixed point; a minimal sketch of this retrieval loop follows this list.
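
    Here is that retrieval loop as a minimal sketch. The array sizes, the dot-product attention, and the convergence test are illustrative assumptions, not DeepMind's exact MEMO implementation:

    ```python
    # REMERGE-style retrieval: recycle each readout as the next query
    # until successive retrievals stop changing (a fixed point).
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_items = 16, 8
    memory = rng.normal(size=(n_items, d))       # separately stored facts

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def read(query, memory):
        """One hop: attend over all stored facts, return a weighted readout."""
        scores = memory @ query / np.sqrt(d)     # dot-product attention
        return softmax(scores) @ memory

    def remerge_retrieve(query, memory, max_hops=10, tol=1e-4):
        """Iterate retrieval so distant facts can bridge via shared items."""
        for hop in range(max_hops):
            readout = read(query, memory)
            if np.linalg.norm(readout - query) < tol:   # retrieval stabilized
                break
            query = readout                             # re-emergence step
        return readout, hop + 1

    answer, hops = remerge_retrieve(rng.normal(size=d), memory)
    print(f"settled after {hops} hop(s)")
    ```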

    MEMO employs a 'halting policy,' where the network outputs an action (in reinforcement learning terms) indicating whether to continue computing or answer the task. A binary halting random variable is introduced to minimize expected computation steps.
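
    A hedged sketch of that halting idea follows, with a simple sigmoid gate standing in for the learned policy; the paper trains this policy with reinforcement learning, and every name below is hypothetical:

    ```python
    # Adaptive computation via a binary halting variable: at each hop the
    # network predicts a stopping probability from its current state and
    # flips a coin with that probability to decide whether to answer now.
    import numpy as np

    rng = np.random.default_rng(1)
    d, n_items = 16, 8
    memory = rng.normal(size=(n_items, d))

    def attend(query):
        scores = memory @ query / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        return (weights / weights.sum()) @ memory

    def run_with_halting(query, w_halt, max_hops=10):
        for hop in range(max_hops):
            readout = attend(query)
            p_halt = 1.0 / (1.0 + np.exp(-(readout @ w_halt)))  # P(stop | state)
            if rng.random() < p_halt:      # halt: answer the task now
                return readout, hop + 1
            query = readout                # continue computing
        return readout, max_hops           # hard cap on computation

    w_halt = rng.normal(size=d)            # a learned parameter in practice
    _, steps = run_with_halting(rng.normal(size=d), w_halt)
    print(f"answered after {steps} hop(s)")
    ```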

    With these components, MEMO achieves superior performance in three empirical tests:

    1. Paired Associative Inference: On smaller query sets, MEMO matches the accuracy of the DNC (Differentiable Neural Computer), outperforming EMN (even with 4-10 hops) and the Universal Transformer (UT). On longer sequences, MEMO is the only architecture capable of solving complex queries.

    2. Shortest Path in Random Graphs: On graphs with 10 nodes, DNC, UT, and MEMO all perform perfectly. On graphs with 20 nodes, MEMO surpasses DNC by over 20% on highly connected graphs.

    3. bAbI question answering: On the 10k training set, MEMO is the only model that successfully answers complex queries over longer sequences.

    MEMO's innovation lies in its neuroscientifically inspired architecture for associative inference, validating the hypothesis that reasoning arises from flexible combinations of separately stored memory elements.

    From its inception, AGI has been DeepMind's goal. Co-founder Demis Hassabis has advocated a systems neuroscience approach to AGI, focusing on understanding the brain's "software" rather than its "hardware." MEMO's success suggests that simulating neural mechanisms could be a viable path to AGI, though challenges like causal reasoning remain.

    MEMO is a crucial step toward bridging this gap. To reach AGI, deep learning must delve into the valleys of human causal cognition—leaps of association, conditional interventions, counterfactual reasoning—before ascending again.

    The road is long, and AGI's dawn is distant. Yet, efforts by institutions like DeepMind keep the flame of exploration alive in the dark.
