To AI: Are You the Greatest Opportunity of My Life?

Posted in AI Insights by baoshi.rao
    In this world, some choose to see ugliness and chaos, but I choose to see the beauty.

    Let's conduct a thought experiment:

    Although humans often do foolish things, the brain is incredibly powerful, consuming only 20 watts of energy. If we were to build a computer with similar brain-like functionality, its energy consumption would be a trillion times higher.

    Question: If humans invented a technology that could halve the energy consumption of such a computer every week, how long would it take for the computer to become as efficient and energy-saving as the human brain?

    Answer: Just 40 weeks.

    2 to the power of 40 is an astonishing number.
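    To see why, note that 2^40 ≈ 1.1 × 10^12, just over a trillion. A minimal sketch in Python (assuming the trillion-fold starting gap stated above) makes the arithmetic concrete:

    ```python
    # Back-of-the-envelope check of the 40-week claim: start a trillion times
    # above the brain's ~20 W budget and halve the gap every week.
    gap = 1e12   # assumed starting ratio: brain-like computer vs. human brain
    weeks = 0
    while gap > 1:
        gap /= 2     # the hypothetical technology halves consumption weekly
        weeks += 1
    print(weeks)     # 40, since 2**40 ≈ 1.1e12 first exceeds a trillion
    ```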

    What's the secret behind this?

    Exponential technological growth.

    Over the past half-century, Moore's Law has 'miraculously' doubled the performance per unit cost of integrated circuits every 18-24 months. As a result, the smartphones we hold today have 120 million times the computing power of the Apollo spacecraft that landed on the moon.
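    As a rough illustration (Python; the 18- and 24-month doubling intervals are the commonly quoted range, and the 50-year horizon is an assumption of this sketch), the cumulative factor from steady doubling is easy to compute:

    ```python
    # Cumulative gain from doubling performance-per-cost at a steady cadence.
    YEARS = 50
    for months_per_doubling in (18, 24):
        doublings = YEARS * 12 / months_per_doubling
        print(months_per_doubling, f"{2 ** doublings:.2e}")
    # 18 -> ~1.1e10, 24 -> ~3.4e7; the 120-million-fold Apollo comparison
    # corresponds to about 27 doublings, comfortably inside this range.
    ```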

    Ray Kurzweil's core argument is the Law of Accelerating Returns.

    He believes this is a fundamental theory in information technology, following a predictable exponential growth pattern, contrary to the traditional notion that 'you cannot predict the future.'

    While many things remain uncertain (e.g., which projects, companies, or technologies will dominate the market, or when peace will come to the Middle East), the fundamental cost-performance ratio and information capacity are indeed predictable.

    Even more astonishing is that these changes are unaffected by factors like war or peace, prosperity or recession.

    Image from 'The Future of Artificial Intelligence'

    With this 'secret weapon,' Ray Kurzweil has maintained an impressive record of predictions, including accurately forecasting AI's victories over humans in chess and Go.

    Bill Gates called Ray Kurzweil 'the best person I know at predicting artificial intelligence.'

    As for the future, Ray Kurzweil has made a further series of predictions.

    Will these predictions come true?

    Historically, humanity hasn't had a great track record in predicting AI.

    In the 1960s, two ambitious 10-year goals were set: landing humans on the moon, and deciphering the workings of the human brain.

    Simon also gave a 10-year timeframe.

    As we know, the seemingly monumental moon landing succeeded, while deciphering the human brain—weighing less than 1.4 kilograms—remains a mystery today.

    Herbert Simon was no idle talker. He was a rare interdisciplinary genius, the holder of nine honorary doctorates, and a professor of computer science and psychology at top universities. He pioneered decision theory, conducted econometric research, and was a brilliant mind in both business and government.

    Herbert Simon saw an opportunity: using computers as universal processors for symbols (and thus for thought) rather than merely as fast engines for arithmetic.

    By the end of 1955, he and his collaborators invented the list-processing language for computer programming and used it to create the first computer program capable of solving non-numerical problems through selective search.

    From then on, computers could not only calculate but also 'scheme.'

    Herbert Simon explained in plain terms:

    We invented computer programs capable of thinking about non-numerical problems, thereby solving the age-old mind-body problem and explaining how a material system could possess mental attributes.

    This opened the door to automating many tasks previously achievable only by human intelligence and provided a new method for studying thought—computer 'simulation.'

    Thus began a long-standing debate: Can machines think?

    Herbert Simon believed they could.

    In 1975, he won the Turing Award, the highest honor in computer science.

    In 1978, he received the Nobel Prize in Economics.

    Yet, even the smartest minds on Earth underestimated the complexity of the human brain. To this day, intelligent machines remain far from matching it.

    Tracing the origins of 'intelligent machines' takes us back over 300 years to a man named Pascal.

    Certain individuals appear at crucial crossroads in human history, prophetically bringing together seemingly unrelated elements.

    Pascal lit three lamps related to intelligent machines:

    (1) Human self-awareness

    Pascal said humans are but reeds, the most fragile things in nature. Yet, they are thinking reeds.

    Even if the universe crushes them, humans remain nobler than what kills them.

    (2) Calculating machines

    Pascal's father was a tax supervisor. To help ease his father's heavy calculation workload, Pascal designed the Pascaline, one of the earliest mechanical calculators of the 17th century.

    (3) Probabilistic thinking and calculation

    Pascal and Fermat, through their correspondence, initiated the first substantive study of probability theory as a branch of mathematics.

    Upon reflection, Pascal's lament that humans are mere reeds and his invention of a calculating machine seem contradictory. Using precise mathematics to ponder seemingly uncertain probabilities also appears contradictory.

    This suggests that thinking about intelligence has never been a singular proposition but requires multidisciplinary approaches.

    Over 300 years later, we recognize Pascal's three lamps as corresponding to cognitive science, computers, and algorithms.

    The essence of computers is to 'mimic' human thought through code. To achieve this, we must first 'decode' thought.

    Aristotle believed logic was the foundation of all science, pioneering formal logic.

    After him, logic and mathematics diverged, and the computable and the non-computable seemed distinct. Not until Leibniz did anyone attempt to bridge the two: he combined logic and mathematics, forming a third, innovative idea, 'heterogeneous association.'

    Leibniz invented binary arithmetic capable of addition, subtraction, multiplication, division, and root extraction, laying the groundwork for symbolic logic, which later evolved into mathematical logic.
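    A small illustration (Python; the specific numbers are arbitrary) of the idea Leibniz championed, namely that ordinary arithmetic survives the move to base two:

    ```python
    import math

    # All of decimal arithmetic carries over to Leibniz's binary notation.
    a, b = 0b1011, 0b0110            # 11 and 6, written in binary
    print(bin(a + b), bin(a - b))    # 0b10001 (17), 0b101 (5)
    print(bin(a * b), bin(a // b))   # 0b1000010 (66), 0b1 (1)
    print(bin(math.isqrt(a)))        # 0b11 (3), integer root extraction
    ```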

    Leibniz was passionate about 'everything being computable,' reportedly often saying: 'Come, let us calculate.'

    In 1651, Thomas Hobbes proposed a groundbreaking idea in his masterpiece Leviathan:

    In this sense, 'reasoning' is nothing but 'calculation'—adding and subtracting the results of symbols and expressions in our minds. When we calculate independently, we call them 'symbols'; when we demonstrate and prove our calculations to others, we call them 'expressions.'

    Subsequently, great ideas and formulas intertwined, leading to the fusion of different disciplines.

    If we step beyond the modern definition of computers, we can trace back further to the 'father of the general-purpose computer,' Charles Babbage, and Ada, who wrote the earliest software for his machine.

    Perhaps because she inherited the imagination of her father, the famous poet Lord Byron, Ada made a prediction far ahead of her time: machines could not only calculate but also compose music, write poetry, weave, and perform other complex tasks.

    Looking back from Aristotle at humanity's bold explorations in inventing computers, we realize: Thinking about 'thinking machines' requires a mind that bridges science and humanities, connects different realms of wisdom, and embraces the boldest imagination.

    Today, reflecting on humanity's current historical stage in 'computation' and 'intelligence,' we can't help but marvel:

    We are incredibly lucky.

    For three reasons:

    It's all happening now.

    You might think AI is still distant—a lab concept, challenging world champions in Go, impressive but 'useless.'

    In reality, through the internet and smartphones, AI has already begun permeating our daily lives.

    When you log into Taobao, every product, promotion, and even image is displayed in an order and manner customized by an intelligent backend based on your past behavior. The moment the homepage opens, everything is already placed where you would expect to see it.

    When you log into the mobile versions of Taobao or Tmall to 'summon' the new version of AliMe, you gain access to your own personalized intelligent service assistant, enjoying a tailor-made service experience.

    'Guess What You Want to Ask' proactively determines your needs and questions based on promotions, user preferences, and shopping scenarios.

    'Service Butler' automatically pushes updates on service processes. Not only can you directly view ongoing after-sales processes, but upon completion, the results are automatically pushed to you. It integrates hotline and online data to achieve the ultimate service experience where users 'only need to come once and say it once.'
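    As a toy sketch only (Python; every product name, category, and weight below is hypothetical, and real recommendation systems use learned models over far richer signals), behavior-based ranking of a homepage might look like this:

    ```python
    # Rank candidate products by the user's affinity for their categories,
    # a crude stand-in for the behavior-driven personalization described above.
    user_affinity = {"sneakers": 5, "books": 2, "phones": 1}  # past clicks

    candidates = [
        ("running shoes", "sneakers"),
        ("sci-fi novel", "books"),
        ("fast charger", "phones"),
        ("blender", "kitchen"),
    ]

    ranked = sorted(candidates,
                    key=lambda item: user_affinity.get(item[1], 0),
                    reverse=True)
    print([name for name, _ in ranked])
    # ['running shoes', 'sci-fi novel', 'fast charger', 'blender']
    ```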

    We seem long accustomed to the convenience of Chinese e-commerce and mobile payments. For those of us living overseas, like myself, returning to China feels like a leap into a futuristic level of technology.

    Science fiction master Arthur C. Clarke once said:

    'If a respected elder scientist says something is possible, he is probably right. But if he says something is impossible, he is very likely wrong.'

    The exponential progress of digitalization and artificial intelligence has redefined the meaning of 'foresight.'

    I want to ask a question: Is Jack Ma smart?

    How did Alibaba become a technology-leading company?

    When we think of Alibaba, keywords like 'values' and 'Double 11' come to mind. But seemingly out of nowhere, Alibaba, the commerce expert, suddenly became a technology expert.

    The transformation from 'Commercial Alibaba' to 'Technological Alibaba' actually involves two mysteries: why commercial companies have become the main battleground for AI development, and how Alibaba's values extended to Alibaba Cloud and DAMO Academy.

    Let’s revisit the Year of AI: 1956.

    That year, scientists McCarthy, Minsky, Rochester, and Shannon held the 'Artificial Intelligence' workshop at Dartmouth College in the U.S., marking the official birth of AI as a new discipline. 1956 is thus known as the Year of AI.

    Since then, AI has experienced several peaks and troughs.

    First Peak (1956–1973): The first chatbot, ELIZA, was born. Computers began winning 'man vs. machine' battles in checkers.

    First Trough (1974–1980): In 1973, British mathematician James Lighthill harshly criticized robotics, language processing, and image recognition technologies, expressing disappointment over the lack of expected returns from prior AI investments.

    Subsequently, governments and institutions worldwide reduced or halted funding, plunging AI into its first winter in the 1970s.

    Second Peak (1981–1987): XCON, an expert system developed at Carnegie Mellon University to automatically configure computer components for customers, was put into practical use, marking the first step from theory to industrial application for 'expert systems.' Rumelhart, Hinton, and Williams published the backpropagation algorithm paper, laying the groundwork for the later deep learning boom.
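    For readers curious what backpropagation actually does, here is a minimal sketch (Python with NumPy; the XOR task, network size, learning rate, and iteration count are illustrative choices, not details from the 1986 paper itself):

    ```python
    import numpy as np

    # A tiny two-layer network learns XOR by backpropagation: run a forward
    # pass, push the error gradient backward layer by layer, and nudge every
    # weight downhill.
    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(10_000):
        h = sigmoid(X @ W1 + b1)                # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)     # backward pass: output layer
        d_h = (d_out @ W2.T) * h * (1 - h)      # ...then the hidden layer
        W2 -= h.T @ d_out;  b2 -= d_out.sum(0)  # gradient-descent updates
        W1 -= X.T @ d_h;    b1 -= d_h.sum(0)

    print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
    ```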

    Second Trough (1987–2005): The limitations of 'expert systems'—narrow applications, single reasoning methods, and difficulty in data acquisition—became apparent. Their inability to self-learn and update knowledge bases and algorithms led to escalating maintenance costs.

    In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov. However, the victory relied on IBM's high-speed computing resources, custom chess chips, and a team of grandmaster consultants.

    Many viewed this as a triumph of 'brute-force computing' rather than 'artificial intelligence,' since the complexity of chess pales in comparison to Go, which has more possible board positions than there are atoms in the observable universe.
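    The often-quoted comparison is easy to reproduce (Python; the branching factors and game lengths are the standard rough averages, so only the orders of magnitude matter):

    ```python
    import math

    chess = 35 ** 80    # ~35 legal moves per position, ~80 plies per game
    go = 250 ** 150     # ~250 legal moves per position, ~150 plies per game
    atoms = 10 ** 80    # rough atom count of the observable universe

    print(round(math.log10(chess)))   # ~123
    print(round(math.log10(go)))      # ~360, vastly beyond the atom count
    ```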

    Deep Blue developer Feng-hsiung Hsu predicted in 2002: 'Go is so difficult that it may remain unsolved for the next 20 years.'

    Third Peak (2006–Present, the AI Spring): Advances in computer performance and GPU acceleration removed computational barriers to neural networks. The 2012 ImageNet Challenge brought deep learning and big data to the forefront, attracting massive investments.

    In 2016, AlphaGo shattered Hsu's prediction by defeating the world Go champion, stunning the world.

    AlphaGo leveraged breakthroughs in deep learning and reinforcement learning, combined with Google's parallel computing power, far surpassing Deep Blue's 'intelligence.'

    AlphaGo's creator, DeepMind, was co-founded by Demis Hassabis in 2010 with the aim of developing general learning algorithms to solve real-world problems: a bold and ambitious goal.

    Looking back, we uncover a secret: AI is a somewhat vague concept.

    This vagueness has led researchers to form factions, even criticizing one another, hindering progress.

    Blay Whitby writes in Artificial Intelligence: A Beginner's Guide:

    'The field of AI problems is vast, and so are its applications... Yet AI researchers often limit themselves to one tool in their kit.'

    Why does such a young field have factional disputes?

    One reason: Researchers compete fiercely for funding.

    The good news is that in the business world, integrating different technical approaches is feasible.

    This answers one mystery: Why are commercial companies the main battleground for AI development globally?

    Now, the other mystery: How did Alibaba's values extend to Alibaba Cloud and DAMO Academy?

    One reason is 'vision.' As a non-technical outsider, Jack Ma wasn't constrained by technical details, which let him think about algorithms and AI in broad strokes.

    Another reason: 'Commercial Alibaba' and 'Technological Alibaba' share the same DNA—a willingness to experiment and fail.

    Commercial Alibaba is a company unafraid of failure, as seen in its pivot from 'Laiwang' to 'DingTalk.'

    Technological Alibaba believes in the scientific method.

    Science is about continually admitting mistakes and embracing new, more generalized models.

    As Lewis Dartnell puts it: 'Science is so useful for understanding the world that the scientific method itself is the greatest invention.'

    DAMO Academy may become the Bell Labs of the AI era.

    As an independent entity within a company, Bell Labs birthed the transistor, laser, solar cell, LED, digital switch, communication satellite, computer, cellular technology, and more.

    Bell Labs scientists won 8 Nobel Prizes—7 in Physics and 1 in Chemistry.

    A legendary institution.

    Claude Shannon, the father of information theory, developed that theory at Bell Labs, embodying its ethos: researching science that could lead to products.

    DAMO Academy is defined by Alibaba as 'a research institution exploring future technologies for humanity's vision.'

    The name "DAMO Academy" was personally chosen by Jack Ma, reflecting the high expectations he holds for it. In Jin Yong's martial arts world, DAMO Academy represents the highest institution of martial arts studies at Shaolin Temple, embodying the essence of benefiting the world through martial arts research. This vision of observing the world and aiding humanity is also ingrained in the founding principles of DAMO Academy.

    In just two years, DAMO Academy has delivered impressive results. By September 2019, it had published over 450 papers at international top-tier academic conferences and won more than 40 world-first rankings in fields such as natural language processing, intelligent speech, and visual computing. It has become Alibaba's "technological cornerstone" for gathering scientific talent and conducting fundamental research.

    The DAMO Academy Machine Intelligence Laboratory has built a comprehensive algorithmic system over the past two years, covering speech intelligence, language technology, machine vision, and decision intelligence, achieving multiple world-leading results. These include self-developed speech recognition algorithms (DFSMN), winning five first places in the international WMT machine translation competition, and securing the championship in the WebVision competition, often referred to as the "World Cup of AI."

    Particularly noteworthy is the DAMO Academy Quantum Laboratory's completion of its first controllable quantum bit, designed, fabricated, and measured entirely in-house. The lab is also advancing research on quantum chips.

    In this era of exponential growth in digitalization and intelligence, what miracles will DAMO Academy create next?

    Since the AlphaGo craze in 2016, people have begun examining AI from various perspectives. Turing Award winner Judea Pearl argues that current AI is merely curve-fitting, not true intelligence. He believes that unless algorithms and machines can reason about causality or conceptualize differences, their utility and generality will never approach human levels.

    Nobel laureate Thomas Sargent states that AI is essentially statistics dressed in fancy terminology. USC professor Bart Kosko adds that AI amounts to old algorithms running on fast computers, emphasizing that machines don't think but function like input-output systems.

    However, these critiques may overlook an important reality: whether or not current AI qualifies as 'true intelligence,' it is undeniably useful. For China, which never fully completed an earlier wave of informatization, AI and big data are filling the gaps in its digital transformation.

    DAMO Academy's research rests on massive real-world scale: Alibaba's AI services have been invoked over a trillion times and serve a billion people, arguably making Alibaba the largest AI company by usage.

    I am optimistic about AI's future and agree with Demis Hassabis that a world without AI would be bleak. I am also a participant through my edtech startup, Future Ivy, which aims to be an educational assistant for millions of Chinese families.

    DAMO Academy's cloud intelligence serves as foundational infrastructure, making AI capabilities easy to access. I envision leveraging these technologies to bring quality education to Chinese children.

    Jack Ma once said, "Animals rely on instinct, machines on intelligence, but humans must uphold their wisdom." The future of AI—whether an angel or a demon—depends on collective human consciousness and action. As in HBO's "Westworld," while AI and humans may clash, the choice to see beauty amidst chaos remains a testament to human warmth.
