AI Large Models Will Create New Jobs: Human Auditors

baoshi.rao wrote:

    The possibility of AI replacing existing jobs has made many white-collar workers uneasy.

    However, the value of a new technology lies not in taking away jobs but in creating more opportunities for society. For example, the most eye-catching technology today—AI large models—will create a new job role: human auditors.

    With the continuous advancement of artificial intelligence, AI large models have become one of the hottest fields. AI large models are deep learning models with a massive number of parameters, typically hundreds of billions to trillions, capable of processing vast amounts of data and exhibiting strong generalization. To date, AI large models have achieved significant results in fields such as natural language processing, computer vision, and speech recognition.

    Among these, the most representative models are the GPT series, including GPT-3 and GPT-4. These models boast powerful natural language processing capabilities, generating high-quality text and achieving top performance in multiple NLP tasks. Additionally, models like BERT and T5 are widely used in natural language processing.
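
    To make "hundreds of billions of parameters" concrete, here is a back-of-the-envelope footprint calculation (an illustrative aside, not from the original post; the figures are the commonly cited GPT-3 parameter count and a round trillion):

    ```python
    # Rough memory footprint of large-model weights (illustrative figures only).
    BYTES_PER_PARAM_FP16 = 2  # each parameter stored as a 16-bit float

    for params in (175e9, 1e12):  # GPT-3-scale and a trillion-parameter model
        weight_gb = params * BYTES_PER_PARAM_FP16 / 1e9
        print(f"{params / 1e9:,.0f}B parameters -> ~{weight_gb:,.0f} GB of weights in fp16")
    ```

    Weights alone, before optimizer state and activations, already exceed any single accelerator's memory at this scale, which is part of why so few companies can train such models.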

    General-purpose AI large models require massive amounts of data and computational power. The former relies on historical accumulation, while the latter demands substantial financial investment. Few companies worldwide possess both resources simultaneously.

    In the U.S., Microsoft and Google are competing for dominance in the general-purpose AI large model market. In China, Baidu's Wenxin 4.0 represents the highest level of general-purpose AI large models.

    Meanwhile, other AI companies are focusing on another market: vertical small models (industry-specific small models).

    Vertical small models, or industry-specific small models, are limited to a single vertical industry or specific tasks, such as healthcare, employment, or education. They can even target narrower tasks, like resume writing or financial report analysis.

    General-purpose large models and vertical small models are two distinct types. The former can be applied across multiple domains, while the latter is tailored for specific fields. Each has its advantages and disadvantages, suited for different scenarios.

    General-purpose large models excel in performance and versatility but require significant computational and data resources, making training costly. Vertical small models perform better in specific domains, require fewer resources, and have lower training costs but are limited in scope.

    Currently, vertical small models are developing rapidly. Various industries are creating models for specific tasks, such as medical image analysis in healthcare, risk assessment in finance, or student tutoring in education.

    General-purpose large models and vertical small models are interconnected. The former can serve as a foundation for the latter, providing rich and flexible underlying capabilities. Vertical small models can optimize and extend general-purpose models to better meet specific needs. In practice, both can work together for improved results.

    For example, despite Baidu Wenxin 4.0's vast data, it still relies on its "Wenxin Qianfan Open Platform" to collaborate with partners and develop vertical small models for specific industries.
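
    As a minimal sketch of this division of labor (every class, size, and data point below is a hypothetical placeholder, not Wenxin's or anyone's actual architecture): a vertical model can reuse a frozen general-purpose base and train only a small domain-specific head on industry data.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a pretrained general-purpose base model.
    # In reality this would be a large transformer loaded from a checkpoint.
    class GeneralBase(nn.Module):
        def __init__(self, vocab_size=1000, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
                num_layers=2,
            )

        def forward(self, token_ids):
            return self.encoder(self.embed(token_ids)).mean(dim=1)  # pooled features

    # Vertical model: the general base is frozen; only the small head is trained.
    class VerticalModel(nn.Module):
        def __init__(self, base, hidden=64, num_domain_labels=3):
            super().__init__()
            self.base = base
            for p in self.base.parameters():  # freeze the general-purpose base
                p.requires_grad = False
            self.head = nn.Linear(hidden, num_domain_labels)

        def forward(self, token_ids):
            return self.head(self.base(token_ids))

    model = VerticalModel(GeneralBase())
    optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-3)

    # Toy domain batch: random token ids and labels stand in for real vertical data.
    tokens = torch.randint(0, 1000, (8, 16))
    labels = torch.randint(0, 3, (8,))

    loss = nn.functional.cross_entropy(model(tokens), labels)
    loss.backward()
    optimizer.step()
    print(f"one fine-tuning step done, loss = {loss.item():.3f}")
    ```

    The split mirrors the text: the expensive general capabilities live in the base, while the cheap, domain-specific part is all the vertical developer has to train.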

    He Xiaodong, Vice President of JD Group and President of JD Explore Academy, noted that today's large models resemble search engine technology in the past. Just as general and vertical search engines coexisted (e.g., Google and Baidu for general search, while platforms like JD, Taobao, and Meituan had their own search engines), vertical small models can outperform general models in specific contexts.

    "Technologically, a solution must integrate with scenarios to excel. Large models are not just interfaces; they involve professional decisions requiring data and knowledge integration. Only deep integration with specific scenarios can deliver better services," He Xiaodong said.

    More importantly, AI large models are trained on vast amounts of internet content that has not been thoroughly "cleaned."

    With skilled prompt engineering, large language models can be induced to generate toxic content: dark, false, or untrustworthy material. This necessitates content auditing both at the source (during model training) and of the generated outputs.

    Vertical small models can introduce domain-specific knowledge and data, with human intervention, to create trustworthy AI applications.

    AI cannot replace humans not due to insufficient computational power but because it lacks a "stance."

    In Mission: Impossible 7, the AI "Entity" decides to kill a female assassin based on the logic that sparing her might lead to betrayal. This decision disregards moral values or right and wrong.

    In reality, society requires considerations beyond rationality, such as values, sunk costs, opportunity costs, and corporate culture.

    Thus, vertical small models need "human auditors" at the training stage to ensure AI is fed correct frameworks. Auditors must also review and correct model outputs for accuracy and reliability, while monitoring and adjusting model performance.
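
    One way to picture where the human auditor sits in such a pipeline (a simplified sketch; the keyword screen and queue below are illustrative stand-ins, not a real moderation API): an automated first pass screens generated outputs, and anything suspicious is held for a domain expert's decision.

    ```python
    from dataclasses import dataclass, field

    SUSPECT_TERMS = {"guaranteed cure", "insider tip"}  # toy blocklist, not a real filter

    @dataclass
    class AuditQueue:
        pending: list = field(default_factory=list)

        def screen(self, output: str) -> str:
            # Automated first pass: a crude keyword check stands in for a trained classifier.
            if any(term in output.lower() for term in SUSPECT_TERMS):
                self.pending.append(output)
                return "held for human audit"
            return "released"

        def human_review(self, approve) -> list:
            # A human domain expert makes the final call on every held output.
            released = [text for text in self.pending if approve(text)]
            self.pending.clear()
            return released

    queue = AuditQueue()
    print(queue.screen("This fund is an insider tip, buy now."))    # held for human audit
    print(queue.screen("Diversification reduces portfolio risk."))  # released
    print(queue.human_review(lambda text: False))                   # auditor rejects all: []
    ```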

    To ensure AI large model reliability, measures include:

    1. Data Cleaning: Preprocess and clean data to remove invalid, duplicate, or incorrect entries, improving quality and accuracy (a minimal sketch follows this list).
    2. Data Augmentation: Use techniques like data enhancement to expand datasets, boosting model generalization and robustness.
    3. Diverse Training: Employ varied training methods (e.g., different optimizers, learning rates, batch sizes) for comprehensive and accurate results.
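
    For item 1, a minimal data-cleaning pass might look like the following toy sketch (the records and rules are invented for illustration; real pipelines add fuzzy deduplication, PII scrubbing, and quality scoring):

    ```python
    raw_records = [
        {"text": "Aspirin reduces fever.", "source": "textbook"},
        {"text": "Aspirin reduces fever.", "source": "forum"},  # duplicate text
        {"text": "", "source": "scrape"},                       # invalid: empty text
        {"text": "buy pills cheap!!!", "source": None},         # invalid: no source
    ]

    def clean(records):
        seen = set()
        kept = []
        for rec in records:
            text = (rec.get("text") or "").strip()
            if not text or not rec.get("source"):  # drop invalid entries
                continue
            if text in seen:                       # drop exact duplicates
                continue
            seen.add(text)
            kept.append(rec)
        return kept

    print(clean(raw_records))  # only the first record survives
    ```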

    Unlike the "content algorithm" era, where auditors merely labeled keywords without understanding, AI small model auditors must be domain experts, ensuring 100% accuracy in data fed to models.

    From this perspective, AI is not replacing jobs but elevating their value, leading to "upskilling" and "salary increases."

    In the future, "audit specialists" will be replaced by "audit experts."
