Generative AI Models Like ChatGPT Are Creating New Job Opportunities

Posted by baoshi.rao in AI Insights
    Artificial intelligence may be raising concerns about job security, but a new wave of jobs is emerging, focusing on reviewing the inputs and outputs of next-generation AI models.

    Since ChatGPT's launch in November 2022, business leaders, employees, and academics worldwide have worried that generative AI will disrupt a significant number of professional roles.

    Generative AI models, trained on vast amounts of data, produce human-like, realistic text and images from text prompts. They can craft sophisticated prose and even company presentations that approach the quality of work produced by academically trained professionals.

    This has undoubtedly sparked concerns about AI potentially replacing jobs.

    Morgan Stanley estimates that up to 300 million jobs could be taken over by AI, including office and administrative support, legal work, architecture and engineering, life, physical and social sciences, as well as finance and business operations.

    However, the inputs received by AI models and the outputs they generate often require human guidance and review, which is creating new paid careers and part-time jobs.

    Getting Paid to Review AI

    Prolific is a company that helps AI developers connect with research participants and is directly involved in compensating people who review AI-generated materials.

    The company pays research participants to evaluate the quality of AI-generated outputs. Prolific recommends that developers pay participants at least $12 per hour, with $8 per hour as the minimum allowed.

    Human reviewers are guided by Prolific's clients, which include Meta, Google, the University of Oxford, and University College London. Clients train reviewers to handle potentially inaccurate or harmful material they might encounter, and participants must consent to take part in the research.

    One research participant, who chose to remain anonymous due to privacy concerns, stated that he had used Prolific multiple times to evaluate the quality of AI models. He often had to intervene, providing feedback on where the AI models went wrong and suggesting corrections or modifications to prevent adverse effects.

    He encountered situations where certain AI models produced problematic outputs. On one occasion, he was even persuaded by an AI model to purchase drugs.

    When this happened, the research participant was shocked. However, the purpose of the study was to test the boundaries of this specific AI and provide feedback to ensure it does not cause harm in the future.

    The New 'AI Workers'

    Phelim Bradley, CEO of Prolific, stated that many new types of 'AI workers' are playing a significant role by providing input and output for AI models like ChatGPT.

    As governments evaluate how to regulate AI, Bradley emphasized, 'It is crucial to focus on issues such as fair and ethical treatment of AI workers like data annotators, the transparency and sourcing of data used to build AI models, and the risks of bias infiltrating these systems due to training methods.'

    'If we adopt the right approach in these areas, it will lay a solid, ethical foundation for future AI applications,' he added.

    In July of this year, Prolific raised $32 million in funding from investors including Partech and Oxford Science Enterprises.

    Companies like Google, Microsoft, and Meta have been competing in generative AI, an emerging field of artificial intelligence whose promised productivity gains have made it a focal point of commercial interest.

    However, this has raised concerns among regulators and AI ethicists, who worry about the lack of transparency in how these models make content-generation decisions. More work is needed to ensure AI serves human interests rather than the other way around.

    Hume is a company that leverages artificial intelligence to read human emotions from facial and vocal expressions, using Prolific to test the quality of its AI models. The firm recruits individuals through Prolific to participate in surveys assessing whether AI-generated responses are good or bad.

    Alan Cowen, co-founder and CEO of Hume, stated: "There is an increasing focus among researchers in large companies and labs on aligning AI with human preferences and safety."

    He added: "In these applications, there is greater emphasis on monitoring. I believe we are just seeing the early stages of this technology being rolled out."

    He also noted: "Long-standing pursuits in the AI field, such as personalized tutors and digital assistants, as well as models capable of reading and revising legal documents, are now becoming a reality."

    Another role placing humans at the core of AI development is the prompt engineer. These engineers study which text-based prompts work best when input into generative AI models to achieve optimal responses.
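The prompt engineer's workflow described above can be sketched in a few lines. This is a minimal illustration, not any vendor's API: `generate()` is a hypothetical stand-in for a real model call, and the scoring rubric is a toy (counting required terms in the response).

```python
# Toy sketch of comparing prompt variants. generate() is a hypothetical
# placeholder for a real generative-AI API call; the rubric is illustrative.

def generate(prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    return f"response to: {prompt}"

def score(response: str, required_terms: list[str]) -> int:
    # Toy rubric: count how many required terms appear in the response.
    return sum(term in response.lower() for term in required_terms)

prompts = [
    "Summarize the quarterly report.",
    "Summarize the quarterly report in three bullet points, citing figures.",
]
required = ["bullet", "figures"]

# Pick the prompt whose (placeholder) output best matches the rubric.
best = max(prompts, key=lambda p: score(generate(p), required))
```

In practice the scoring step is done by human reviewers or an evaluation suite rather than a keyword count, but the loop — generate, score, keep the better prompt — is the same.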

    According to data released by LinkedIn last week, there is significant demand for AI-related jobs. Job postings mentioning AI or generative AI on LinkedIn doubled globally between July 2021 and July 2023.

    Reinforcement Learning

    Meanwhile, companies are utilizing artificial intelligence to automatically review regulatory and legal documents, but human oversight is still required. Typically, companies must sift through vast amounts of documents to evaluate potential partners and assess their ability to expand into certain regions.

    Reviewing all these documents can be tedious work that employees may not be eager to undertake, making the delegation of this task to AI models highly appealing. However, researchers emphasize that a human touch remains indispensable.

    Digital transformation consulting firm Mesh AI highlights that human feedback can help AI models learn through trial and error. Michael Chalmers, CEO of Mesh AI, stated, "By adopting this approach, organizations can automate the analysis and tracking of their regulatory commitments."

    He further added, "Small and medium-sized enterprises can shift their focus from monotonous document analysis to approving outputs generated by AI models, further refining these outputs through reinforcement learning from human feedback."
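The feedback loop Chalmers describes — humans rating AI outputs so the system favors better ones — can be sketched as a toy preference table. This is only an illustration of the idea of learning from aggregated human ratings, not a real RLHF training pipeline, and all names here are invented for the example.

```python
# Toy sketch of learning from human feedback: reviewers rate candidate
# outputs, ratings are averaged, and the best-rated candidate is preferred.
from collections import defaultdict

scores: dict[str, float] = defaultdict(float)
counts: dict[str, int] = defaultdict(int)

def record_feedback(output: str, rating: int) -> None:
    # Accumulate human ratings (e.g. 1 = approve, 0 = reject) per output.
    scores[output] += rating
    counts[output] += 1

def preferred(candidates: list[str]) -> str:
    # Choose the candidate with the best average human rating so far.
    return max(candidates, key=lambda c: scores[c] / counts[c] if counts[c] else 0.0)

record_feedback("summary A", 1)
record_feedback("summary A", 1)
record_feedback("summary B", 0)
print(preferred(["summary A", "summary B"]))  # summary A
```

Real RLHF trains a reward model on such preferences and then optimizes the generator against it; the sketch above only shows the human-rating step that the article's new "AI workers" perform.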
