Apple Develops Generative AI Technology HUGS

Posted by baoshi.rao
On December 20, Apple's machine learning research team published a blog post showcasing a new generative AI technology called HUGS, which can analyze a short video in about 30 minutes and then render the captured subject performing new actions and seen from new perspectives.

Apple researcher Anurag Ranjan tweeted that HUGS stands for Human Gaussian Splats, a method that uses machine learning and computer vision to create realistic human avatars from minimal input data.

In its official introduction, Apple noted that neural rendering has made significant progress but remains best suited to photogrammetry of static scenes; it cannot yet be extended to humans moving freely through an environment.

HUGS uses 3D Gaussian Splatting to create movable humans within scenes.
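For context on the primitive involved, the sketch below shows the kind of 3D Gaussian that splatting methods optimize: a position, an orientation, per-axis scales, an opacity, and a color, from which an anisotropic covariance matrix is derived. The class and field names are illustrative assumptions, not Apple's implementation.

```python
# A minimal sketch of a 3D Gaussian Splatting primitive (assumed names,
# not Apple's code). Production systems store spherical-harmonic color
# and optimize millions of these through a differentiable rasterizer.
import numpy as np
from dataclasses import dataclass

@dataclass
class Gaussian3D:
    mean: np.ndarray      # (3,) center position in world space
    quat: np.ndarray      # (4,) unit quaternion (w, x, y, z) orientation
    scale: np.ndarray     # (3,) per-axis standard deviations
    opacity: float        # alpha-blending weight in [0, 1]
    color: np.ndarray     # (3,) RGB; real systems use spherical harmonics

    def covariance(self) -> np.ndarray:
        """Sigma = R S S^T R^T, the splat's anisotropic covariance."""
        w, x, y, z = self.quat / np.linalg.norm(self.quat)
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(self.scale)
        return R @ S @ S.T @ R.T
```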

The method itself requires only a small amount of video of the subject, who should ideally move through the scene and expose as much body surface as possible for the system to work with.

In some cases the technology needs very little source data: a minimum of 50 to 100 frames of monocular video, equivalent to roughly 2 to 4 seconds of 24fps footage.
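A quick sanity check of that figure, assuming the 24fps rate stated above:

```python
# Frame counts to clip length at 24 fps; matches the "2 to 4 seconds" claim.
for frames in (50, 100):
    print(f"{frames} frames / 24 fps = {frames / 24:.1f} s")
# 50 frames / 24 fps = 2.1 s
# 100 frames / 24 fps = 4.2 s
```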

Apple claims that the system can be trained to "disentangle static scenes and fully animatable human avatars in 30 minutes."

Apple explained that while the SMPL body model is used to initialize the human Gaussians, SMPL cannot capture every detail; for unmodeled elements such as clothing and hair, the optimization is allowed to deviate from the SMPL template to fill in what the model misses.
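A hedged sketch of what that initialization strategy could look like: seed one Gaussian per SMPL template vertex, with small learnable offsets that let splats drift off the body surface to cover hair and clothing. The function, parameters, and default values below are hypothetical, assumed only for illustration.

```python
# Illustrative sketch (assumed API, not Apple's): initialize Gaussians on
# an SMPL mesh and leave room to deviate from it during optimization.
import numpy as np

def init_gaussians_from_smpl(smpl_vertices: np.ndarray,
                             init_scale: float = 0.01) -> dict:
    """smpl_vertices: (N, 3) vertices of a posed SMPL template mesh."""
    n = smpl_vertices.shape[0]
    return {
        "means":   smpl_vertices.copy(),         # start on the body surface
        "offsets": np.zeros((n, 3)),             # learned; lets splats leave the
                                                 # SMPL surface for hair/clothing
        "scales":  np.full((n, 3), init_scale),  # small isotropic splats to start
        "opacity": np.full(n, 0.5),              # refined during training
    }
```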

Apple stated that, from training video to output rendered at 60fps, the pipeline completes human body modeling and "state-of-the-art rendering quality" animation in half an hour, about 100 times faster than earlier methods such as NeuMan and Vid2Avatar.
