
When the World's Largest Image Library Decides to 'Embrace AI'

A speck of dust of the times, falling on an individual's head, can feel like a mountain.

In the wave of the AI era, a single ripple can become either a tsunami or a golden opportunity for any company.

As the world's largest image agency, Getty Images, which champions 'authenticity,' once strictly prohibited AI-generated content in its library. Recently, however, Getty launched its own generative AI tool, Generative AI by Getty Images.

    What caused Getty to make a 180-degree turn in its stance? As a content trading platform, could Getty become another giant in the AI era?

    All these questions were addressed by Getty Images CEO Craig Peters in a recent in-depth conversation with The Verge's editor-in-chief Nilay Patel at the Code Conference. Peters explained how Getty's AIGC tool assists creators with copyright and revenue, as well as the reasoning behind it.

    Below is an excerpt from Craig Peters' dialogue, compiled and edited by GeekPark:

    Nilay: First, let's start with the news, because many people assumed you'd oppose AI 'stealing' content, but you announced an AI tool this week, which is quite puzzling.

Craig: Yes, we have launched an AI image generation tool—Generative AI by Getty Images. The product is built on Nvidia's Edify model, which is available through Picasso, Nvidia's generative AI model library.

Getty's AIGC tool | Getty Images

    We chose to collaborate because we wanted to leverage Nvidia's computing power to create a 'unique' AI tool.

First, this tool respects the intellectual property of its training library. It is licensed: it is trained exclusively on Getty Images' creative content, and we pay compensation to the creators of that content. As we generate more revenue from this service, those creators will be rewarded for their contributions to the tool.

Secondly, it fully complies with commercial safety standards, ensuring no third-party intellectual property disputes. It cannot create deepfake content. It does not know what the 'Pope' is, nor what 'Balenciaga' is, and it certainly would not mix the two.

    We believe the quality of the first version of the tool is outstanding, with image generation and environmental rendering exceeding expectations.

    Nilay: Do you think people in your 'market' want such photos?

Craig: Absolutely. First, generative AI (AIGC) didn't appear suddenly; it has existed for years. Our partner Nvidia actually launched the first text-to-image GAN model. So we knew it was coming, and the question we posed to our customers was: 'How do you plan to use it? What do you need?'

An image generated with the prompt 'A well-dressed person sitting on the floor at a coding conference' | Getty Images

    We create services for our clients, enabling them to create at a higher level, saving them time and money, while eliminating their intellectual property risks.

The latter is crucial in the field of AI. What we hear from every client is: 'We want to use this technology.' The specifics vary across media, institutional, and corporate clients, but they all want to unleash their creativity with these tools while making sure they do not infringe third-party intellectual property rights.

The situation also differs globally. If a generated image depicts a third-party brand, or someone's name and likeness, such as Travis Kelce or Taylor Swift, that becomes an issue.

    There are also more 'subtle' issues in intellectual property, such as displaying images of the Empire State Building. You could be sued for that. Tattoos are also copyrighted. Even fireworks are copyrighted—Grucci Brothers owns the copyright to that smiley-face firework.

Therefore, we build in many safeguards to ensure clients are absolutely safe when using our services. We also stand ready to indemnify them just in case, although we believe there won't be any issues.

Nilay: There's another side to this. You know all the training data is yours, so you can say, 'Alright, we're going to pay our creators.' But how do you turn that into a working model? 'We generated this image, someone paid us for it; now how many dollars do we owe you?'

Craig: In our case it's at the pixel level, though I think this question was first raised around audio. The answer is that such attribution models don't exist yet. We've tested many of them and found them insufficient for attribution.

Therefore, our current approach to allocating compensation is based on two factors: what proportion of the training set does your content represent, and, over time, how does that content perform in our licensing business?

It blends both quality and quantity.
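
To make that two-factor split concrete, here is a minimal Python sketch of how such a pro-rata allocation might work. The function, the 50/50 blend between the two signals, and the example numbers are all illustrative assumptions for this article, not Getty's published formula.

```python
def creator_payout(revenue_pool, creators, quality_weight=0.5):
    """Split a revenue pool across creators.

    creators maps a creator id to (training_set_share, licensing_share),
    where each factor is a fraction of its respective total (sums to 1.0).
    """
    payouts = {}
    for cid, (train_share, license_share) in creators.items():
        # Blend "quantity" (share of the training set) with "quality"
        # (licensing performance); the 50/50 weighting is an assumption.
        blended = (1 - quality_weight) * train_share + quality_weight * license_share
        payouts[cid] = revenue_pool * blended
    return payouts

# Example: a $100,000 pool shared by two hypothetical contributors.
print(creator_payout(100_000, {
    "alice": (0.7, 0.4),  # large share of training data, modest licensing
    "bob":   (0.3, 0.6),  # smaller footprint, strong licensing performance
}))
# -> {'alice': 55000.0, 'bob': 45000.0}
```

Under a scheme like this, a creator whose work both dominates the training set and licenses well collects the largest share, which matches the quality-and-quantity framing above.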

Nilay: So, you're just using a fixed model?

    Craig: Yes, we will continue to evaluate this over time. If we find a fairer approach, we will certainly adopt it. In fact, we are constantly exploring technologies that can achieve this, but for now, I don't think it meets our objectives.

Image generated with the prompt 'Pop stars and Super Bowl players in a convertible' | Getty Images

    Nilay: The dynamics here are quite interesting. So, if a client wants to generate a photo or something for an advertising campaign, they might use Getty's tool instead of hiring a photographer. Would it be cheaper than hiring an actual photographer?

    Craig: That remains to be seen. But I think this is a completely different model. It's a 'cost-per-call' model, a generative model. You've tried the tool; I think it's a very good one that guides you through the prompts, and you get high-quality, high-resolution images right from the start.

But this is mechanical work. I believe it differs from licensing pre-shot imagery, which involves spending time researching with clients.

    Ultimately, we must strive to save clients' 'time,' as it is the most valuable resource they possess.

I think in some cases this approach can be creative but not necessarily the most time-efficient. In many situations, our pre-shot process can be more authentic and efficient because you are searching—you don't pay for the search. You get a variety of content, including real people and real locations, which many brands care deeply about. In other cases, though, generation may prove the more efficient process.

    Therefore, I believe we will come to realize this over time.

    Nilay: Getty stands out in the field of photography. You hire photographers and send them to dangerous locations. You produce a vast amount of news photography. Have you heard from your creators that AIGC is a 'problem'?

    Craig: I wouldn’t say we’ve heard from our creators that AI is a problem. We represent over 500,000 photographers worldwide. So, as you can imagine, within this audience, there are many different perspectives and opinions. Multiply those by 1,000, and you get even more.

    We hear concerns about 'intellectual property'—ultimately, people want to use their 'intellectual property' through subscription services or other models to train systems and create value.

    At the core, people want to resolve this issue. But what we hear from customers is that they want to create and use these tools.

    Our stance from the beginning has been this: We believe AI can bring constructive benefits to society as a whole, but certain considerations are necessary, which is why we have been seeking 'transparency' in training data.

    We believe creators and intellectual property owners have the right to decide whether their materials are used for training, and the creators of these models should not be shielded by provisions like Section 230. If you create and release these models, you should bear some responsibility.

    To reiterate, as part of the media, the last thing we want is to develop a tool that can truly produce 'deepfakes.'

    Nilay: There are some big ideas here. In my brief stint as a copyright lawyer and longer career as a journalist, I’ve found that no one really cares about copyright law. They don’t care about intellectual property; they care about 'money.' But now, money is downstream of some very thorny copyright issues.

    You just heard Microsoft's Chief Technology Officer, Kevin Scott, say that both he and Microsoft believe, 'All of this is built on the foundation of fair use, and the argument for fair use will ultimately succeed, or be modified in some way.'

Image generated by Stable Diffusion with Getty's copyright mark | The Verge/Stable Diffusion

    However, in many ways, you stand on the opposite side of this issue. You are suing Stability AI for using a large number of Getty's images. If you win, perhaps the entire 'structure' will collapse. Have you considered the stakes of this lawsuit?

    Craig: This lawsuit is of great significance. We filed it for a reason. We fundamentally believe that intellectual property owners have the right to their content, whether it is used in training datasets or not. If they choose, they should have the right to be compensated.

I don't accept Kevin's framing—that training a model on content is like a person reading Moby Dick. First, these computers are not human; second, they are corporate entities making money, which is your point. In many cases, they use these technologies to target existing markets, much like the Andy Warhol case...

    This is why we're here. We care more about intellectual property than others do.

    Nilay: It's clear you're currently in litigation with Stability. But many other companies—Microsoft, Google, OpenAI—might be using Getty Images' content to train their AI models. Have you engaged with them about this "issue"?

    Craig: We're having constructive conversations. Whether they'll be productive remains to be seen.

    First, I think there's a layer of PR nonsense here—like joining a group just to whitewash your reputation by association while taking no real action. That's not genuine engagement.

    We can disagree on legal interpretations, but our model demonstrates one truth: quality inputs create better outputs. This approach yields more socially responsible products—and I believe businesses will adopt it.

We are indeed discussing this issue, but we won't deviate from the fundamental point: we believe that if you are an intellectual property owner, you should have the right to decide whether your content is used in AI training, and you should be compensated for it. And that doesn't mean trivial checks—this content is the cornerstone of these tools.

    Nilay: I'd like to touch on two more topics: you've talked a lot with me about 'authenticity.' You mentioned 'deepfakes.' Getty has indeed published some of the most important photographs of our time. Historically, this has been Getty's role in our culture.

    You told me that simply labeling these as 'authentic' isn't enough—there's another issue here. Could you describe what you see as that other issue?

Craig: I think the problem now is that you can't tell what's real. In a world where AI can produce content at scale, and spread it across any breadth, scope, and span of time, authenticity ultimately gets squeezed out.

    Now, I think our brand helps break through this barrier. I believe our 'reputation' also plays a role, and ultimately, this has value. But I do worry about a world where...

    I heard that last year, the number of AI-generated images exceeded those taken by cameras. That's astonishing. Consider where we are on the AI adoption curve—it's growing exponentially.

Moreover, when you think about the malicious individuals, organizations, and institutions in this world, it worries me. When the fake images of an explosion near the Pentagon surfaced, our newsroom was in an uproar, filled with questions like, 'Is this real?' We received numerous calls asking us to verify them. Now, apply that to the 2024 elections.

Deepfake images have blurred the line between real and fake, which is dangerous | Maverick AI

    Nilay: Given that we've already foreseen the competition between your own real images and AI-generated ones, have you taken any special measures ahead of the 2024 elections?

    Craig: Yes. We're in discussions with AP, AFP, partners, and even competitors about 'What should we do?' While we're taking steps, there's no perfect solution yet.

    And the election date isn't changing—it's getting closer. So I think it's crucial to push technology forward under the premise of 'Let's accelerate and break conventions...'

    Nilay: Are you part of the Content Authenticity Initiative group?

    Craig: We're discussing it. Frankly, we haven't adopted it yet. We're not sure if it's the right approach now.

    First, it places the responsibility and investment on creators of original, authentic content rather than on the platforms and tools generating synthetic content. We think this is somewhat backward—even fundamentally regressive.

    Generative tools should invest in creating proper solutions for synthetic content. But currently, it's mainly about 'metadata,' which can easily be stripped away.

You are our clients. You use our images. When you pull them into your content management system (CMS), you immediately strip the metadata out, because it makes the files lighter and pages load faster.

This makes sense: you're competing on search engine optimization (SEO) and everything else you need to do. So you strip it out.
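
To illustrate how easily that metadata disappears, here is a small Python sketch using the Pillow library: a plain re-save, the kind a CMS image optimizer performs for page weight, silently drops the EXIF block unless it is passed back explicitly. The filenames are hypothetical.

```python
from PIL import Image

img = Image.open("wire_photo.jpg")
print("exif" in img.info)  # True if the original file carries EXIF metadata

# Re-encode for a lighter page, as a CMS pipeline typically would...
img.save("optimized.jpg", quality=80)

# ...and the metadata is gone unless save() is given exif=... explicitly.
print("exif" in Image.open("optimized.jpg").info)  # False
```

The same applies to any provenance scheme that lives in metadata: whatever the pipeline does not deliberately preserve is lost in the first re-encode.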

    Therefore, I think what we need to focus on... This is also where we collaborate with Kevin, Microsoft, and their team, who have made commitments to the White House regarding identifying AI-generated content, which is very encouraging. Because we also want to do the same thing, but we want to do it in a way that truly captures the essence.

    Nilay: One last big thing. When we first started talking, you discussed with me how the photography market inevitably underwent a permanent change with the advent of the internet—more people could create, and our distribution platforms changed. Pricing collapsed. I know many professional photographers whose careers vanished with the rise of the internet.

    Do you feel the same way now? You built a business to address this—you changed this 'business.' Do you think it's the same now? Is the scale of change the same?

    Craig: I think there’s obviously a lot of change, but what we do still has value.

    Whether it's tools that spark creativity or content with high 'authenticity,' they can engage end-users in a meaningful way and move them—if you're a media company, inspiring them to understand an issue; if you're a business, motivating them to truly engage with your brand or product. I don't believe this will fade away.

    I think this presents unique challenges, and navigating and solving them is incredibly interesting.

The most important factor in enabling photographers, videographers, writers, and others to create more work is involving more creators in the creative process—just as we introduced this AI tool not to displace creators but to support them.

    This is our ultimate goal. If we achieve this, I believe the world will become a 'better place.' Companies like Getty Images will thrive because of it, and so will those who collaborate with us.

    However, if we attempt to 'eliminate' creators, I believe it would inevitably lead to a 'tragic' world—a 'disaster' for both our business and those trying to make a living.
