Corps wanted to turn their own and AI-generated content into a protected provenance blockchain of sorts, as a compromise with government guidelines on "faked content". But the irony is that if the people use it too, with their own art, C2PA used in AI systems could protect artisans from data-laundering copy-theft.
AI bros are terrified of this and call it "surveillance", but ironically it's just the author's choice. It's really no different from EXIF in JPEG, though. To be against it would be like saying a person can't have cameras on their own property. It can help credit the author, whether as a person or an artisan entity. https://c2pa.org/ https://www.youtube.com/watch?v=saqAYgWanwg https://www.linkedin.com/pulse/worlds-first-digitally-transparent-deepfake-video-truepic-inc
Like how Glaze works, an effectively irremovable watermark across works is possible, one that AI can't easily strip. AI generators can also attach metadata marking a work as derived without altering the author information. How it is employed, however, depends on the upcoming regulations. Since C2PA encodes provenance into the image itself, and creates NFT-like proofing that is relatively safe from AI mangling, it does address content provenance in new ways. And if future AIs remain unable to shred the data, that means an authorship blockchain of sorts, differentiating "manipulations" from the source. C2PA, however, also has the potential for copyright trolling, so the system has to be deployed on recognised servers, e.g. Adobe's. But attributing the "traditional elements" of a digitally generated image also implies the metadata can record both camera and scan sources.
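The core idea above, binding authorship and an edit chain to the actual pixels so a "manipulation" can be told apart from its source, can be sketched in a few lines. This is a toy model only: real C2PA uses X.509 certificates, asymmetric signatures, and JUMBF-embedded manifests per the spec, whereas this stand-in uses a local HMAC key and a plain dict, and the names (`make_manifest`, `verify`, the demo key) are my own, not C2PA API.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in only: real C2PA signs with X.509-certified keys

def make_manifest(image_bytes, author, parent=None):
    """Bind an author (and optionally a parent manifest) to a hash of the pixels."""
    claim = {
        "author": author,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "parent": parent,  # signature of the prior manifest -> chain of edits
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(image_bytes, manifest):
    """Check both the signature and that the pixels still match the claim."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    )
    pixels_ok = body["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    return sig_ok and pixels_ok

# An artisan signs their original; an AI edit gets its own manifest that
# points back at the source instead of overwriting the author info.
original = b"fake-pixel-data"
m1 = make_manifest(original, "artisan@example")
derived = original + b"-ai-edit"
m2 = make_manifest(derived, "ai-generator", parent=m1["signature"])
```

The point of the sketch: once the pixels change, the old manifest no longer verifies (`verify(derived, m1)` is false), so a derivative can't silently wear the original's attribution; it has to carry its own manifest linking back, which is exactly the "manipulation vs. source" distinction.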
Systems like SDXL by StabilityAI are set to implement C2PA, allegedly to "protect AI authors", but ironically any artisans using C2PA will be protected too. https://twitter.com/UltraTerm/status/1648028475167481858 https://techcrunch.com/2023/05/23/microsoft-pledges-to-watermark-ai-generated-images-and-videos/
As such, this provides better awareness of what is AI-generated online, while also potentially allowing the CAI system to track the origin of AI data if generative AI systems are required to support it through regulation. While it is the corps doing this at the demand of governments, it does mean corps would be tracking images wherever C2PA screening is used, which in general means social media, unless C2PA is encoded directly into e.g. web browsers. This obviously raises concerns over C2PA being surveillance technology, but using C2PA itself is not enforceable, so non-C2PA images won't be affected. Think of it as the SSL of images.
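That opt-in property, screening only acts on images that actually carry a manifest, is worth making concrete. A hypothetical upload hook might look like the sketch below; note that real detection parses JUMBF boxes as laid out in the C2PA spec, and scanning the raw bytes for a "c2pa" label here is only a crude stand-in for that, with `screen_upload` being an invented name, not any platform's API.

```python
def screen_upload(data: bytes) -> str:
    """Toy C2PA screening step for a social-media upload pipeline.

    Stand-in check: real implementations parse the JUMBF box structure
    defined by the C2PA spec rather than grepping for a byte label.
    """
    if b"c2pa" in data:
        return "verify manifest and show provenance badge"
    # The opt-in property: images with no manifest are served untouched,
    # so non-participants are not tracked by the screening layer.
    return "serve unchanged"
```

The design point, like SSL, is that verification only ever runs against credentials the publisher chose to attach; an unsigned image simply has nothing to check.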