Artificial intelligence has grown significantly in recent months, and it can be hard to distinguish fact from fiction: are the images we see real or AI-generated? The problem is that this type of content is flooding social networks to the point of fueling misinformation among internet users. It is therefore up to the various platforms to take this danger into account and tell their users what they are looking at. And that is exactly what LinkedIn intends to do: the professional network has decided to label any image or video generated or modified with generative artificial intelligence, notably by joining the C2PA (Coalition for Content Provenance and Authenticity).
C2PA: AI-generated content will be flagged
C2PA stands for Coalition for Content Provenance and Authenticity, an alliance founded by Adobe, Arm, Intel, Microsoft and Truepic to tackle the spread of false information on the internet, with artificial intelligence naturally among the areas under scrutiny. C2PA allows publishers, creators and even consumers to “trace the origins of different types of media”. For AI-generated content, this means anyone can find out the author, the creation date and the tools used to produce the result, making it easy to determine whether a picture or video is authentic.
Having joined the C2PA, LinkedIn should therefore benefit from an invisible marker called Content Credentials that carries all of this information. A “Cr” icon (for Content Credentials) will appear in the top left corner of any AI-generated image or video; by clicking on it, users will be able to access the data mentioned above.
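As a rough illustration of what a provenance check like this involves: a C2PA manifest records actions taken on a file (who created it, when, and with what tools), and AI-generated media is flagged with a standard "digital source type" value. The sketch below is a simplified stand-in, not LinkedIn's implementation and not the real signed C2PA binary format; the tool name and dictionary layout are illustrative assumptions.

```python
# Illustrative sketch only: a simplified stand-in for a C2PA manifest.
# Real Content Credentials are cryptographically signed structures embedded
# in the media file; this dict layout is an assumption made for illustration.

# Standard IPTC value C2PA uses to mark media produced by a generative model.
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def is_ai_generated(manifest: dict) -> bool:
    """Return True if any recorded action marks the asset as AI-generated."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                return True
    return False


# Hypothetical manifest for an image produced by a generative tool.
example_manifest = {
    "claim_generator": "ExampleImageTool/1.0",  # hypothetical tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": AI_SOURCE_TYPE,
                    }
                ]
            },
        }
    ],
}

print(is_ai_generated(example_manifest))  # True for this example
```

In practice a viewer (such as LinkedIn's “Cr” overlay) would verify the manifest's cryptographic signature before trusting any of these fields; the check above only shows where the AI-generation signal lives.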
LinkedIn follows in the footsteps of other platforms
This is not the first time a platform has joined the C2PA alliance, or at least committed to hunting down disinformation. Before LinkedIn, Google announced in late 2023 that it would launch a tool to detect AI-generated images. Soon after, YouTube introduced labels for videos created or modified with artificial intelligence, and TikTok and Meta are working along the same lines.
Since various platforms are fully aware of the dangers of misinformation associated with generative artificial intelligence, there is a high probability that more measures will be taken to regulate its use in the months or years to come.