It has become clear that combating disinformation requires a multifaceted approach, including both the detection of manipulation and the authentication of a media item's source, in other words, establishing media provenance. In short, people need to know that what they are seeing is content actually produced by its purported source.
Brand marks, styles, sets and other traditional indicators of trust continue to be critical, but they are no longer enough on their own to assure people of a piece of content's legitimacy. Altered or synthetic material that appears to come from reputable journalistic entities can make false or misleading content look credible. Another layer of authentication is now required.
Securing provenance is complex and requires buy-in from multiple organisations. To address this, an initial effort is being led by a coalition of the BBC, CBC/Radio-Canada, The New York Times, and Microsoft. We have named the effort Project Origin.
Project Origin was established to provide a platform for collaboration and discussion among a set of partners on the creation and adoption of a new media provenance tracking process, aimed initially at news and information content. At scale, this process could encompass traditional publishing (electronic and print), information technology, social media and consumer software. We are planning for a multi-stakeholder, cross-organisational collaboration around combating disinformation.
Positive authentication of the provenance of legitimate news stories will help by making it easier to identify manipulated and synthetic audiovisual content. The Origin process is conceptually designed to work with text, video, images and audio content.
The Origin collaborators have agreed to develop a framework for an engineering approach, initially focusing on video, images, and audio. We hope this work could be helpful in developing a global standard for establishing content integrity.
“Origin”, when adopted, could provide:
We have made good initial progress, including the development of a reference architecture and papers on this topic to be shared in various technical forums. We are preparing for engagement with social media platforms and news aggregation service providers to trial the methods. This engagement will involve testing engineering approaches and workflow scenarios alongside peer review of technical specifications and contributions to developing and adopting technical standards.
Clearly, social platforms and aggregators already have mechanisms of their own by which to assess and deal with disinformation, but the distinguishing feature of Origin is that it is intended to move beyond platform-specific tools and methodologies to industry-wide, open standards. Our aim is to explore a standard that could offer consumers real-time information about the integrity of media content, and that would allow the platforms to read signals about media integrity and act on them by flagging or removing content as appropriate.
The technical approach and standards aim to offer publishers a way to maintain the integrity of their content in a complex media ecosystem. The methods, we hope, will allow social platforms to be sure they are publishing content that has originated with the named publishers, a key step in the fight against imposter content and disinformation. Most importantly, the methods could help shield the public against the rising danger of manipulated media and "deep fakes" by offering tools that help people better understand the disinformation they are being served, and help them maintain their confidence in the integrity of media content from trusted organisations.
The intention of the Origin approach is to establish a chain of provenance from the point of publishing to the point of presentation. We intend to accomplish this via cryptographically secure signatures and hashes, where these are preserved in the metadata of transcoded files. Where such metadata is not preserved, our approach could leverage fingerprinting or watermarking techniques, or a combination of the two. Media data and their cryptographic hashes can be registered on a ledger, which is tamper-proof and secured by distributed ledger technology.
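The hash-and-register idea above can be sketched in a few lines. This is a hypothetical illustration only: the key name and in-memory ledger are assumptions, a real deployment would use asymmetric signatures (such as Ed25519) and a distributed ledger rather than the symmetric HMAC and Python dictionary that stand in for them here.

```python
import hashlib
import hmac

# Assumption for illustration: a secret held by the publisher. Real systems
# would use a public/private key pair so anyone can verify without the secret.
PUBLISHER_KEY = b"example-publisher-signing-key"

def publish(media: bytes, ledger: dict) -> str:
    """At the point of publishing: hash the media bytes, sign the hash,
    and register the record on the ledger (a dict stands in for DLT)."""
    digest = hashlib.sha256(media).hexdigest()
    signature = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    ledger[digest] = signature
    return digest

def verify(media: bytes, ledger: dict) -> bool:
    """At the point of presentation: recompute the hash and check that a
    validly signed record exists. Any alteration of the bytes changes the
    hash, so tampered content finds no matching entry."""
    digest = hashlib.sha256(media).hexdigest()
    signature = ledger.get(digest)
    if signature is None:
        return False  # never registered, or content was altered in transit
    expected = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

ledger: dict = {}
original = b"\x00\x01example-video-frames"
publish(original, ledger)
print(verify(original, ledger))                # provenance intact
print(verify(b"tampered" + original, ledger))  # no matching record
```

Fingerprinting and watermarking would complement this scheme for distribution paths, such as transcoding pipelines, that do not preserve the original bytes or metadata.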
On the consumer side, initially we are focused on exploring user experiences via browser extensions, overlay features in a browser and a variety of visual indicators that will provide information and insight to help build trust and understanding of authentic and manipulated content. The work will include developing and hardening methods so as to ensure that such indicators cannot be directly manipulated.
As a long-term goal, which could include, for example, an open Web standard API, social media platforms would be able to choose which information they show on their pages and how (a pull model). In the early stages, the project's functionality could potentially be delivered through a browser extension.
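The pull model described above might look something like the following sketch. The service contents, record fields and identifiers here are entirely hypothetical, as any real interface would be defined by the open standard itself; the point is only that each platform requests the provenance fields it chooses to display.

```python
# Hypothetical in-memory stand-in for a remote provenance lookup service.
# Content is keyed by its registered hash; fields are illustrative only.
PROVENANCE_SERVICE = {
    "sha256:abc123": {
        "publisher": "Example Broadcaster",
        "published": "2020-05-01",
        "signature_valid": True,
    },
}

def pull_provenance(content_id: str, fields: list) -> dict:
    """A platform pulls only the fields it wants to render alongside
    the content, rather than being pushed a fixed display format."""
    record = PROVENANCE_SERVICE.get(content_id)
    if record is None:
        return {"status": "unknown"}  # unregistered or altered content
    status = "verified" if record["signature_valid"] else "invalid"
    return {"status": status, **{f: record[f] for f in fields if f in record}}

# One platform might show only the publisher; another might also show the date.
print(pull_provenance("sha256:abc123", ["publisher"]))
print(pull_provenance("sha256:missing", []))
```

Under a pull model, the standard defines the query and the record schema, while presentation decisions, such as which indicator to show and where, remain with each platform.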