At a time when the term “fake news” has become a household name thanks to its repeated use by President Donald Trump, deepfakes — seemingly realistic videos that have in fact been manipulated — threaten to further escalate the problem of distrust in the media. Technologists are looking to blockchain, with its inherent role as an aggregator of trust, to restore public confidence in the system.
Truth is increasingly becoming a relative term. When everyone has their own version of the truth, democracy becomes meaningless. The advent of deepfakes is surely pushing society to a point where facts can be manufactured to fit one’s opinions and objectives — because in just a few years, the naked eye or ear will no longer suffice to tell whether a video or audio clip is genuine. Humanity has a huge problem to solve.
Bring together “deep learning” and “fake” and you get “deepfake” — a Photoshop job on steroids that makes use of artificial intelligence. If a deepfake algorithm is trained on enough data (or footage) of an existing subject, someone can use the tech to manipulate video and make it look like the subject is saying or doing pretty much anything.
Social implications of deepfakes
Deepfakes have the potential to change public opinions, skew election results, trigger ethnic violence or escalate situations that can lead to war. Propaganda and fake personal attacks are nothing new but with deepfakes, the strategic contortion of information takes on a different dimension. Fueled by rapid advancements in AI and the viral nature of social media, deepfakes could potentially become one of the most destabilizing technologies to haunt humanity.
Deepfakes can become game-changers for two reasons. The first is that they represent the level of sophistication that can now be achieved through AI. But the second, more important reason is that they also represent a democratization of access to technology.
Related: Blockchain and AI Bond, Explained
The implications of deepfakes don’t even have to be societal; they can be personal too. An anonymous Reddit account became infamous for creating fake, AI-assisted videos of celebrities, often pornographic ones. Although the creator’s subreddit was banned in February 2018, its videos remain publicly available.
However, the popularity of deepfakes has drawn many imitators into the same business, and celebrities are not the only ones being targeted. The widespread availability and ease of use of the software have made it possible for anyone to generate a “revenge porn” video.
Several startups working on the deepfake problem have since emerged, with Ambervideo.co being one of the most prominent. Amid the threat of fake videos delegitimizing genuine recordings, Amber is building a middle layer to detect malicious alterations and has developed both detection and authentication technology.
For detection, Amber has software that scans a file’s video and audio tracks for signs of potential modification. Amber is training its AI to pick up on the specific patterns that are unavoidably left behind when a video is altered.
The problem with this method is that it is strictly reactive, as the AI only learns from past patterns. Newer deepfake algorithms will go virtually undetected by this retroactive approach, so detection methods are bound to lag behind the most advanced creation methods.
This is where Amber’s authentication technology comes in: Cryptographic fingerprints are imprinted on the video as soon as it is recorded. Amber Authenticate uses blockchain infrastructure to store a hash of the recording every 30 seconds; if a hash recomputed later fails to match the stored one, the mismatch points to potential tampering.
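The mechanism can be illustrated with a short sketch. The segment boundaries, hash algorithm and function names below are assumptions for illustration — the article only states that a hash is stored every 30 seconds — and a byte-count split stands in for a timestamp-based one; in a real pipeline each hash would be anchored on a blockchain rather than kept in a list:

```python
import hashlib

def segment_hashes(video_bytes: bytes, bytes_per_segment: int) -> list:
    """Split a recording into fixed-size segments and hash each one.

    Stand-in for hashing every 30 seconds of footage: a real system
    would split on timestamps and anchor each hash on a blockchain.
    """
    hashes = []
    for start in range(0, len(video_bytes), bytes_per_segment):
        segment = video_bytes[start:start + bytes_per_segment]
        hashes.append(hashlib.sha256(segment).hexdigest())
    return hashes

def flag_tampered_segments(video_bytes: bytes, bytes_per_segment: int,
                           anchored: list) -> list:
    """Return indices of segments whose hashes no longer match the
    anchored values, hinting at where the footage was altered."""
    current = segment_hashes(video_bytes, bytes_per_segment)
    return [i for i, (a, c) in enumerate(zip(anchored, current)) if a != c]
```

Because each segment is hashed independently, a verifier can localize an alteration to the 30-second window it occurred in rather than merely rejecting the whole file.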
Apart from software solutions like Amber’s, there is a need for hardware-based solutions too, and companies like Signed at Source are providing them by giving stakeholders integrations that let cameras automatically sign captured data. Since it is highly unlikely that a deepfake would carry the very same signature as the victim’s camera, the signature makes it possible to prove which videos were recorded by that camera and which were not.
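A minimal sketch of signing at the source follows. Real cameras would hold an asymmetric key in secure hardware and publish the public key for verification; an HMAC with a device secret is used here as a simplified stand-in, and the key and function names are hypothetical:

```python
import hashlib
import hmac

# Hypothetical device secret: a real camera would keep an asymmetric
# private key in secure hardware, not a shared secret.
DEVICE_KEY = b"secret-burned-into-camera-at-manufacture"

def sign_capture(frame_data: bytes) -> str:
    """Sign data at the moment of capture, before it leaves the device."""
    return hmac.new(DEVICE_KEY, frame_data, hashlib.sha256).hexdigest()

def came_from_device(frame_data: bytes, signature: str) -> bool:
    """Check whether data carries a valid signature from this camera."""
    expected = sign_capture(frame_data)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)
```

A deepfake produced elsewhere has no access to the device key, so it cannot produce a signature that verifies against the camera.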
On Oct. 3, 2019, Axon Enterprise Inc., a tech manufacturer for U.S. law enforcement, announced that it is exploring new data-tracking technology for its body cameras and will rely on blockchain technology to verify the authenticity of police body cam videos.
Axon is not the only organization that has been working on issues associated with deepfakes. The Media Forensics program of the Defense Advanced Research Projects Agency, commonly known as DARPA, is developing “technologies for the automated assessment of the integrity of an image or video.” To help prove video alterations, Factom Protocol has come up with a solution called Off-Blocks. In an email to Cointelegraph, Greg Forst, director of marketing at Factom Protocol, said:
“At a time of heightened scrutiny around the veracity of news, content, and documentation, the rise of deepfake technology poses a significant threat to our society. As this phenomenon becomes more pronounced and accessible, we could arrive at a situation whereby the authenticity of a wide array of video content will be challenged. This is a dangerous development that blurs the line around digital identity — something that should be upheld with the most rigorous security measures.”
Forst believes that it is also up to developers, blockchain evangelists and cybersecurity experts to explore different avenues for mitigating the risks stemming from deepfakes. Proof of authenticity of digital media is crucial in eliminating forged content, although current solutions fall short of providing history tracking and provenance for digital media.
Is blockchain the solution?
Taking the example of Axon’s police body camera, videos are fingerprinted at the source recorder. These fingerprints are written to an immutable blockchain that can be downloaded from the device and uploaded to the cloud. Each of these events is recorded in a smart contract that leaves behind an audit trail.
The technology used by Axon is called a “controlled capture system” and has far wider applications than police body cameras. It extracts a signature from the content source and cryptographically signs it — thereafter, the recording is verifiable.
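The audit-trail idea can be sketched as a hash chain, where each event commits to the one before it, so rewriting any past event breaks every later link. This is a toy stand-in for writing events to a smart contract, with assumed structure and function names:

```python
import hashlib
import json

def append_event(trail: list, event: dict) -> None:
    """Append an event to a hash-chained audit trail.

    Each entry stores the previous entry's hash, so the chain as a
    whole commits to the full history of capture/upload events.
    """
    prev = trail[-1]["hash"] if trail else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
    trail.append({"event": event, "prev": prev, "hash": entry_hash})

def trail_intact(trail: list) -> bool:
    """Recompute every link; any rewritten event breaks the chain."""
    prev = "0" * 64
    for entry in trail:
        payload = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

On a public blockchain the same property comes from block linkage; here the chain is kept in memory purely to show the tamper-evidence mechanism.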
However, because video is re-encoded during processing and distribution, the original data is unlikely to survive intact even under ideal circumstances — and even a minor change to the video invalidates the signature. Encoding is not the only problem: if someone recaptures the video using a device other than the original camera, the original video data will be inaccessible.
Google’s Content ID might be the solution to this. It is a service originally developed to discover copyright violations, but it can potentially be used to detect deepfakes. After spending over $100 million developing its systems, Google was able to create an algorithm that matches a user-uploaded video against a set of registered reference videos, even when the match is only partial or somewhat modified.
This will only work if the deepfake is similar enough to the original. Additionally, keeping enough fingerprints and tweaking the algorithm to detect such changes dramatically increases the data and computation requirements. Talking about how blockchain can be the solution to deepfakes, Forst of Factom added:
“When it comes to deepfakes, blockchain has the potential to offer a unique solution. With video content on the blockchain from creation, coupled with a verifying tag or graphic, it puts a barrier in front of deepfake endeavors. […] Digital identities must underline the origins and creator of the content. We could see prominent news and film industries potentially seeking this kind of solution but it gets very tricky as potential manipulators could sign up as verified users and insert a deepfake file in the system. Bad data is still bad data even if it’s on the blockchain. I tend to think a combination of solutions is needed.”
Often, these detection techniques won’t get a chance to act at all, given the ability of viral clips to cause damage before they are verified. A public figure’s reputation can be damaged beyond repair, ethnic or racial tensions escalated, or a personal relationship ruined before the media verifies anything. These are some of the major drawbacks of the rapid, uncontrolled spread of information.
All forces are coming together to fight deepfakes
In a conversation with Cointelegraph, Roopa Kumar, the chief operating officer of tech executive search firm Purple Quarter, argued that technology itself is neither good nor bad:
“Take an example of Nuclear energy. It can be used to power the homes of millions of people. When in the wrong hands, it could even be used to kill millions. Technology by themselves don’t have any moral code, but humans do. Deepfakes can be used to make entertaining applications that can soon be on your mobile phones. But the same applications can ruin lives and the fabric of society if used by malicious actors.”
Trust in established centralized institutions like governments and banks is arguably low. Trust-minimization is a key property of blockchain. However, blockchain — or technology as a whole — cannot take on the sole responsibility of fighting deepfakes.
Many forces have to come together in this effort. Creators and developers working on deepfake technology will have to publish their code openly so that it can be cross-checked by third parties. Regulators should also look into how they can supervise this space. Most importantly, it is up to the masses to stay well informed about such technology and to remember that all consumed information should be taken with a grain of salt.