Malicious Influence, Deepfakes & Political Security in 2019

AI and machine learning mark a major step forward in computing: it is now possible to analyze complex data quickly and automatically and to deliver faster, more accurate results. These technologies give organizations a better chance of identifying opportunities and avoiding unknown risks. On the flip side, they also make it easier and cheaper for technically savvy criminals to undermine political security.

Right now, many of us rely almost entirely on the internet for our daily news. What happens when none of that information can be trusted? Spreading misinformation is far easier today than it once was, and with AI and machine learning, malicious hackers can act as a destabilizing force in government and civilian life.

Deepfakes

The old saying that “the camera never lies” no longer holds true. Today, software can analyze the appearance of someone in one video and transfer that person’s facial movements onto another person. This generates footage of the second person doing and saying things they never actually did or said. Fabricated videos, referred to as “deepfakes,” call into question how much we can trust the authenticity of shared information online.
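Tools of this kind are commonly built on a shared-encoder, dual-decoder autoencoder: a single encoder learns a face representation covering both subjects, while each decoder learns to reconstruct one specific face, so encoding person A and decoding with person B's decoder produces B's face wearing A's expression. The following is a minimal sketch of that architecture in PyTorch; the layer sizes, names and input format are illustrative assumptions, not the actual code of FakeApp or any other tool.

```python
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """Shared encoder with one decoder per identity (illustrative only).

    The "swap" works by encoding a frame of person A and decoding it
    with person B's decoder, so B's face mimics A's pose and expression.
    """
    def __init__(self, latent_dim=256, img_dim=64 * 64 * 3):
        super().__init__()
        # One encoder is trained on aligned face crops of BOTH people,
        # forcing it to capture pose/expression in a shared latent space.
        self.encoder = nn.Sequential(
            nn.Linear(img_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim), nn.ReLU(),
        )
        # Each decoder only ever learns to reconstruct one identity.
        self.decoder_a = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Sigmoid(),
        )
        self.decoder_b = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Sigmoid(),
        )

    def forward(self, x, identity):
        z = self.encoder(x)
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

# The swap step: encode a frame of person A, decode with B's decoder.
model = FaceSwapAutoencoder()
frame_of_a = torch.rand(1, 64 * 64 * 3)   # stand-in for an aligned face crop
fake_b = model(frame_of_a, identity="b")  # B's face, A's expression (untrained here)
```

In a real pipeline this runs frame by frame over an entire video, which is why convincing results require only consumer hardware and enough footage of both faces.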

Source: “A new computer program generates eerily realistic fake videos,” Science News.

Redditor UnobtrusiveBot put Jessica Alba’s face on porn performer Melanie Rios’ body using FakeApp. Source: “We Are Truly Fucked,” Motherboard.

In one test of deepfakes, the fabricated videos fooled an average of 50% of viewers. Even when participants were watching genuine clips, an average of 20% still believed the footage was fake.

Could you spot the deepfake video?

Highly realistic videos of political leaders seeming to make inflammatory comments they never actually made are now possible.

Realistic fabricated videos make it even more difficult to distinguish what is real from what is fake. A statement attributed to a political leader or other person of influence, even one they never made, would spread instantly, and the repercussions could be irreversible once the footage has gone viral.

Automated Disinformation & Influence

AI not only makes it possible to create high-influence content; it also makes it easier to deliver targeted content to susceptible audiences online. This was a major concern throughout 2018 and will continue throughout 2019.

There is a lot of money in personalized filters, and they have grown extremely sophisticated over the years. Online businesses generate content based on stated preferences and ongoing user behavior in order to make digital experiences relevant and useful. Personalization is what allows Netflix to generate recommendations and helps populate Apple Music playlists. When a recommendation misses the mark, it feels natural to be annoyed and complain about the inaccuracy.
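Under the hood, most of these systems rest on some form of collaborative filtering: users who behaved similarly in the past are assumed to respond to similar content in the future. Here is a deliberately tiny sketch of the idea in Python; the ratings matrix and numbers are invented for illustration, not taken from any real service.

```python
import numpy as np

# Rows = users, columns = items; values = interaction strength (0 = none).
# A tiny invented matrix, purely for illustration.
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
])

def recommend(user_idx, ratings, top_n=2):
    """Score unseen items for one user via user-user cosine similarity."""
    norms = np.linalg.norm(ratings, axis=1, keepdims=True)
    sims = (ratings @ ratings.T) / (norms @ norms.T)  # user-user similarity
    weights = sims[user_idx].copy()
    weights[user_idx] = 0.0                           # ignore self-similarity
    scores = weights @ ratings                        # weighted vote per item
    scores[ratings[user_idx] > 0] = -np.inf           # hide items already seen
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, ratings))  # items user 0 is most likely to engage with
```

The same scoring loop that surfaces a film you will enjoy can just as easily surface a message you are primed to believe; the mechanism is identical, only the content changes.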

The same technology that makes personalization possible makes it easy for criminals to run automated disinformation campaigns tuned to individual motivations. Machine learning allows these campaigns to be refined automatically over time, making them more effective. News of election tampering and the Cambridge Analytica scandal brought into focus the damage that malicious actors can inflict on society.
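The refinement loop itself is nothing exotic; it is the same engagement-driven optimization used in ordinary A/B testing of ads. A simplified epsilon-greedy sketch follows; the message variants and response rates are invented, standing in for whatever engagement signal a real campaign would measure.

```python
import random

# Invented message variants and per-variant "true" response rates.
variants = ["message_a", "message_b", "message_c"]
true_rate = {"message_a": 0.02, "message_b": 0.05, "message_c": 0.11}

clicks = {v: 0 for v in variants}
shown = {v: 0 for v in variants}
EPSILON = 0.1  # fraction of traffic spent exploring alternatives

for _ in range(10_000):
    if random.random() < EPSILON:
        choice = random.choice(variants)             # explore
    else:
        choice = max(variants,                       # exploit best so far
                     key=lambda v: clicks[v] / shown[v] if shown[v] else 0.0)
    shown[choice] += 1
    clicks[choice] += random.random() < true_rate[choice]  # simulated response

# Over time, traffic concentrates on whichever message "works" best.
for v in variants:
    print(v, shown[v], round(clicks[v] / max(shown[v], 1), 3))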

Use Cases:

  • Media platforms’ algorithms and filters are used to drive users toward, or away from, certain content in order to manipulate their behavior.
  • Individuals in swing districts are presented with personalized messages designed to affect their voting behavior, with substantial real-world consequences.
  • AI-enabled analysis of social networks is leveraged to identify key influencers, who are then approached with malicious offers or targeted with disinformation and go on to amplify the message to their own networks (see the sketch after this list).
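The "key influencer" step in the last use case is straightforward with off-the-shelf graph tooling. A minimal sketch using the networkx library; the toy follower graph and account names are invented for the example.

```python
import networkx as nx

# Toy directed "who follows whom" graph; an edge u -> v means u follows v.
g = nx.DiGraph()
g.add_edges_from([
    ("ana", "hub"), ("ben", "hub"), ("cal", "hub"),
    ("dia", "hub"), ("hub", "eve"), ("eve", "fox"),
    ("cal", "eve"), ("dia", "eve"),
])

# PageRank surfaces the accounts whose attention propagates furthest;
# these are the accounts worth targeting (or, defensively, protecting).
rank = nx.pagerank(g)
for account, score in sorted(rank.items(), key=lambda kv: -kv[1])[:3]:
    print(account, round(score, 3))
```

At the scale of a real platform the graph has millions of nodes, but the principle is the same: a handful of well-placed accounts can carry a message to the entire network.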

Automated disinformation and influence campaigns in the hands of criminals are even harder to prevent. Realistic content such as deepfake videos is a powerful form of propaganda that undermines trust and spreads misinformation.

Denial-of-Information Attacks

In 2019, bot-driven, large-scale information generation attacks will continue to swamp information channels with false or distracting content. When this happens, it becomes more difficult to obtain real information and harder to distinguish fact from fiction.

After an active shooter event, social media users struggle to find the truth through the noise: a flood of false claims obscures what is actually happening. This was seen in the cases of the Parkland and YouTube shootings, among others.
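One practical defense is to look for coordinated repetition: near-identical posts pushed out by many accounts in a short window. Below is a simplistic sketch using word-shingle Jaccard similarity; the posts and the 0.3 threshold are invented for illustration, and a production system would use far more robust signals.

```python
def shingles(text, n=3):
    """Set of n-word shingles, used as a cheap fingerprint of a post."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented posts: three near-duplicates (a coordinated push) and one organic.
posts = [
    "BREAKING shooter was seen leaving in a blue van share now",
    "BREAKING the shooter was seen leaving in a blue van share this now",
    "breaking shooter seen leaving in a blue van share now!!",
    "Praying for everyone on campus, stay safe and follow official accounts",
]

fingerprints = [shingles(p) for p in posts]
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        sim = jaccard(fingerprints[i], fingerprints[j])
        if sim > 0.3:  # threshold is an assumption; tune on real data
            print(f"posts {i} and {j} look coordinated (similarity {sim:.2f})")
```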

While currently attributed to trolling behavior, denial-of-information attacks will only become more extreme and harder to trace, and AI and machine learning will amplify their effectiveness. Technology providers are left to police this themselves, and recent events show that they are not up to the task.

How Can We Help?

A 2017 “threatcasting report” by the Army Cyber Institute at West Point and Arizona State University’s Threatcasting Lab suggested that “although clearly more research is needed, it is imperative to take immediate pragmatic steps to lessen the destabilizing impacts of nefarious AI actors. If we are better able to understand and articulate possible threats and their impacts to the American population, economy, and livelihood, then we can begin to guard against them while crafting a counter-narrative.”

Artificial intelligence and machine learning will amplify existing threats and generate new challenges for the security industry. More importantly, the impact falls on people: citizens and end users. Not only will fake news be harder to spot; genuine, factual information will also be called into question.

To date, the industry has focused on unauthorized access to data. That is changing. Instead of stealing information and holding it for ransom, criminal hackers now attempt to modify data while leaving it in place. This is a more effective way to undermine public trust and spread misinformation online, and it makes the guilty parties much harder to track.
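Detecting modification in place is, at its core, an integrity problem, and the standard countermeasure is to keep authenticated digests of content at rest. A minimal sketch with Python's standard library; the key and records are invented for the example.

```python
import hmac
import hashlib

SECRET_KEY = b"example-key-kept-outside-the-data-store"  # invented for the sketch

def sign(record: bytes) -> str:
    """HMAC digest stored alongside the record when it is written."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify(record: bytes, stored_digest: str) -> bool:
    """Recompute and compare; any in-place edit changes the digest."""
    return hmac.compare_digest(sign(record), stored_digest)

original = b"Statement issued 2019-01-10: no policy change is planned."
digest = sign(original)

tampered = b"Statement issued 2019-01-10: a major policy change is planned."
print(verify(original, digest))   # True  - record is intact
print(verify(tampered, digest))   # False - record was modified in place
```

Because the key never sits next to the data, an attacker who alters a record in place cannot forge a matching digest, so the tampering is detectable even though nothing was exfiltrated.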

Collaboration between technology and security is crucial moving forward. Determining the veracity of digital content is an important component of investigating both physical and digital threats through online sources. With the thred platform, security professionals can safely locate relevant content on the surface, deep and dark web and trace it fully to its source.

Take the next step to fully protect your VIPs, employees, intellectual property, brand and facilities. Book a meeting today to learn more.