The Cranky Admin

The Emergence of Social Anti-Malware

These new tools can help slow disinformation; but like any technology, there's a dark side.

A new class of technologies called Social Anti-Malware (SAM) is emerging. SAM tools use Bulk Data Computational Analysis (BDCA) to identify bots and troll accounts on social media. These tools are also being used to identify the disinformation and urban legends spread by these malicious social accounts as part of social engineering attempts.

Startups are emerging that seek to employ BDCA tools to solve social, legal and technological problems that more established technology companies refuse to -- or feel they cannot -- solve. The first round of these tools has already emerged in the form of browser extensions that can flag known social media bots or inject alerts into a Web site when known disinformation topics or urban legends are being displayed.
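The flagging logic inside such a browser extension can be sketched as a simple lookup against curated lists. This is a minimal illustration only; the handle list, phrases, and function names below are hypothetical, not taken from any real SAM product.

```python
# Hypothetical sketch of the matching logic a SAM browser extension
# might run over page content. The blocklist entries and phrases are
# invented examples, not drawn from any real product or dataset.

KNOWN_BOT_HANDLES = {"@example_bot_1", "@example_bot_2"}      # hypothetical
KNOWN_DISINFO_PHRASES = ["miracle cure", "secret memo leaked"]  # hypothetical

def flag_account(handle: str) -> bool:
    """Return True if the handle appears on the curated bot list."""
    return handle.lower() in KNOWN_BOT_HANDLES

def flag_text(text: str) -> list:
    """Return any known disinformation phrases found in the page text."""
    lowered = text.lower()
    return [phrase for phrase in KNOWN_DISINFO_PHRASES if phrase in lowered]
```

In a real extension, a content script would run checks like these over the rendered page and inject a warning banner next to any matches; production tools would use far richer signals than a static list.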

First generation SAM tools were Web sites like Snopes and PolitiFact that required users to visit the site and search for topics they were curious about. Second generation SAM tools are seeking deeper integration into the technologies that make up our everyday life. Some will be as simple as browser extensions, some will be as complex as augmented reality overlays.

The need to add SAM capabilities to our toolkit of online defenses is clear. Active disinformation campaigns are no longer used only by Internet trolls looking to have a little fun, or by criminals looking to socially engineer a mark for some IT security compromise. Nation-states are getting in on the game.

Some notable examples have emerged in the past few months. Popular Twitter personalities Jenna Abrams and Pamela Moore turned out to be Russian disinformation accounts, both carefully crafted to build credibility and a following before being used to spread division and disinformation during the 2016 U.S. election.

Another such account -- @TEN_GOP -- claimed to be the unofficial Twitter account of the Tennessee Republican Party. It too proved to be a carefully orchestrated fake. Despite repeated requests from many parties, including the actual Tennessee Republican Party, Twitter chose not to do anything about the account.

These are but three examples among many. BDCA tools can just as easily be put to use by the bad guys as the good guys, and by all indications the bad guys have already done this. The use of BDCA tools in combination with social and traditional media to achieve complex social engineering, disinformation, and societal division goals has created an arms race.

As BDCA tools become increasingly used to identify bots or flag fake news, it's likely that whichever side of a given debate stands to lose the most will demonize the technology. Similarly, the response of the bad guys to these tools will be predictable: they will create tools which flag objective reporting as false and/or promote disinformation accounts as truth.

As this arms race progresses it is likely that SAM tools will become a vital part of the Internet defenses not only of consumers, but of enterprises. The same BDCA tools and techniques that make disinformation campaigns easier will also make targeted social engineering for the purposes of network compromise easier and more automated.

One of the goals of second generation SAM tools is to help combat decision fatigue. By flagging obviously bogus information and/or questionable sources of information, these tools allow end users to offload some aspects of decision-making and critical thinking. Instead of having to analyze each and every piece of information they are presented with for validity, end users can simply discard flagged items from consideration altogether.
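The offloading described above amounts to filtering flagged items out of a feed before the user ever sees them. A minimal sketch of that idea, with hypothetical flagging criteria (the field names and threshold are invented for illustration):

```python
# Hypothetical sketch: drop flagged items from a feed so the user never
# has to evaluate them. is_flagged() stands in for whatever classifier
# or blocklist lookup a real SAM tool would use; the field names and
# the 0.3 trust threshold are invented for this example.

def is_flagged(item: dict) -> bool:
    """Return True for items a SAM tool would mark as bogus or dubious."""
    return (item.get("author_is_known_bot", False)
            or item.get("source_score", 1.0) < 0.3)

def filter_feed(items: list) -> list:
    """Keep only items that were not flagged."""
    return [item for item in items if not is_flagged(item)]
```

The trade-off is exactly the one the column goes on to describe: whatever logic sits inside `is_flagged` silently decides what the user never sees.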

Of course, SAM tools present their own problems. In relying on a SAM tool to analyze what you see in your browser, you are giving that SAM tool your complete browsing history. One early SAM tool known as Web Of Trust was caught selling user information late last year.

Users and organizations are also going to have to decide which SAM tools they trust to help them restore sanity to the Internet. The biases of developers find their way into the algorithms they code, and this problem is growing. The bots we create already act on knowledge we'll never fully trace, and make decisions we don't comprehend. As a result, serious effort is being put into demystifying machine thinking.

The topic of trust is bound to be a touchy one for SAM tools providers. If they see wide adoption, we will effectively be allowing them to become the authoritative lens through which we view the world. Putting any accidental biases or BDCA errors aside, a popular SAM tool with a large user base could be purposefully used in a malicious fashion; for example, to sway political opinion on a topic important to the SAM tool provider.

While there have already been some early attempts at second generation SAM tools that integrate into our various systems, 2018 will be the year of explosive innovation in this area. Take care when choosing your SAM vendor, and even more if you choose to browse the Web without one.

About the Author

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.

