First they came for the bots: US academics make case for 1984-style silencing of any dissent.
Helen Buyniski, RT
21 Aug, 2019 01:57
With the "Russian meddling" theory of Trump's victory on life support heading into 2020, US academic researchers have heeded the patriotic call and put forth a new definition of "disinformation" that includes inconvenient truths.
Social media platforms must expand their definitions of "coordinated inauthentic behavior" beyond the usual bots-and-trolls model to include conversations about topics harmful to the state if they hope to curb the spread of disinformation on their platforms, a trio of University of Washington researchers insist in a paper released ahead of the 2019 Conference on Computer-Supported Cooperative Work. To help in this quest, the researchers have redefined "disinformation" to include truths arranged to serve a purpose.
“Evaluating disinformation is less about the truth value of one or more pieces of information and more about how those pieces fit together to serve a particular purpose.”
Such an Orwellian redefinition would include the lion's share of journalism, especially opinion journalism, and sure enough, the researchers have their knives out for those who become "unwitting agents" in the spread of disinfo by writing based on anonymous tips – otherwise known as "reporting."
All it takes is one article on a "conspiracy theory" to cause a rift in society, the researchers warn, as a single story spreads to multiple outlets and then throughout the social media infosphere. Governments, meanwhile, may spend billions of dollars manipulating public opinion over social media – because it's OK to lie, as long as you're helping your country.
The paper tiptoes around propaganda campaigns run by the "good guys" – acknowledged US operations like the notorious pro-Clinton Correct the Record, while New Knowledge, rather than being called out for its fake Russian bot campaign to influence the 2017 Alabama senate election, is cited as an academic source!
Understanding that bot- and troll-hunting has limited use, the researchers focus on "actors who are not explicitly coordinated and, in some cases, are not even aware of their role in the campaign" - i.e. ordinary social media users with opinions the researchers don't like.
One "case study" examines content "delegitimizing" the White Helmets while neglecting to mention that the group and the publicity surrounding it are, themselves, part of a well-funded western influence operation against the Syrian government (with a sideline in terrorism and head-chopping). The researchers complain that anti-WH voices were not the expected bots and trolls but included "western journalists" and overlapped with "'anti-war' activism" – as if "anti-war" were an artifact of a bygone era when one could, realistically, be against war. They complain that not enough accounts retweeted pro-White Helmets articles and videos – essentially that the problem here was not enough of the right kind of propaganda.
Conspiracy theories especially get under the researchers' skin, as they have trouble untangling "conspiracy pushers" from those following mainstream news and seem incapable of realizing that people looking for answers in the aftermath of a tragedy are inclined to look in multiple places.
The researchers warn their peers not to minimize the effects of Russian "influence operations" in 2016, even if their analysis shows them to be minimal – clearly, they aren't looking hard enough (i.e., if you don't see the effects, it's not that they aren't there, it's that you aren't using sophisticated enough instruments. May we interest you in this fine Hamilton68 dashboard?).
Scientists are cautioned never to allow their hypothesis to color the way they report the results of their experiments. If the lab doesn't show something, it isn't there. But these researchers are not scientists – they, like the New Knowledge "experts" they so breathlessly cite, are propagandists. They are the droids they are looking for. At one point, they even admit that they "wrestl[ed] with creeping doubt and skepticism about our interpretations of [operations promoting progressive values] as problematic – or as operations at all." Skepticism, it seems, lost.
Social media platforms are warned that their current model of deplatforming people based on "coordinated inauthentic behavior" leaves much to be desired. If they truly want to be ideal handmaidens of the national security state, they must "consider information operations at the level of a campaign and problematize content based on the strategic intent of that campaign." It's not whether the information is true, it's where it came from – and what it might lead to – that matters. Such a model would complete the transformation of platforms into weapons in the state's arsenal for suppressing dissent, and the researchers acknowledge they might be at odds with "commonly held values like 'freedom of speech'" (which they also place in quotes), but hey, do you want to root out those Russian influence operations or not? We've got an election to win!
If at first you don't succeed, redefine success. None have heeded this maxim better than the Russiagate crowd and their enablers in the national security state, and academic researchers have long provided the grist for these propaganda mills. But the sheer chutzpah of expanding the definition of disinformation to include truths arranged to have an effect – a definition that could include most of journalism, to say nothing of political speeches and government communications – is unprecedented.
-------------
YouTube axes anti-protest channels as US Ministry of Truth battles China over Hong Kong.
Helen Buyniski, RT
23 Aug, 2019 02:30
YouTube has disabled 210 channels for posting content related to the Hong Kong protests “in a coordinated manner,” following in the footsteps of Facebook and Twitter in restricting its arbitrary censorship to pro-China accounts.
“Channels in this network behaved in a coordinated manner while uploading videos related to the ongoing protests in Hong Kong,” Google threat analyst Shane Huntley claimed in a blog post on Thursday, adding that the Google team’s “discovery” was “consistent with recent observations and actions related to China announced by Facebook and Twitter.”
Translation? The channels were “sowing political discord” on behalf of the Chinese government, and had to be stopped. How did Google know it was the Chinese nefariously attempting to poison minds against the protesters? The “use of VPNs” and “other methods of disguise” – widespread in the era of mass surveillance – was all the proof required to wipe the channels out of existence.
Twitter got the anti-China censorship ball rolling earlier this week, in perhaps the first-ever social media preemptive strike, “proactively” deplatforming hundreds of thousands of accounts for the capital crime of “sowing discord.” Their crimes included “undermining the legitimacy and political positions of the protest movement on the ground.” One could argue that the protests themselves are a form of political discord, but resistance is futile when one is charged with such an inchoate offense.
None of the social media platforms have ever defined what exactly constitutes “attempting to sow discord,” though a common thread running through the mass deplatformings of the past year suggests it involves posting in support of a government the US doesn’t like – whether Russia, Iran, Venezuela, or China.
The social media Ministry of Truth has become increasingly open about the irrelevance of truth in what constitutes actionable disinformation. One group of “experts” in the spread of disinfo online even published a paper this week explaining that true statements could constitute disinformation if they were arranged to serve a purpose, calling for platforms to expand their definition of “inauthentic behavior” to include anyone reposting information portraying the “good guys” in a negative light.
The Chinese government challenged Twitter to explain its decision to ban state-owned media from advertising, asking “Why is it that China’s official media’s presentation is surely negative or wrong?”
Beijing has pointed to a US role in fanning the flames of unrest, a charge that grows more plausible with every day the protests continue despite having succeeded in forcing the Hong Kong government to withdraw a bill that would have allowed criminal suspects to be extradited to China. Armies of pro-protest tweeters swarm any post by Secretary of State Mike Pompeo with pleas to intervene in their plight, even as US lawmakers threaten to rain down fire and fury should anyone harm a hair on a protester’s head. And photos of the protest leaders meeting with US diplomats suggest there is certainly some “coordinated inauthentic behavior” at play on the other side.
YouTube, as a subsidiary of Google, has been exposed as even more partisan than Twitter’s arbiters of truth. A whistleblower released nearly 1,000 pages of internal documentation earlier this month showing YouTube’s algorithms were aimed more at shaping reality than at accurately portraying it. The platform removed Iranian state media channels as Washington ramped up tensions with Tehran in the Strait of Hormuz, and its deactivation of pro-China channels now suggests the protests – despite achieving their initially stated goal – are far from over.