10 Terms Of Censorspeak Decoded
1. Campaign

What You Probably Think It Means: When you hear bureaucrats stress the word “campaign” in talking about their push to stop “disinformation campaigns” or “malign online influence campaigns,” you probably presume a campaign means a shadowy network of clandestine political operatives colluding, for some hidden reason, to make certain information trend online.
What It Actually Means, In Censorspeak: What “campaign” actually means, in censorspeak, is anyone who posts or shares anything that promotes a political narrative that censorship professionals deem to be misinformation. Merely endorsing or sharing a verboten misinformation narrative, ipso facto, makes you part of a so-called misinformation “campaign.”
Why It Matters: The deceptive use of the word “campaign” lets censorship professionals make it look like they are guarding the public against a malicious, highly organized covert influence operation – when what they really mean is everyday Americans talking normally with each other online about politically sensitive topics like Covid-19 or election integrity.
Examples: The Department of Homeland Security (DHS) published an online video instructing young people to report their family members to Facebook for “disinformation” if they said that Covid and the flu have similar fatality rates.
The protagonist of the cartoon is a young woman named Susan and the villain is her Uncle Steve, whose thought crime is that he posted on Facebook that “Covid is no worse than the flu.”
DHS begins the video by making it seem like their sole aim is to stop malign “online influence campaigns.”
But Uncle Steve is not a hostile foreign nation state, a covert political operative, or a fraudulent corporate scammer. He’s just Uncle Steve, an ordinary American citizen, posting his opinions about Covid on Facebook.
Another thematic example is DHS’s Office of Inspector General (OIG) August 2022 report, “DHS Needs a Unified Strategy to Counter Disinformation Campaigns.” From the title of the report, many might assume DHS is exclusively going after organized “campaigns” in the classical meaning of the term.
But here again, “campaigns” simply refers to ordinary people expressing opinions online. On page 7, for example, DHS writes:
“[C]ampaigns may aim to erode public trust in our government and the Nation’s critical infrastructure sectors, negatively affect public discourse, or even sway elections. These campaigns can have foreign or domestic origins…Specific examples of recent disinformation campaigns that targeted the United States include… false claims of voter fraud during the November 2020 elections.”
While the title of the DHS report makes it sound like they’re targeting Russian or Chinese intelligence services, or at least a fraudulent scammer of some sort, what they are actually targeting is anyone with an online opinion who scrutinizes mail-in ballots and election integrity issues.
2. Critical Infrastructure

What You Probably Think It Means: Dams, satellites, transportation lines, subsea cables, and other essential physical structures.
What It Actually Means, In Censorspeak: Anything you say on social media about a sensitive subject. Any topic area that CISA (DHS’s primary censorship bureau, the Cybersecurity and Infrastructure Security Agency) wishes to work with private partners to censor is deemed by this public-private censorship coalition as “critical infrastructure.”
Example: In the CISA video above instructing a young woman to ask Facebook to censor her uncle for disinformation, you may have wondered how a cybersecurity agency at DHS got the power to promote digital censorship of citizen opinions on Covid.
This newfound power grows out of CISA’s highly deceptive misuse of the term “critical infrastructure.”
Since healthcare is also deemed “critical infrastructure,” CISA argues that misinformation online shared by citizens about healthcare is a cyber threat to said critical infrastructure. And because DHS is tasked with defending “critical infrastructure” from threats, they believe that they can step in to neutralize the “cyber threat” of your misinformation opinions.
To drive the point home that “critical infrastructure” in censorspeak means everything and anything, CISA’s top boss Jen Easterly publicly stated that “cognitive infrastructure” – the very thoughts that are in your head – are included in the meaning of critical infrastructure as well.
3. Cyber Threat Actor

What You Probably Think It Means: Computer hackers, malware threats, hostile foreign nation states implanting spyware into US devices.
What It Actually Means, In Censorspeak: Any citizen posting so-called “misinformation” on social media.
Why It Matters: For national security state agencies like DHS, CISA, the FBI, the State Department, and the Pentagon to have any involvement in domestic censorship, there must be some plausible threat to justify their entry.
These government agencies create this pretense by conflating ordinary citizen opinions (“misinformation”) with traditionally recognized national security threats under the banner of a scary-sounding catch-all term, “cyber threat actor,” which sweeps both groups into their jurisdiction.
Example: As one example, take DHS’s Oct. 2019 blueprint report, entitled “Combatting Targeted Disinformation Campaigns: A Whole-Of-Society Issue.” This report contemplated DHS establishing a censorship coordinating and pressuring role, ostensibly to stop the spread of domestic “disinformation.”
By now, you know what “campaign” means in censorspeak when it appears in the title of the 2019 DHS disinfo report.
The phrase “threat actor” appears 58 times in the report’s mere 28 pages. At first, one might presume that when DHS says “threat actor” they mean disinformation threats from Russia, China or Iran. But the report makes clear that “threat actors” include simple citizens spreading “false or misleading information online.”
Their characterization of “threat actors” lumps in regular citizens at home speaking freely with hostile foreign nation states, as targets of the US federal government.
In fact, per DHS’s definition of “threat actor” (p. 5 of the report), simply “propagating a counternarrative to domestic protests” falls into the category of “threat actor” activity.
By this, DHS appeared to be singling out conservative online criticism of Black Lives Matter protests as a specific example of a domestic disinformation “threat actor.”
On p.14 of the report, DHS labeled conservative critics of football star and BLM icon Colin Kaepernick as “right-wing” “disinformation threat actors” for posting what appear to be online satirical memes mocking the shoe brand Nike for running an ad campaign featuring Kaepernick.
Why is the Department of Homeland Security in the business of squashing online narratives about a shoe company and a football star? They mission-creeped their way into it by conflating “mis/disinformation” with classical cyber threats — making you a threat in the eyes of DHS for speaking an unauthorized opinion.
Once the phrase “disinformation threat actor” percolated in 2019 and successfully categorized US citizen opinions on social media as a security threat, the term could be used to include Covid critics as “disinformation threat actors” as well.
See, e.g., The Disinformation Outbreak About the Coronavirus Outbreak: What to Make of the False Information Plague?:
A better understanding of the coronavirus disinformation threat actor spectrum and their intended objectives can help determine where counter disinformation efforts would focus today and in the future.
By defining “threat actors” to include “political & social groups,” the federal government has reduced “influencing audiences” to anyone who believes in something and wants to spread a message.
4. Malinformation

What You Probably Think It Means: Malware sent over online communications as a kind of hacking technique.
What It Actually Means, In Censorspeak: True information that censorship professionals can’t disprove but want to censor anyway.
Why It Matters: Censorship professionals began in 2017 with two terms – misinformation and disinformation – to designate benign versus malevolent intent of speech online. But by 2019, they began to realize they couldn’t disprove many of the claims they wanted censored. So they developed a new term called “malinformation,” creating a censorship predicate to take down opinions that are technically correct – but lead listeners to develop an unauthorized opinion.
When CISA coordinated the censorship of the 2020 election, malinformation quickly became the largest category of social media takedowns.
Example: CISA today has, in effect, a formal, permanent domestic censorship office called the “Mis, Dis and Malinformation team.”
Their icon for Malinformation is a bullhorn with the word “facts” on it, but the “facts” it spits out are missiles:
5. Media Literacy

What You Probably Think It Means: Teaching underprivileged children how to read and write.
What It Actually Means, In Censorspeak: If you read the wrong news sources or get your facts from the wrong websites online, you are media illiterate and need to improve your literacy by reading the news and sources that censorship professionals want you to read instead.
Why It Matters: Media literacy (sometimes called digital literacy) is a laundering label for censorship programs at the government, academic and private sector levels across the US. Programs that would not survive if they were called “censorship” programs are shielded under a benign-sounding euphemism that most people associate with philanthropy.
The goal of “media literacy” is to extinguish virtually all news sources that stray from a mainstream range of opinions expressed in legacy media.
Example: “Co-Designing for Trust,” a censorship project that received a $2.6 million grant of taxpayer funds from the National Science Foundation (NSF). Here is their promo video as part of the NSF’s Convergence Accelerator Track F program:
If one simply reads Co-Designing for Trust’s grant page and does not know that “information literacy” is just a censorship predicate (if people read the wrong information online, they are illiterate and must become media literate by ingesting the recommended media sources), one might not realize this is even a censorship project:
NSF Convergence Accelerator Track F: Co-Designing for Trust: Reimagining Online Information Literacies…
Misinformation – inaccurate or misleading information – has emerged as a growing threat to American democracy since it undermines citizen trust in public information and institutions. It often does so by exploiting personal beliefs, emotions, and identity, thereby triggering responses that expand social divides and encourage individuals to actively resist competing claims. Solutions must not only provide the public with skills for determining the truthfulness of claims, but must also provide resources for addressing the social and emotional impacts of misinformation. This requires a fundamental reimagining of our approach to digital literacy, so that it is better grounded in the everyday realities of the communities most impacted by misinformation. This is particularly true for underserved communities, who are disproportionately targeted by misinformation.
The project will address this need by creating local solutions alongside digital literacy interventionists, the community organizations, librarians, teachers, and others already focused on providing formal and informal education to address misinformation within their communities. The project will build community-oriented infrastructure that enables underserved communities to design, collaborate on, and share educational resources that address misinformation. It will leverage participatory design with digital literacy interventionists to create locally-contextualized digital literacy resources… It will also advance our understanding of how sociocultural contexts and knowledge systems can shape digital literacy interventions, so that these interventions are better able to motivate and support diverse communities as they resist misinformation.
6. Trust

What You Probably Think It Means: Believing something you read online is not a scam.
What It Actually Means, In Censorspeak: Believing the specific narratives, institutions, and news sources selected by the censorship industry — and disbelieving competing alternatives.
Why It Matters: “Trust” is a verbal sleight of hand used by censorship professionals to create a false impression that they are simply safeguarding the public against scams, fraud and inauthentic identity schemes on social media. In reality, they are engaged in straightforward censorship using “trust” as a watchword.
The censorship industry was birthed in early 2017 as an organized institutional response to the 2016 US presidential election “going the wrong way” after a populist candidate, Donald Trump, won primarily due to his popularity on social media.
In the eyes of censorship professionals, support for populism and alternative viewpoints is a symptom of a lack of trust in institutions and a lack of trust in mainstream media.
The censorship industry takes the view that if trust cannot be earned, it must be installed.
Rather than allowing institutions and mainstream media to change their ways and earn back the lost trust of ordinary people, censorship professionals use “trust” as a predicate to argue that any alternative view or news source that directs grievances at institutions or mainstream media is either “divisive,” “misinformation,” or something that “undermines public faith and confidence” – and therefore should be censored for damaging “trust.”
Examples: When Twitter set up its censorship office to ban people for what they tweet, they didn’t name it “Censorship.” They named it “Trust & Safety.”
Under this narrative of “trust,” when a Twitter user gets suspended, it doesn’t sound as much like Twitter is making an immoral or legally dubious decision in silencing the free speech of a US citizen. It sounds like Twitter is protecting the public by removing a threat to the “Trust & Safety” of the community.
Facebook’s censorship team is called “Trust & Safety.” YouTube’s censorship team is also called “Trust & Safety.” Virtually everywhere you get censored on the Internet, they don’t say they are censoring your opinions and views. They are just doing trust work, you see.
As another example, the Biden Administration’s $40 million domestic censorship superweapon – the NSF’s “Convergence Accelerator Track F” program – is not called Track F: Domestic Censorship. It’s called Track F: Trust & Authenticity.
Track F’s “Trust & Authenticity” premise is that if trust of the American people in institutions and media cannot be earned, it must be installed via censorship of all competing alternatives.
Full playlist here: https://www.youtube.com/watch?v=K_3tnct4zDg&list=PLGhBP1C7iCOlKk7pNWHP6BN7irpwy9z9Y
7. Resilience

What You Probably Think It Means: You probably think “resilience” means making people stronger and better able to adapt to real-world problems. When a bureaucrat says it, you probably associate the word “resilience” with harmless corporate jargon.
What It Actually Means, In Censorspeak: When censorship professionals use the word “resilience,” they mean making people resilient against believing political and social narratives the censors don’t like.
Why It Matters: Censors use “resilience” as a cover word to disguise activities, programs, and even government funding. If they simply called what they were doing “censorship,” public outrage would shut down their operation in a week. Because it is called a “digital resilience” initiative instead, the public remains blissfully unaware that “building resilience” means building resilience against their own thoughts and beliefs.
Example: On May 1, 2022, NSF gave a $200,000 grant to George Washington University (see full grant details here) with the following grant description:
Pandemic Communication in Time of Populism: Building Resilient Media and Ensuring Effective Pandemic Communication in Divided Societies
This project uses several methods to study how populist politicians distorted Covid-19 pandemic health communication to encourage polarized attitudes and distrust among citizens, thus making them more vulnerable to misinformation….
It also studies how best to counter these populist narratives and develop more effective communication channels. The research studies four areas of communication: government-led pandemic communication, media policy, media coverage, and public attitudes towards the media. The project makes an important contribution to research on populist communication and political polarization by bringing two fields of expertise — populist communication and public health — together.
The project will also study how best to counter these populist narratives and develop more efficient and reliable communication. The focus is on four countries — Brazil, Poland, Serbia, and the US — all led by populist leaders during the pandemic and capture different types of populist responses to the pandemic… This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the foundation’s intellectual merit and broader impacts review criteria.
If you just read the title of the grant, you’d think “Building resilient media? Sounds good, sure, media should be more resilient!” It would not be until you see the fine print that you’d realize their plan to make media more “resilient” is to censor social media narratives promoted by populist news sources or political leaders.
8. Friction

What You Probably Think It Means: Dragging a physical object across a bumpy or uneven rug.
What It Actually Means, In Censorspeak: Applying a range of censorship techniques to a social media post or a person’s account to make it more difficult for followers or other people to access, share, or even see the content – the same way adding more “friction” to a surface makes it more difficult for objects to move across it.
Why It Matters: Censors try very hard to avoid the “martyr effect,” which is when censorship backfires by creating public martyrs out of its victims. Censorship professionals have therefore developed a range of “friction” techniques so that people are censored in covert and nuanced ways – limiting the spread of content, hiding it, or forcing click-throughs before it can be viewed or shared.
Example: When Twitter targeted the popular news website ZeroHedge for posting a viral news article about Covid-19 origins, censorship professionals knew it would cause a “martyr effect” if ZeroHedge were outright banned from the platform just for this news story. So Twitter applied a “friction” technique: users who clicked a ZeroHedge URL on Twitter were redirected to a clickthrough page that made it look like zerohedge.com was a malware site that would infect your computer if you read the article:
Only if you squinted and read the very last bullet point – that Twitter thinks the news story you’re about to read might violate Twitter’s censorship policies on challenging Covid orthodoxies – would you have any idea that this is just a form of censorship via friction.
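Mechanically, this kind of friction gate is simple to picture. The sketch below is purely illustrative – the domain name, function, and warning URL are invented for the example and are not taken from Twitter’s actual systems:

```python
# Hypothetical sketch of a "friction" gate: links to flagged domains are not
# blocked outright, but routed through a warning interstitial that the user
# must click through before reaching the article.
from urllib.parse import urlparse, quote

FLAGGED_DOMAINS = {"example-flagged-news.com"}  # hypothetical watchlist


def resolve_link(url: str) -> str:
    """Return the destination the platform actually sends the user to."""
    if urlparse(url).netloc in FLAGGED_DOMAINS:
        # Flagged: route through a click-through warning page instead of
        # linking straight to the article.
        return "https://platform.example/warning?target=" + quote(url, safe="")
    return url  # unflagged links pass through directly
```

Nothing is deleted, so there is no ban to point to – the post technically remains shareable, just behind an extra discouraging step.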
9. Intervention

What You Probably Think It Means: Saving someone from harming themselves.
What It Actually Means, In Censorspeak: Censoring your social media post by applying a range of techniques, spanning from total account banning to “friction” techniques such as shadowbanning, search banning, recommendation banning, deboosting/deamplifying, demonetizing, link demotion, fact-check labels, or clickthrough interstitials.
Why It Matters: If censorship professionals said, “we’re censoring you,” or “we’re censoring millions of US citizens every day,” and the public knew the extent of it, there would be outrage. So the verbiage used by the censorship industry has increasingly leaned into philanthropic-sounding terms like “intervention.” We’re not censoring people, you see, we’re saving people – from themselves!
Example: Here’s a technical censorship industry white paper trying to advance the science of censorship in Nature, arguably the most prestigious scientific journal in the world. It’s called “Combining interventions to reduce the spread of viral misinformation.”
Most people would see the article start with “Combining interventions” and think, “ah, okay, so maybe people are harming themselves or others – maybe censorship is justified in this context,” not realizing that “intervention” just means censorship professionals applying their own censorship techniques.
The article recommends a combination of total account bans for some Covid skeptics, friction applied to the accounts of other Covid skeptics, and redirection to authorized opinions – all characterized merely as the range of “interventions” that work best for pushing the public to believe what censorship professionals want them to believe about Covid.
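The logic of “combining interventions” can be pictured with a toy model: each technique multiplies a post’s expected reach by some dampening factor, and stacking techniques compounds the effect. The names and numbers below are invented for illustration and do not come from the Nature paper:

```python
# Toy model of stacking "interventions": each one multiplies a post's
# expected reach by a dampening factor (all values hypothetical).
DAMPENING = {
    "fact_check_label": 0.9,  # interstitial label, mild effect
    "search_ban": 0.7,        # post no longer surfaces in search
    "deboost": 0.5,           # algorithmic down-ranking ("friction")
    "account_ban": 0.0,       # total removal
}


def expected_reach(base_reach: float, interventions: list[str]) -> float:
    """Reach remaining after applying each named intervention in sequence."""
    reach = base_reach
    for name in interventions:
        reach *= DAMPENING[name]
    return reach

# Stacking two friction techniques: 10,000 * 0.5 * 0.7 = 3,500 expected views.
```

The point of such combinations, in the paper’s framing, is that several quiet reductions can suppress a post nearly as effectively as an outright ban, without the visibility of one.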
10. Institutions

What You Probably Think It Means: Lofty ivory tower organizations that have been around for a hundred years, hosting esoteric lectures on ethics and morality.
What It Actually Means, In Censorspeak: Politically like-minded allies and organizations in (1) government, (2) the private sector, (3) civil society (academia, NGOs, and activists), and (4) news media and fact-checking.
Why It Matters: “Institutions” is the watchword censorship professionals use to describe whose interests are threatened by online narratives when demanding that your account or your social media posts be censored.
Institutions are said, by censorship industry professionals, to be essential to democracy.
Under this narrative, if you criticize the media, for example, you are attacking democracy, because you have attacked the institution of journalism.
If you raise grievances against the federal government’s response to Covid-19, you are “undermining public faith or trust” in public health institutions, or scientific institutions.
Challenging institutions is thus, ipso facto, an attack on democracy, and therefore a predicate for censorship.
Example: NSF gave a $5 million grant – of your taxpayer dollars – to a censorship project called Course Correct.
In the video below, Course Correct explains its “Precision Guidance Against Misinformation” censorship offering:
There are good journalists and bad journalists, and some of us make mistakes. That happens. So, that’s a credibility issue. But when we have that already, mistrust of the public about what we do, when you have them going to alternative sources that are misleading them, sometimes on purpose, it is not good for us, in general, because we’re getting bad information.
Without a common set of facts to move from, it’s very difficult for us to solve the biggest problems that we have as a society. Course Correct is trying to nudge us into the direction of understanding and agreeing upon the verifiable truth for the foundational issues that we need to sort through as a society in order to solve the big problems that are currently vexing us.
We are building the core machine learning data science and artificial intelligence technology to identify misinformation, using logistics, network science, and temporal behavior, so that we can very accurately identify what misinformation, where misinformation is spreading, who is consuming the misinformation, and what is the reach of the misinformation.
The misinformation is coming from a separate part of the country or, you know, it is people with a certain perspective, a political view, who are sharing a certain misinformation. You want to be able to tailor the correction based on that because otherwise there’s a lot of research that says that actually corrections don’t help if you’re not able to adjust or tailor it to the person’s context.
And Course Correct has pioneered experimental evidence showing that the strategic placement of corrective information in social media networks can reduce misinformation flow. So, the experiments we are running are able to help us understand which interventions will work. And so by testing these different strategies at the same time, Course Correct can tell journalists the most effective ways to correct misinformation in the actual networks where the misinformation is doing the most damage.
How does NSF justify taking $5 million out of taxpayer pockets and using it to censor taxpayers who have “mistrust” of mainstream media? Because media is an institution, they say, and therefore must be protected against online criticism in order to save democracy, because institutions are said to be the lynchpin of a democratic society.
The result? Your own government is taking your money to censor you to benefit themselves and their politically like-minded friends in the media.
To DiResta’s already controversial censorship industry career arc (detailed extensively here), it appears one more peculiar factoid can be added: Renée DiResta “worked for the CIA” before being recruited to perhaps the most powerful domestic censorship coordinating center in all of academia, with close ties to Big Government and Big Tech: the Stanford Internet Observatory.
That DiResta “worked for the CIA” before her Stanford disinfo role comes straight from the man who recruited DiResta to lead disinfo operations at the Stanford Internet Observatory: its director, Alex Stamos.
The video below, clipped from a June 19, 2019 livestream (see 18:02 timestamp), is currently available for public viewing on the Stanford Internet Observatory YouTube channel. It shows Alex Stamos in June 2019 – the month the Stanford disinfo center opened – touting Renée DiResta’s employment history, boasting that DiResta had gone from a lowly “part of the academic unwashed with a… degree from a public university” all the way up to having “worked for the CIA”: