Facebook and other social media platforms are censoring not only politically controversial speech but also cartoons, jokes, irony and other forms of humor.
This is not because the people who run the social media necessarily lack a sense of humor or irony, but because they have delegated the role of censor to robots: algorithms, computer software, and other forms of non-human decision-making.
It turns out, however, that these robots, brilliant as they are at playing chess and identifying potential terrorists, can't tell the difference between advocacy of violence and mocking such advocacy.
Nor can they tell the difference between hate speech and humorous ethnic jokes that employ benign stereotypes (as almost all ethnic humor does).
Although the humans who program the robots eventually permitted some of the cartoons to go online, by the time they did so, the contemporaneous impact was lost.
With censorship as with humor, timing is everything.
Employing robots to censor is a natural extension of human censorship.
So much content passes through social media every minute that human censorship is nearly impossible as a first line of defense against prohibited speech.
Nor is it likely that robots will soon be programmed so as to be able to identify humor, irony and benign stereotyping.
More likely, robots will be given more and more censorial tasks by social media platforms.
Facebook recently censored several political cartoons because its robots could not distinguish satire from offensive speech. Other social media platforms are doing the same.
The New York Times featured this new censorship under the headline: "For political cartoonists, the irony was that Facebook didn’t recognize irony."
The story included several of the censored cartoons and jokes, which seemed inoffensive to reasonable humans but apparently set off alarms among the entirely non-reasoning, non-human censors.
The other side of the coin is that some really offensive and/or dangerous material evades robot censors, because humans have figured out how the algorithms work and how to circumvent their censorship.
So the end result is that robots both over-censor and under-censor.
In the terms used by scientists, they produce both false positives and false negatives.
That would be true of human censors as well as robots, but human censors are less likely to mistake humor for deliberately hateful or otherwise dangerous speech.
They are also less likely to be circumvented by clever human attempts to use euphemism or circumlocution to fool the censor.
The problem with humans is that we are too damn slow and too limited in our capacity to monitor billions of messages.
So we are stuck with robots as the first-line censors. The one thing we can teach them is to err on the side of free speech and against censorship.
We can program them to accept the principle of "when in doubt, let it out."
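That principle can be expressed as a simple threshold policy. The sketch below is purely illustrative: the classifier score, threshold value, and function names are assumptions, not any platform's actual system. The point is the asymmetry: a post is blocked only when the model is nearly certain, so every doubtful case is allowed through.

```python
# Minimal sketch of a "when in doubt, let it out" moderation policy.
# The score and threshold are hypothetical; real platforms' models
# and thresholds are not public.

BLOCK_THRESHOLD = 0.95  # block only when the model is nearly certain


def moderate(post: str, score: float) -> str:
    """Return 'block' only at very high confidence; otherwise 'allow'.

    score: hypothetical probability (0.0 to 1.0) that the post
    contains prohibited speech.
    """
    if score >= BLOCK_THRESHOLD:
        return "block"  # high confidence: remove the post
    return "allow"      # any doubt favors free speech


# A satirical cartoon the model finds merely suspicious is published;
# only near-certain violations are removed.
print(moderate("political cartoon", 0.60))  # allow
print(moderate("explicit threat", 0.99))    # block
```

Raising the threshold trades false negatives (some genuinely dangerous posts slip through) for fewer false positives (less humor and satire wrongly removed), which is exactly the trade-off the column argues for.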
We should also "program" human censors — in universities, corporations, media and life — to err against censorship.
It's not only robots that lack a sense of humor.
Extremists of every stripe refuse to laugh at themselves.
At the risk of offending, let me repeat an old joke that stereotypes: "How many radical feminists does it take to change a light bulb?"
The answer: "That’s not funny." Actually it is funny, and insightful. And yes it stereotypes.
That stereotype reminds me of a class I taught in which I mentioned that in Canada, affirmative action applied only to "visible minorities."
A student asked whether Jews were a visible minority.
I replied, "No. We are an audible minority."
A number of students took offense at my stereotyping Jews, but most laughed at what they regarded as self-deprecating humor.
Many standup comedians refuse these days to perform on university campuses, for fear of being accused of sexism, racism, homophobia and other sins.
And this is without robot censors.
The sad truth is that the robots with no sense of humor are probably censoring less than humans who drown their sense of humor in a sea of zealotry.
Alan M. Dershowitz is the Felix Frankfurter Professor of Law Emeritus at Harvard Law School and author of "Guilt by Accusation" and "The Case Against the Democratic House Impeaching Trump."
© 2021 Newsmax. All rights reserved.