In the last month or so, and particularly as the American presidential election enters its most heated stretch, the generally poor quality of online discourse has received a lot of meta-commentary from those people and organizations whose articles are subjected to the worst bile. On the Media, an NPR show (and podcast) of media commentary and criticism, did a piece on it, and more recently The Atlantic Monthly blogger Megan McArdle found a fellow traveller in Tim Burke:
Most of the time, it seems to me that trying to write anything more reflective, more ambiguous, more exploratory in a blog is either going to bore an audience that’s come seeking their Two-Minute Hate or it’s just going to be willfully misconstrued by someone else who needs fresh meat for their own hounds to feed upon. Read the comments section at Inside Higher Education, for one example. There’s no point to trying to talk about nuance or complexity or what makes for a good research design or anything else in that kind of back-and-forth.
What I think a lot of people tend to miss in this discussion about online comments is that websites that specialize in syndication and commentary, like Slashdot and Digg, tend to do a much better job providing a decent signal-to-noise ratio (yes, I said Slashdot) than most blog and article commenting systems do. Most of the time, as with my own blog, the comments feature is essentially an afterthought: a simple list of text comments, along with a timestamp and usually some form of commenter identification. Every comment is given equal weight and exposure, and if there’s any moderation at all it’s usually manual, and enforced by the blogger themselves.
This is fine on small blogs, but once you grow to a certain size it quickly becomes unwieldy. It’s well understood that online anonymity can breed antisocial behavior, but fewer people realize that software can, to a limited (though never complete) degree, help moderate such discussions.
How It Works
Let’s take Slashdot as our example. Here’s an excerpt from the FAQ entry on their moderation system:
When moderators are given access, they are given a number of points of influence to play with. Each comment they moderate deducts a point. When they run out of points, they are done serving until next time it is their turn.
Moderation takes place by selecting an adjective from a drop down list that appears next to comments containing descriptive words like “Flamebait” or “Informative.” Bad words will reduce the comment’s score by a single point, and good words increase a comment’s score by a single point. All comments are scored on an absolute scale from -1 to 5. Logged-in users start at 1 (although this can vary from 0 to 2 based on their karma) and anonymous users start at 0.
Moderation points expire after 3 days if they are left unused. You then go back into the pool and might someday be given access again.
That entry was last modified on 6/19/00, more than eight years ago. The technology isn’t new, although Slashdot (and more recently Digg) have spent a lot of time tweaking it. This is their competitive edge: it’s not the syndication, it’s the filter. Visitors to the site start with a high comment threshold by default, and they aren’t shown any poorly-rated comments (or articles) unless they themselves lower that threshold. And because human beings on the Internet are, generally speaking, much like human beings everywhere else, anonymous, poorly-written, one-sided, uninformative, and/or abusive comments tend to get rated (or “modded”) down. Good comments are modded up. A pretty decent conversation ensues.
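The mechanics described above, a clamped absolute score plus a per-reader threshold, are simple enough to sketch in code. The following is a minimal illustration in Python of the scoring rules quoted from the FAQ, not Slashdot's actual implementation (which is Perl); the class and function names here are hypothetical:

```python
from dataclasses import dataclass

# Score bounds from the Slashdot FAQ quoted above: all comments
# live on an absolute scale from -1 to 5.
MIN_SCORE, MAX_SCORE = -1, 5

@dataclass
class Comment:
    author: str   # empty string could model an anonymous poster
    text: str
    score: int    # starts at 1 for logged-in users, 0 for anonymous

def moderate(comment: Comment, up: bool) -> None:
    """Spend one moderation point, clamping to the absolute scale."""
    delta = 1 if up else -1
    comment.score = max(MIN_SCORE, min(MAX_SCORE, comment.score + delta))

def visible(comments: list[Comment], threshold: int = 1) -> list[Comment]:
    """Return only the comments at or above the reader's threshold."""
    return [c for c in comments if c.score >= threshold]
```

A reader who keeps the default threshold never sees a comment that has been modded down below it; lowering the threshold reveals everything, which is exactly the opt-in behavior described above.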
A lot of infrastructure goes into making this successful. You need to let users log in, to track their moderation points and reputation (“karma,” in Slashdot lingo), and, most importantly, you need to spend a lot of time securing the system against the inevitable onslaught of trolls. It’s hard to do, and well beyond the scope of most blogs. Even large blogs don’t have a consistent enough user community to justify the kind of effort that goes into maintaining such a system, although I’d be surprised if some enterprising soul hasn’t tried to build one around Gravatar or a similar service (though my extremely casual Googling didn’t reveal anything).
Good Conversationalists Self-Moderate
Of course, not all successful conversation systems have been technologically policed. The most prominent example is Usenet, for years the authoritative way to engage in group discussion on the Internet. But it took Usenet years to evolve the powerful etiquette and culture that underpins the community (‘community’ itself being an extremely loose term in this context), and educating newcomers has always been difficult. It’s no coincidence that Godwin’s Law, perhaps the most famous Internet adage on heated conversation, was born there way back in 1990.
So none of this is new, and it was going on years before the New York Times deigned to dip its toe ever so gingerly in the pool. But the software that’s been deployed simply isn’t very good at regulating the quality of those conversations yet, and it can’t begin to compete with websites that specialize in it. Until developers and authors recognize the need for discussion systems that encourage conversations rather than chatter, and that involve their readers in the process, expect not the mud to cease its wingèd flight.