O'Reilly: "Mechanism is better than policy"
So rather than a blogger's code of standards, perhaps what I ought to be calling for is moderation systems integrated with the major blogging platforms.
John at LibraryThing wrote:
"One technical suggestion, employed by my employer: letting users flag inappropriate comments, which then become click-to-see. This lowers the visibility of the trolls without censoring them. For an example, see this thread: http://www.librarything.com/talktopic.php?topic=8702
Message 5 is no longer immediately visible, because it was flagged by a certain number of users as inappropriate. But it can still be seen, if you want to, by clicking on the 'show' link. It's a compromise, but perhaps a practical one.
Similarly, it might help the situation to let users configure whether or not they want to see flagged content, and to set the default for flagged content to some sort of reduced visibility."
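The flag-and-collapse mechanism John describes could be sketched roughly as follows. The `Comment` class, the flag threshold, and the collapsed-placeholder text are all my own assumptions for illustration, not LibraryThing's actual implementation:

```python
# Minimal sketch of flag-based "click-to-see" moderation: once enough
# distinct users flag a comment, it is collapsed by default but never
# deleted, so the conversation can still be reconstructed.

FLAG_THRESHOLD = 3  # assumed: flags needed before a comment collapses

class Comment:
    def __init__(self, author, body):
        self.author = author
        self.body = body
        self.flagged_by = set()  # user ids, so repeat flags don't double-count

    def flag(self, user_id):
        self.flagged_by.add(user_id)

    def is_collapsed(self):
        return len(self.flagged_by) >= FLAG_THRESHOLD

def render(comment, show_flagged=False):
    """Return display text; collapsed comments become a 'show' placeholder.
    A reader who opts in (show_flagged=True) always sees the full text."""
    if comment.is_collapsed() and not show_flagged:
        return "[flagged by users - click 'show' to view]"
    return comment.body
```

Because the comment body is preserved rather than deleted, the "show" link can always recover the original text, which is exactly what distinguishes this from moderation-by-deletion.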
I really like this, as it addresses one of my biggest hesitations about deleting comments: deleting part of a conversation can make it impossible to reconstruct what really went on. There have also been problems in the past with blog owners selectively editing conversations to present themselves in the best possible light. A mechanism that preserves comments while hiding them "in the back room," so to speak, would be a genuinely useful tool.
I'm in complete agreement with Tim, and have been a believer in this idea for a very long time. Unfortunately, it is unlikely that implementing this idea on a large scale will reduce criticism, as demonstrated by the continuing attacks on AutoAdmit despite that site having an off-topic filter that by default hides 99+% of offensive content.
Labels: Anthony Ciolli, blogging
2 Comments:
While this "hides" the offensive content from those who are there only to read about admissions, it does nothing for those who might want to participate in non-admissions conversations that aren't offensive (e.g., "did you like Casino Royale?"). It also does nothing about the Google problem in which offensive content that names individuals can be found by someone who never goes to AutoAdmit's front page at all. (For solutions that might fix the Google problem without entailing deletion of messages, see my latest comment on the De Novo post about AutoAdmit.)
Don't make me laugh.
AutoAdmit's "off-topic" flag isn't cunning. It's not even close to a beta version of the kind of reputation-management system that comes out of the box with Scoop and is implemented so well at Slashdot. If AutoAdmit actually had a reasonable system, as opposed to the "off-topic" flag, it wouldn't get half this heat.
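The reputation systems the commenter points to work roughly like this (a sketch of my own understanding of the Slashdot-style mechanics, not Scoop's or Slashdot's actual code): moderators nudge a per-comment score within a fixed range, and each reader picks a browsing threshold below which comments are hidden:

```python
# Sketch of score-based comment filtering in the Slashdot style.
# Scores are clamped to a fixed range; readers filter by threshold,
# so low-scored comments fade from view without being deleted.

MIN_SCORE, MAX_SCORE = -1, 5  # assumed range, following Slashdot's convention

def moderate(score, delta):
    """Apply a moderation adjustment, clamped to the allowed range."""
    return max(MIN_SCORE, min(MAX_SCORE, score + delta))

def visible_comments(comments, threshold):
    """comments: list of (score, text) pairs.
    Return only the texts at or above the reader's chosen threshold."""
    return [text for score, text in comments if score >= threshold]
```

The key design point is that the filter is per-reader: one person can browse at threshold -1 and see everything, while another browses at +3 and sees only highly rated comments.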
Here's a hint: tie the "off-topic" flag to an entry in your robots.txt file, so that "off-topic" threads are never crawled. (A "noindex" robots meta tag on flagged pages works wonders for this as well.) If you'd done that, the Washington Post article would never have been published, because Google would never have picked up the names of any young ladies whom your trolls decided to victimize.
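The commenter's suggestion could be sketched as below. The `/thread/<id>` URL scheme and both helper functions are my assumptions for illustration; the point is only that the off-topic flag drives what search engines are allowed to index:

```python
# Sketch: make the "off-topic" flag control search-engine visibility,
# either via a per-page robots meta tag or via robots.txt entries.

def robots_meta(is_off_topic):
    """Return the robots <meta> directive for a thread page."""
    if is_off_topic:
        # noindex keeps the page out of search results; nofollow keeps
        # crawlers from following its links.
        return '<meta name="robots" content="noindex, nofollow">'
    return '<meta name="robots" content="index, follow">'

def robots_txt(off_topic_ids):
    """Alternative: a robots.txt disallowing crawls of flagged threads,
    assuming a hypothetical /thread/<id> URL scheme."""
    lines = ["User-agent: *"]
    lines += ["Disallow: /thread/%d" % tid for tid in sorted(off_topic_ids)]
    return "\n".join(lines)
```

One caveat worth knowing: robots.txt blocks crawling but a URL can still appear in an index if other sites link to it, so the per-page noindex tag is the more reliable of the two approaches for keeping flagged content out of search results.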
AutoAdmit has nothing remotely approaching the kind of technology under discussion by O'Reilly. If it did, you would almost certainly have avoided a lot of trouble.