Deletion policy

This is a summary of which LessWrong posts or other content may be deleted by administrators and moderators, with some explanation of why.

Background theory: Although in many cases the restrictions are not obvious, in almost any given forum speech is not free. An academic conference may pride itself on vigorously protecting the right of an undergraduate to ask difficult questions during a professor's Q&A, and yet even this act potentially exposes the undergrad to loss of status if the question is stupid. More importantly, the conference probably requires a fee to attend - the vaunted 'free speech' doesn't extend to allowing mere mortals to wander in off the street and talk. And anyone who started talking during someone else's presentation, or swearing into the microphone during Q&A, might be quickly removed - the apparently free plains are really carefully walled gardens. Even 4chan, which in many ways provides a much better picture of what real 'free speech' would look like, still has administrators who delete spam.

In the era of the Internet, your attention is a commodity; everyone wants it, and some people are willing to outright steal it. Everyone might perhaps enjoy being able to speak with complete freedom in every forum - the way you would like to be able to go on national television any time you wanted. In this sense, anyone can be expected to object when they want to say something to an audience and are somehow prevented from doing so. But it wouldn't actually be possible to run radio, television, or the Internet, and your cellphone would be inundated with spam calls, if 'free speech' meant everyone could talk in any forum at any time. The real danger is when an entity with sufficient power steps in to prevent a sentence from being spoken anywhere, by anyone, to anyone. But Internet message boards are not all-powerful entities, and alternatives exist everywhere (like, you know, 4chan); the fact that not everyone is allowed to say anything at any time, even if those individuals themselves might prefer otherwise, does not constitute a terrible threat to civil liberties.

Most of the burden of moderation on LessWrong is carried by upvotes and downvotes - comments the community doesn't want to hear will be downvoted. We encourage you to downvote any comment that you'd rather not see more of - please don't feel that this requires being able to give an elaborate justification.

Even so, administrators and moderators may sometimes entirely delete comments in cases such as the following.

1) Prolific trolls can post enough comments that users become fatigued with downvoting them. Once a commenter has been downvoted sufficiently strongly, sufficiently many times, an administrator or moderator may go through and delete that commenter's other comments at their whim, whether or not those comments have also been downvoted.

2) Trolls thrive on attention. A few users who are unable to restrain themselves from feeding trolls represent a threat to the community, because they provide troll-food and make LW a more attractive place for trolls. People who reply to trolls, even if their comments are amazingly intelligent, even if their comments are upvoted, may find their comments deleted. We strongly recommend downvoting 'all' replies to troll comments. A 5-point karma toll applies if you try to reply anywhere inside a thread beneath a sufficiently downvoted comment. Sufficiently downvoted comments are not expanded by default. Replies to descendants of sufficiently downvoted comments do not appear in Recent Comments. (This used to be a large problem. Can you tell?) A rough sketch of how these mechanics fit together appears below, after item 3.

3) Spam is deleted outright, so that it cannot serve a useful SEO purpose for spammers.
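
For the curious, here is a minimal sketch in Python of how the anti-troll-feeding mechanics described in item 2 might fit together. It is an illustration, not LessWrong's actual implementation: the threshold value, class name, and function names are assumptions made up for this sketch; only the 5-point toll figure comes from the policy text above.

```python
# Hypothetical sketch of the mechanics in item 2 (not LessWrong's actual code).
# TROLL_THRESHOLD is an assumed value; only the 5-point toll is from the policy.

KARMA_TOLL = 5          # karma cost for replying beneath a heavily downvoted comment
TROLL_THRESHOLD = -3    # assumed "sufficiently downvoted" cutoff

class Comment:
    def __init__(self, score=0, parent=None):
        self.score = score      # net karma of this comment
        self.parent = parent    # parent Comment, or None for a top-level comment

    def in_downvoted_thread(self):
        """True if this comment or any ancestor is at or below the troll threshold."""
        node = self
        while node is not None:
            if node.score <= TROLL_THRESHOLD:
                return True
            node = node.parent
        return False

def reply_karma_cost(parent):
    """A karma toll applies anywhere inside a thread beneath a sufficiently downvoted comment."""
    return KARMA_TOLL if parent.in_downvoted_thread() else 0

def collapsed_by_default(comment):
    """Sufficiently downvoted comments are not expanded by default."""
    return comment.score <= TROLL_THRESHOLD

def shown_in_recent_comments(comment):
    """Replies beneath a sufficiently downvoted comment are left out of Recent Comments."""
    ancestor = comment.parent
    while ancestor is not None:
        if ancestor.score <= TROLL_THRESHOLD:
            return False
        ancestor = ancestor.parent
    return True
```

The intended effect of logic along these lines is that anyone tempted to feed a troll pays a cost for doing so, while the troll's thread is simultaneously made less visible.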

The following reasons for deletion are somewhat more controversial. Please keep in mind that this board is owned and run by organizations which consider it to be their property, and that our trying to explain what may get deleted and why does not mean that these decisions are subject to public vote.

4) Posts which reward outside trolls with attention.

It's worth noting that the entire phenomenon of terrorism exists simply because the media rewards terrorists with attention - a larger-scale analogue of feeding trolls. If we lived in a world where nobody knew anything about the group that felled the World Trade Center - not who they were, not their names made famous, not their goals given publicity - then the World Trade Center would not have been felled in the first place. Aliens watching the whole affair might justly have concluded that Earth's public media were the ones responsible for most terrorism, since they are the key component in a symbiotic relationship whereby terrorists commit deeds that sell lots of newspapers, and are rewarded with publicity for their causes. Of course no actual conspiracy is necessary for this situation to obtain - it is merely a Nash equilibrium of the private incentives of the media and of terrorists.

The point remains that rational agents would not reward people who do or say stupid things with attention, publicity, or, as the case may be, links which contribute LessWrong's Google PageRank to their websites.

5) Posts or comments purporting to discuss 'hypothetical' violence against identifiable real people or groups, or to 'ask' whether that violence is a good idea, may be deleted by administrators or moderators.

This means that if you write an elaborate post exploring the potentially great reasons to kill Brittany Fleegelburger or burglarize the homes of people with purple eye color (note that this page is being written so as not to identify any individuals as targets of violence), the post or comment, along with replies to it containing sufficient information to identify such an individual, may be deleted without further ado.

In general, grownups in real life tend to walk through a lot of other available alternatives before resorting to violence. To paraphrase Isaac Asimov, having your discussion jump straight to violence as the solution to any given problem is a strong sign of incompetence - a form of juvenile locker-room talk. (We emphasize to the casual reader that this situation has arisen very rarely over the multi-year lifetime of a message board with thousands of commenters; the vast majority of LW commenters appreciate this point.) Pleading "But how can we gather information if we can't talk about it?" is not an excuse, given the point above, and given that talking about violence against identifiable people can cause them to feel justly threatened, and to justly complain. Would you like it if lots of 'hypothetical' scenarios were being discussed in which it was a great idea to shoot you? No, because you wouldn't want that idea on the minds of thousands of people, at least one of whom might be crazy. This is why - in addition to obvious legal and public-image issues - we think it is actually harmful to jump straight to violence as a solution and then talk about it. People who really cared about an issue would realize the obvious knock-on negative effects of violence itself, and the negative effects on the issue's image of discussion that dwells on violence, and would talk about the many other alternatives instead.

This similarly applies to someone who, in reply to an argument that e.g. a substance called congohelium is harming the environment, says, "Ah, but if you go around saying that congohelium harms the environment, then someone may assassinate congohelium manufacturers! How terrible!" This is a form of the fallacy of appeal to consequences - someone trying to make the thesis "Congohelium harms the environment" look bad by hypothesizing that if this idea is true and believed, some crazy person might go assassinate congohelium manufacturers, which is icky violence. But in this case, the only person who raised, and gave publicity to, the idea of violence against congohelium manufacturers was the person trying to make the idea look bad - in fact, they're demonstrating that they're willing to throw congohelium manufacturers under a truck for the sake of winning an online argument. So if you're the first one to allege that some idea you don't like implies violence (as the first and only alternative) toward identifiable targets, do not be too surprised if this comment itself is deleted.

6) Information hazards.

When a certain episode of Pokemon contained a pattern of red and blue flashes capable of inducing seizures, 685 children were taken to hospitals, most of whom had seen the pattern not in the original Pokemon episode but in news reports showing the footage that had induced the seizures. (Remember! It's not idiocy, it's incentives!) Another example of an information hazard: if you look at the Wikipedia entries of people rumored to be major players in the Russian mafia, you will see no mention of their putative criminal activities. This is because, among other reasons, the people who run Wikipedia do not actually want to get assassinated. We would delete such information if posted, even if it were both true and important, because it is not (for unusual reasons, not mental-health reasons) physically healthy to know it. Actual information hazards are pretty darned hard to invent, but rest assured that if you post or comment something that, if true, could [only harm the reader in some odd fashion], we reserve the right to delete it.

7) Some forms of discussion of criminal activities.

We live in a society with many stupid laws (such as the US's ban on marijuana) which are not actually enforced, or are enforced selectively and not against middle-class people (such as the US's ban on marijuana). There is not a blanket injunction against discussing things on LessWrong that happen to be illegal in one or more jurisdictions. On the other hand, some things are illegal for reasonably good reasons. And also on the other hand, if there actually is good reason to break a law, then by publicly discussing your lawbreaking on the Internet you fail forever as a master criminal. In general, there is the following twin test to apply before discussing usually-illegal activities on LW: "Is it true that, whether this was a good idea or a bad idea, it would in either case probably be a bad idea to discuss it on LessWrong?" In the case where something is actually a bad idea, discussing it may waste people's time, cause unfavorable publicity, give a tiny fraction of the population the impetus to do something stupid, etcetera; and in the case where something is a good idea, discussing your intended crime on the Internet is still stupid. Please think twice about both sides of this question before discussing something illegal.

8) Topics we have asked people to stop discussing.

LessWrong is focused on rationality and will remain focused on rationality. There's a good deal of side conversation that goes on, and this is usually harmless. Nonetheless, if we ask people to stop discussing some topic other than rationality, and they go on discussing it anyway, we may enforce this by deleting posts, comments, or comment trees.

None of the above guidelines is binding on moderators or administrators. We aspire to have large amounts of common sense and are not forced by this wiki page to delete anything.