User:Woozle/metapolitics

Definition

The word "metapolitics" does not seem to have an established use within LessWrong, so I am co-opting it to mean discussion about "politics" from the outside rather than from within.

It has been suggested that the word "metapolitics" might refer to discussion of ideologies and "isms", so long as those discussions don't delve into actual politics. That isn't what I want to talk about, so I'm going to dub that sort of discussion "meta-ideology" or possibly "ideological metapolitics" in case I need to refer to it.

What I do want to talk about is:

"Politics"

Just as a calibration against common usage, the dictionary definition of "politics" can be summarized as follows:

  1. governance
  2. the details of politics ("actions, practices, or policies")
  3. power struggles
  4. political views of a person
  5. interactions within a society

There seems to be a view here at LessWrong that the basic meaning of the word "politics" is #3, and that all the other definitions except #1 refer back to that usage. (#1 doesn't seem to be included in LessWrongian usage except perhaps in the sense that "pearl" includes "grain of sand".)

This view seems to be based on the idea that people have conflicting terminal values which cannot be further resolved (as exemplified by the babyeaters sequence) -- which is in turn based on the idea that there is no universal ethical principle against which all moral actions can be measured.

I disagree with both of those assertions. It seems clear to me that people often do accept the premise that personal beliefs are only justifiable in terms of a larger good, and fall back to the "identity" argument only when cornered, i.e. when it becomes obvious that their justification doesn't work.

So, in summary, I claim:

  • People believe that their morals are based on some larger good.
  • Although people can't always articulate what this good is, there is broad agreement about what is "good" and "bad" at a much deeper level than many stated terminal values.
  • Honest rationalists will update their beliefs so as to maximize this larger good[1], regardless of whether they had built an "identity" around their existing beliefs.
  • Although we do not yet know enough about the mind to write an algorithm for "right" and "wrong", it should be possible to create a "badness"-minimizing algorithm using humans as black-box morality-measuring processes; a toy sketch follows this list. (It may, in fact, depend heavily on experience -- the more we share data, the more we will agree on right and wrong, but as yet we lack the technological capability to dump memories from one brain to another, so we have to settle for a proxy.)
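
As a rough illustration of that last claim -- a minimal sketch only, in Python, where the judge functions, options, and scores are hypothetical stand-ins of my own rather than anything worked out above -- each human is treated as a black box that returns a badness score, and the algorithm simply picks whichever option aggregates to the least badness:

  from statistics import mean

  def aggregate_badness(option, judges):
      """Treat each human judge as a black box that maps an option to a badness score."""
      return mean(judge(option) for judge in judges)

  def least_bad(options, judges):
      """Return the option whose aggregated badness is lowest."""
      return min(options, key=lambda option: aggregate_badness(option, judges))

  # Hypothetical stand-ins: real "judges" would be people answering "how bad is this?"
  judges = [
      lambda option: option.count("harm"),   # penalizes options that mention harm
      lambda option: len(option) / 100.0,    # penalizes complicated options
  ]
  options = ["policy with some harm", "simpler policy", "policy with harm upon harm"]
  print(least_bad(options, judges))          # -> "simpler policy"

The point of the sketch is only that the humans stay opaque: the algorithm never needs a theory of why something is judged bad, it just needs the judgments.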

I would also add that anyone who refuses even to try to demonstrate how their moral system works towards the "greater good" is, by default, forfeiting any standing in a conversation about whether their beliefs serve that good. The "identity" argument -- refusal to defend a pseudo-terminal value -- does not provide amnesty from criticism.

The "identity" argument is a form of hiding from the truth, and a fake explanation (I believe X because it's who I am) and curiosity stopper (curiosity about the reasons for my belief will destroy my identity!).

I have been told I need to read the metaethics sequence before I can argue further, so I am doing that. --Woozle 23:52, 17 September 2010 (UTC)

Footnotes

1. There's some further discussion to be had about whether serving the "greater good" is a form of serving one's own interests or whether it is innate, but that refinement only makes sense if you accept the basic claim that there is a "common good".

Progress Notes

I accept the idea that there is no argument so compelling that no mind will reject it. I suggest that there are arguments which no rational ethical mind would reject, where "ethical" refers to a subspace of AI design space which is rigorously definable but which we do not yet have sufficient knowledge to define. (Perhaps "ethical" can be further broken down into "rational"+"compassionate".) I suspect, however, that there is no shortcut to determining which arguments would be universally compelling: you have to actually test them on all known rational minds (which is itself a shortcut for testing predictions about reality).

Being rational minds ourselves, however, we can determine (to an arbitrary degree of reliability) whether any given mind is functioning rationally in any given case by examining its inputs, logical processes, and outputs (to an arbitrary degree of thoroughness). This is how, for example, scientific peer review works -- and why the results produced by science are so much more reliable than those produced by politics (as it now stands).


The idea that politics ultimately boils down to battles between conflicting ideologies (or "identities") -- that there is generally no solution which benefits both sides (the "common good") -- is zero-sum thinking.


I believe I am capable of making rational/ethical decisions, and I believe that I can, with sufficient questioning, determine whether or not others are being rational/ethical -- and that my own rationality/ethicality can be determined by the same method. The trick is coming up with a set of rules by which this can be done -- hence rationality detection, which I think can also be applied to the question of whether an argument is "ethical".

The obvious objection, I think, would be: what if some sizable group of people rejects whatever set of rules we come up with? How can we resolve the dispute if we can't agree on the rules?

I would suggest: our rules are self-consistent. If they can come up with a similar set of self-consistent rules, then we can see whether there is any compatibility (e.g. whether a better set of rules can be constructed from the best pieces of the two existing sets). If we cannot, then it may be necessary to accept that our worldviews are incompatible and bump down the ladder of negotiation to the next level -- provided their worldview at least respects the concept of honesty; if it doesn't, then we must bump further down the ladder.


New thought (just from reading this over): optimizing for "minimum badness" also optimizes for "goodness"; a toy sketch follows the list below. The classic counter-scenarios against optimizing for minimum badness are:

  • a faux utopia where everything is uniform and people are "happy" but all creativity is squelched.
    • counter: lack of creativity (or freedom to create/express) is a badness, and needs to be taken into consideration -- so the algorithm would not consider a faux utopia a successful outcome, and work would continue to try to improve it (assuming we somehow got there in the first place).
  • kill every living thing in the universe painlessly right now. No pain = no badness. The absence of pain from this point forward is surely essentially infinite, and thus ought to outweigh any other considerations.
    • counter: The absence of those living things from this point forward is a badness, and just as infinite as the absence of pain. (You would have to show that the net amount of pain in the universe outweighs the goodness of life in the universe. This question may not be amenable to objective debate -- but if it's what you believe, then why not kill yourself and get out of the discussion? Or, more rationally, what is the basis of your belief?)
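
To make those counters concrete, here is a toy continuation of the earlier sketch (again with made-up world-states and invented scores of my own, not anything argued for above): if the absence of life and squelched creativity are themselves counted as badness terms, neither the faux utopia nor the empty universe comes out as the least-bad option.

  def badness(world):
      """Sum the badness terms of a candidate world-state (crude scores from 0 to 1)."""
      return (world["suffering"]             # pain experienced by its inhabitants
              + (1 - world["creativity"])    # squelched creativity/expression counts as badness
              + (1 - world["life"]))         # absence of living things counts as badness

  # Hypothetical world-states with invented scores, purely to exercise the arithmetic.
  candidates = {
      "status quo":          {"suffering": 0.6, "creativity": 0.7, "life": 1.0},
      "faux utopia":         {"suffering": 0.1, "creativity": 0.0, "life": 1.0},
      "empty universe":      {"suffering": 0.0, "creativity": 0.0, "life": 0.0},
      "genuine improvement": {"suffering": 0.3, "creativity": 0.8, "life": 1.0},
  }

  least_bad_world = min(candidates, key=lambda name: badness(candidates[name]))
  print(least_bad_world)  # -> "genuine improvement"

The two degenerate "solutions" only look optimal when the badness function forgets to count what they destroy.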