Chat Logs/2010-02-18

The following is a mostly-complete, unedited log of the Less Wrong IRC chatroom from February 18, 2010. The conversation was about the post How Much Evidence Does It Take?, and the meetup thread was created by Byrnema.

See Chat Logs for a list of other chat logs.

[20:15] <jimrandomh> Hello
[20:15] <Byrnema> I'm going to post one more announcement on LW.
[20:15] <MrHen> arundelo was here earlier and will probably be back soon
[20:15] <MrHen> okay
[20:15] <_kevin___> Maybe not tonight, next one do as a top level post
[20:15] <jimrandomh> What time was published as the start time?
[20:15] <MrHen> 8:15
[20:16] <MrHen> CST
[20:16] <jimrandomh> That's now
[20:16] <_kevin___> i think eliezer will promote it and it'll stay at 1 or 2 or
3 points, like a meetup thread
[20:16] <MrHen> Yep
[20:16] <MrHen> If it goes well tonight that sounds good
[20:17] <Byrnema> So I see we have 15 or so people here, great!
[20:17] == arundelo [~adb@xxxxxxxxxxxxxx] has joined #lesswrong
[20:17] <MrHen> most of these just sit in the channel
[20:17] <MrHen> all the time
[20:17] <MrHen> :)
[20:17] <jimrandomh> Hmm, I should connect our previous discussion with the
stated topic of this session, which is evidence, and recap a bit
[20:17] <MrHen> haha
[20:17] <MrHen> maybe later
[20:18] <MrHen> okay, I think this is everyone who said they would be here
[20:18] <MrHen> Alicorn was on earlier but didn't say if she'd be back
[20:19] <Byrnema> To begin, I thought we might begin with a question about the
post or try just summarizing it. Did you have any ideas on beginning?
[20:19] <MrHen> http://lesswrong.com/lw/jn/how_much_evidence_does_it_take/
[20:19] <MrHen> for those who don't know what we are talking about
[20:20] <MrHen> Hmm... questions sound good to me
[20:20] <MrHen> This post in particular wasn't terribly hard to understand, but
the application is what is driving me crazy
[20:20] == dclayh [~4849ee20@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] has joined
#lesswrong
[20:21] <MrHen> I did a little math while I was waiting
[20:21] <MrHen> and wouldn't mind double-checking that I have a few things
correct
[20:21] <MrHen> The example in the post is using boxes that have a 100% beep
ratio for winning tickets
[20:21] <MrHen> And 25% beep ratio for losing tickets
[20:22] <MrHen> And later this is described as providing 2 math bits of
information
[20:22] <MrHen> Which makes sense: 25% = 1/4
[20:22] == komponisto [~ad1ae167@gateway/web/freenode/x-kmseqctnsogqtfie] has
joined #lesswrong
[20:22] <MrHen> Which is 2^(-2)
[20:22] <Byrnema> Right (1 math bit would be discriminating with 50% accuracy)
[20:23] <MrHen> So how much information would you get if the situation was 80%
beep chance for winning tickets and 40% beep chance for losing tickets?
[20:23] <Byrnema> Sounds complicated.
[20:24] <MrHen> As best as I can tell, the answer is 2 bits
[20:24] <MrHen> And here is the math:
[20:24] <Byrnema> No, it would have to be less, right?
[20:24] <MrHen> Assume 9 losing tickets, 1 winning ticket
[20:24] <MrHen> we have 9:1 odds
[20:25] <MrHen> we will beep on 3.6 losing tickets and .8 winning tickets
[20:26] <MrHen> which gives us 3.6:0.8 odds
[20:26] <MrHen> which is the exact same as 4.5:1 odds
[20:27] <MrHen> oh, wait, you're right
[20:27] <MrHen> it is 1 bit
[20:27] <MrHen> it's the same as what happens when there is a 50% chance to beep
on losing
[20:27] <MrHen> and it 100% on winning
[20:27] <Byrnema> about, but, yes, that looks like a clever way of generalizing
a bit even to the case when your accuracy isn't 100% with identifying winning
tickets
[20:28] <MrHen> right
[20:28] <MrHen> So, when looking for bits, is it *always* okay to simplify the
likelihood ratio so 1 is on one side?
[20:29] <jimrandomh> It's exactly the same as if you took a 100%/50% box, and
lost 20% of the tickets, both winning and losing. Since you had a large number
of indistinguishable (maximum win chance) tickets to begin with, this doesn't
change anything
[20:29] <MrHen> yeah, exactly
[20:29] <jimrandomh> which is why you can scale the two probabilities by a
multiplicative constant without changing the amount of evidence
[20:29] <MrHen> I just wanted to make sure that there wasn't some secret end
case that messes with this
[20:30] <MrHen> So, cool, that makes sense to me
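The arithmetic MrHen and jimrandomh just settled can be checked in a few lines of Python (a minimal sketch added for illustration; the function name is made up):

```python
import math

def evidence_bits(p_beep_win, p_beep_lose):
    """Bits of evidence one beep carries: log2 of the likelihood ratio."""
    return math.log2(p_beep_win / p_beep_lose)

# The article's box: always beeps on winners, beeps on 1/4 of losers -> 2 bits.
article_box = evidence_bits(1.0, 0.25)

# MrHen's 80%/40% box: same ratio as a 100%/50% box -> 1 bit.
mrhen_box = evidence_bits(0.8, 0.4)

# Scaling both probabilities by a constant (jimrandomh's point) changes nothing.
scaled_box = evidence_bits(0.4, 0.2)

print(article_box, mrhen_box, scaled_box)
```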
[20:31] <arundelo> Could you phrase the case of the "accuracy isn't 100% with
identifying winning tickets" in terms of the following quote from the article?
[20:31] <arundelo> "[...] then I have transmitted three bits of information to
you, because I informed you of an outcome whose probability was 1/8."
[20:32] <MrHen> Processing...
[20:32] <Byrnema> In the article, it was assumed that the box always beeped for
the winning lottery numbers. However, in real life, tests sometimes miss a
winner.
[20:33] <arundelo> In other words: what is the "outcome whose probability is
1/2" (1 bit) in "we will beep on 3.6 losing tickets and .8 winning tickets"?
[20:33] <MrHen> I am trying to find where you got the first quote
[20:33] <MrHen> Oh, okay
[20:33] <MrHen> If a box beeps with 80% accuracy on winning tickets and 20%
accuracy on losing tickets, each box gives us 1 bit of information
[20:34] <Byrnema> Some terminology from breast cancer screening: "The
specificity of mammography is the likelihood of the test being normal when
cancer is absent, whereas the false-positive rate is the likelihood of the test
being abnormal when cancer is absent. If specificity is low, many false-positive
examinations result in unnecessary follow-up examinations and procedures. "
[20:34] <MrHen> Err, 2 bits
[20:35] <MrHen> I keep misspeaking
[20:35] <MrHen> Trying again: If a box beeps with 80% accuracy on winning
tickets and 20% accuracy on losing tickets, each box gives us 2 bits of
information
[20:35] <arundelo> That depends on the ratio of winning to losing tickets,
though, right?
[20:35] <jimrandomh> Those numbers (80% and 20%) aren't accuracies, just
probabilities
[20:36] <MrHen> Yes
[20:36] <jimrandomh> Right, the original ratio of winning to losing tickets is
the prior probability
[20:36] <Byrnema> (Sorry, please ignore the breast cancer test terminology, this
is much better: http://en.wikipedia.org/wiki/Sensitivity_and_specificity)
[20:36] <MrHen> Okay, arundelo's question is good
[20:37] <MrHen> Does the amount of information bits we get from a box depend on
how many winning and losing *tickets* are there?
[20:37] <jimrandomh> And the posterior probability is the ratio of winning to
losing tickets in the pile of tickets that made your box(es) blink
[20:37] <komponisto> No.
[20:38] <jimrandomh> Err, type error there, it's the ratio of winning to total
tickets, not winning to losing
[20:38] <Byrnema> Oh: the original ratio in MrHen's problem was 9 losing tickets
to 1 winning ticket.
[20:39] <MrHen> Komponisto, were you answering my question?
[20:39] <komponisto> Yes. :-)
[20:39] <MrHen> Cool, just checking
[20:39] <MrHen> :)
[20:39] == Cyan [~ae5849fc@gateway/web/freenode/x-cnwzliwkgjvixext] has joined
#lesswrong
[20:39] <jimrandomh> Hello
[20:39] <MrHen> So, to clear this up
[20:40] <jimrandomh> We were just discussing Bayes' rule, using the example from
the "How Much Evidence Does It Take" article
[20:40] <MrHen> If a box gives us 2 bits of information (as it does in the
article), it will give us 2 bits regardless of how many winning or losing
tickets there are
[20:40] <MrHen> If I changed the lottery so there were 9 losing tickets (instead
of 131,115,984) and 1 winning ticket
[20:41] <arundelo> Here's the terminology Byrnema looked up in terms of beeps
and tickets:
[20:41] <MrHen> EY's box would still give us 2 bits of information
[20:41] <arundelo> Sensitivity = probability of a beep given a winning ticket
[20:41] <arundelo> Specificity = probability of no beep given a losing ticket
[20:41] <MrHen> Cool
[20:41] <MrHen> So, is anyone currently confused?
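Since the bits a box gives don't depend on the ticket counts, the article's full lottery can be sanity-checked in a short calculation (an editorial sketch, not from the log; it assumes the 100%/25% box and the ticket count quoted above):

```python
import math

N_LOSING = 131_115_984   # losing tickets in the article's lottery
BITS_PER_BOX = 2         # log2 of the box's 4:1 likelihood ratio

# Bits of prior improbability to overcome (odds of 1 : 131,115,984 against):
prior_bits = math.log2(N_LOSING)                 # about 26.97
boxes_needed = math.ceil(prior_bits / BITS_PER_BOX)
print(boxes_needed)  # beeping boxes before the posterior odds favor "winner"
```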
[20:42] <Mitchell> you are allowed to talk about fractional bits in
communication theory.
[20:42] <MrHen> Communication theory?
[20:42] <jimrandomh> You mean information theory
[20:42] <Mitchell> http://en.wikipedia.org/wiki/Entropy_%28information_theory%29
[20:43] <MrHen> Ah
[20:43] <Mitchell> i guess i do. bits per event
[20:43] <MrHen> right, yeah
[20:43] <MrHen> fractional bits meaning 2.5 bits, right?
[20:43] <Mitchell> for example
[20:44] <jimrandomh> If you had a box that blinked all the time for winning
tickets and 1/2^2.5  of the time for losing tickets, that'd be 2.5 bits of
information
[20:44] <komponisto> Bits are just logarithms of likelihood ratios, so they can
take any value.
[20:44] <MrHen> Right
[20:44] <Mitchell> so if your box conveys a fraction of a bit per beep,
repeatedly using it will give you more information than you expect if you
rounded to an integer estimate for bits per beep
[20:45] <MrHen> Right
[20:45] <MrHen> Since we are working with homemade examples, integers are easier
:)
[20:46] <Mitchell> i thought you had an example where you were rounding, but
maybe i was wrong
[20:46] <MrHen> No, I kept switching 1 and 2 because I was dumb
[20:46] <MrHen> The article had an example of rounding
[20:46] <Byrnema> Anyway, the idea is that the box winnows your set of possible
winning tickets by some factor. You answer the question: this factor is 2 to
which power? and that's your bits.
[20:47] <MrHen> Yes
[20:47] <komponisto> And there's nothing special about 2; you could use base 10
('bels') if you wish.
[20:47] <jimrandomh> (If you asked "10 to which power", you'd get bans instead.
10dB=1 ban)
[20:48] <Byrnema> Ha! You guys are a treasure-trove of information.
[20:49] <jimrandomh> Let's do some estimation problems with probabilities
[20:49] <MrHen> :)
[20:49] <Cyan> +2 karma to the person who can say what bans are named for. Offer
is time-limited to 8 seconds due to Google.
[20:49] <Byrnema> What about the problem of low sensitivity? Let's consider that
in more detail, because it's possible, if there is only 1 winning ticket, that
you've missed it.
[20:50] <_kevin___> banburismus procedure
[20:50] <_kevin___> that must've been more like 25 seconds though
[20:50] <_kevin___> and i used google
[20:50] <Cyan> -2 karma!! (jk)
[20:50] <_kevin___> and i should have continued another sentence in wikipedia
[20:50] <jimrandomh> Well, if we have a box that blinks .8 of the time for
winners and .2 of the time for losers, then you're throwing away .2 of the
original ticket pool every time you use it
[20:50] <_kevin___> as the answer is the town of banbury
[20:51] <Byrnema> And once you've thrown it away, it doesn't matter how much
evidence you have to keep winnowing the pool of losers!
[20:51] <komponisto> Do people who say "bans" say "decibans" as well? (I sure
hope so!)
[20:51] <_kevin___> (nice trivia question, though)
[20:51] <jimrandomh> Yes, they do
[20:51] <MrHen> The trick is that each box has a chance to beep on the winner
[20:51] <komponisto> phew! :-)
[20:52] <jimrandomh> in fact, dB being an acronym for both decibels and decibans
is pretty much the only reason the ban ever caught on
[20:52] <MrHen> At least, I think that is the trick
[20:52] <jimrandomh> So if you need n tests to find the right ticket and you're
throwing away 20% of the tickets each time, then you'll have thrown it away with
probability 1-(1-.2)^n
[20:53] <Byrnema> Nevermind, if you have lower sensitivity it just means you
have to keep testing your no-beep numbers from the last batch in subsequent
tests..
[20:53] <Mitchell> exactly. if you throw away something which only MIGHT be a
loser, you are actually throwing away some of your information
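The worry jimrandomh and Mitchell are circling can be made concrete with a quick sketch (illustrative only; it assumes each test independently misses the winner with probability 0.2 and that everything that fails to beep is discarded):

```python
MISS = 0.2  # per-test probability of no beep on the winning ticket

def p_winner_discarded(n_tests):
    """Probability the winning ticket has been thrown away after n tests."""
    return 1 - (1 - MISS) ** n_tests

for n in (1, 2, 5, 10):
    print(n, p_winner_discarded(n))
```

The risk compounds quickly: after ten rounds of discarding non-beepers, the winner is more likely than not already gone.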
[20:53] <arundelo> Assume sensitivity = 0.8, specificity = 0.2, number of
winning tickets = 1, number of losing tickets = 9.  The box has a 0.26 chance of
beeping ((1/10) * 0.8 + (9/10) * 0.2) and therefore a 0.74 chance of not
beeping.  I think this means that a beep gives us about 2 bits of information
and the lack of a beep gives us less than 1 bit, because a beep "inform[s] you
of an outcome whose probability was [approximately 1/4]" and the lack of a beep
"inform[s] you
[20:53] <MrHen> Wait, what is specificity?
[20:54] <MrHen> [20:41] <arundelo> Specificity = probability of no beep given a
losing ticket
[20:54] <MrHen> Wouldn't that be .8 in this example?
[20:54] <Byrnema> yes
[20:55] <arundelo> I don't think that's how I was figuring it.
[20:55] <Byrnema> we really want the word for (1 minus specificity)
[20:55]  * MrHen travels to wikipedia
[20:56] <jimrandomh> false negative rate
[20:56] <Cyan> 1 - specificity == "prob. of false positive"
[20:56] <jimrandomh> err, wrong one, sorry
[20:56] <jimrandomh> false positive rate
[20:56] <arundelo> Ah, maybe I did switch those around.
[20:57] <MrHen> Oh, okay, the wiki link from earlier makes sense
[20:57] <MrHen> So... where are we in the example?
[20:57] <Cyan> It can be helpful to learn the false/true positive/negative
jargon, as it applies to more than just this scenario.
[20:58] <MrHen> Okay, so when a box beeps 80% of the time for a winning ticket
[20:59] <MrHen> Is that 80% specificity or sensitivity?
[20:59] <arundelo> Sorry, my arithmetic for "0.26 chance of beeping ((1/10) *
0.8 + (9/10) * 0.2)" only makes sense if 0.2 is the probability of a beep given
a losing ticket, which would make the sensitivity 0.8.
[20:59] <Cyan> it's sensitivity
[20:59] <MrHen> And beeping 20% for a losing ticket is specificity?
[21:00] <Cyan> false positive rate; specificity is 80% (true negative rate)
[21:00] <MrHen> Okay
[21:00] <arundelo> *Not* beeping 20% for a losing ticket is 20% specificity.
[21:00] <MrHen> Yeah, that makes sense
[21:00] <arundelo> So a completely accurate test would have 100% sensitivity and
100% specificity.
[21:00] <Byrnema> Right, it helps to think about it in terms of the optimistic
biologist. They want to design a test with HIGH sensitivity and HIGH
specificity.
[21:01] <MrHen> Cool
[21:01] <Byrnema> Can we talk about the real life context? For example, in real
life, I think we're used to 'boxes' that yield really high bits.
[21:01] <MrHen> Hmm...
[21:02] == crockerrules [~505dfe0b@gateway/web/freenode/x-nafjtzphgicamfvn] has
joined #lesswrong
[21:03] <MrHen> All of the examples I am thinking of involve games of some sort.
Like poker.
[21:03] <jimrandomh> If an answer to a sample problem is given in this channel,
how many bits of evidence is it that that answer is correct?
[21:04] <Cyan> 2 bits, I'd say.
[21:04] <jimrandomh> Sounds about right
[21:04] <MrHen> So... what is the first step?
[21:04] <Byrnema> What if I think it's 50/50. How many bits is that? I think the
question is: compared to what? A random solution has a near-0 chance of being
correct..
[21:04] <komponisto> I wasn't particularly familiar with this terminology, so
let me see if I've got it right: P(E|H) is "sensitivity"? P(not-E|not-H) is
"specificity"?
[21:04] <jimrandomh> If we want to get the reference answer, someone has to go
through the log and count up all the answers and mistakes when we're done
[21:05] <Cyan> @komponisto: yup
[21:05] <jimrandomh> Cyan's answer is at least pretty close
[21:05] <komponisto> aha, thanks.
[21:06] <MrHen> Okay, so if I wanted to actually come up with numbers instead of
just guessing "2 bits" what do I do?
[21:06] <arundelo> I was about to say "sensitivity and specificity would never
go below 50%, because if they did, you'd just flip the sense of the test's
output", but that's not true.  You might have a box that beeps for 100% of
winning tickets and 75% of losing tickets, so its specificity would be 25%.  So
one of sensitivity and specificity will always be 50% or above, but not
necessarily both.
[21:06] <MrHen> Using the phrasing of the example, what is a winning ticket?
[21:06] <arundelo> Byrnema: 50/50 is 1 bit.
[21:06] <Cyan> A correct answer.
[21:07] <Cyan> that was for MrHen.
[21:07] <MrHen> And what is the beeping box?
[21:07] <MrHen> Someone who agrees with the answer?
[21:07] <jimrandomh> Right
[21:07] <Cyan> Byrnema & arundelo: 50/50 is 0 bits.
[21:07] <MrHen> So, for every person who agrees with a proposed answer, we get
some amount of bits
[21:08] <MrHen> And every person who doesn't agree we get some amount of bits
[21:08] <Cyan> I assume that meant sens 50% and spec 50%; otherwise I could be
mistaken.
[21:08] <MrHen> And for* every person...
[21:08] <Byrnema> cyan and arundelo: not so fast! it depends on what I would
assign to a *random* channel
[21:08] <arundelo> Maybe I misunderstood -- I took it to mean, if someone flips
a coin and tells me it's heads, how many bits did they give me?  (1.)
[21:09] <Cyan> @byrnema: I guess we need to clarify the situation.
[21:09] <jimrandomh> No, they only spoke one bit of language, but that was worth
much more than one bit of evidence
[21:09] <arundelo> If on the other hand, if someone flips a coin then flips a
second coin to determine whether to lie or not when they tell me what the first
coin was, then they've given me 0 bits.
[21:09] <jimrandomh> it takes you from a probability of .5 that the coin was
heads to a probability more like .999 that the coin was heads
[21:10] <MrHen> jimrandomh: You are answering a different question
[21:10] <MrHen> The phrasing is coming from the article:
[21:10] <MrHen> For example, if there are four possible outcomes A, B, C, and D,
whose probabilities are 50%, 25%, 12.5%, and 12.5%, and I tell you the outcome
was "D", then I have transmitted three bits of information to you, because I
informed you of an outcome whose probability was 1/8.
[21:11] <MrHen> If I told you the outcome was "A" then 1 bit was transferred
[21:12] <jimrandomh> Right
[21:12] <MrHen> This has nothing to do with the probabilities of a box beeping
[21:12] <MrHen> Which is the confusing part
[21:12] <Byrnema> The lottery example was really convenient because it's so
ideally random - a priori. In real life, we're always working with things that
already have TONS of information already packed into them.
[21:12] <MrHen> That example is merely defining what a "bit" means
[21:13] <jimrandomh> It's worth noting here that compression programs work by
defining a probability distribution over possible files, and giving you just
enough evidence to uniquely identify one file
[21:13] <MrHen> Byrnema: Right, but that information can be given a probability
[21:13] <MrHen> In the question of whether someone gave a correct answer here
[21:13] <MrHen> We can weigh each agreement as a probability
[21:14] <crockerrules> jimrandomh: evidence or information? Should evidence be
measured in decibels, rather than bits (information)
[21:15] <MrHen> So, p(Answer correct|Someone agrees) could be 95%
[21:15] <MrHen> This is my beeping box
[21:15] <jimrandomh> Right
[21:15] <jimrandomh> My original question was how much evidence we've gotten
about what the answer is
[21:16] <Cyan> MrHen: to place the "correct answer" problem in the sens/spec
framework, we need an arbiter who can go through and say if an answer was
correct or not.
[21:16] <jimrandomh> And that's actually a trick question of sorts, because
there can be a huge space of possible answers, each of which is given with a
probability like .001
[21:17] <jimrandomh> If someone says the answer is .2, then we've chosen it out
of an answer space that includes .1, .3, pi/2, 1/3^^^3, etc
[21:17] <Byrnema> why is the space of possible answers so small? (order 1000)
[21:17] <MrHen> Cyan: Would this be like waiting to see which ticket ended up
winning?
[21:17] <Cyan> MrHen: yes.
[21:17] <jimrandomh> but we know 1/3^^^3 is a priori very unlikely
[21:17] <MrHen> Cyan: So would the arbiter be needed to calculate how many bits
each agreement gives us?
[21:18] <_kevin___> i have to leave in a minute, but mrhen, you mentioned
examples in poker. what were you thinking of? my poker playing is almost
exclusively using frequency probabilities. and from a quick search, bayesian
probability in poker seems to be used mainly by AIs or software tools
[21:18] <jimrandomh> P(X is bluffing|X has betting history Y)
[21:18] == JaredW [~JaredW@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] has joined
#lesswrong
[21:19] <Cyan> MrHen: the arbiter is needed so that we can assign answers as
correct (true positives) or incorrect (false positives). I'm treating different
individuals as indistinguishable -- and I seem to be assuming one answer per
question. hmm.
[21:19] <jimrandomh> You can measure that, although you don't want to do so by
looking at only hands that went through to the end and had cards revealed, since
that's a skewed sample
[21:20] <arundelo> "Mathematician's bits are the logarithms, base 1/2, of
probabilities."  I believe that the log base 1/2 of x can be computed like so:
"log(x)/log(1/2)".  (Google's calculator understands this.)
[21:20] <jimrandomh> We don't just need an arbiter to decide which answer is
correct, we need a probability distribution over all probable answers
[21:20] <jimrandomh> I mean, over all possible answers
[21:20] <Byrnema> I realize I'm probably over-killing with this, but even
knowing the answer is a *number* already contains an unknown number of bels of
information.
[21:20] <MrHen> _kevin___: I don't have anything particularly in mind anymore
[21:21] <Cyan> jimrandomh: 'strue. I was coarsening the set of possibilities to
"correct" and "incorrect" for simplicity's sake.
[21:21] <MrHen> _kevin___: I was mostly thinking of simple math problems that
could probably have been answered with frequency probabilities
[21:22] == Warrigal [~warrie@xxxxxxxxxxxx] has joined #lesswrong
[21:22] <Warrigal> So, I'd like to be the gatekeeper in an AI box experiment.
[21:22] == JaredW [~JaredW@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] has quit [Read
error: Connection reset by peer]
[21:22] <jimrandomh> To make an analogy - if your box blinks with probability .8
for winning lottery tickets, probability .2 for losing lottery tickets, and
probability 0 for grocery store receipts, then the bigger the pile of paper you
give it, the more "evidence" it's giving you - but some of it is evidence that
you could've gotten from another, correlated source
[21:22] <Warrigal> Not because I think I would withstand it, but simply for the
experience.
[21:23] == JaredW [~JaredW@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] has joined
#lesswrong
[21:23] <MrHen> Warrigal: It took me a while to finally think of a strategy that
would get me out of the box as an AI
[21:23] <MrHen> Warrigal: But I have one now
[21:23] <jimrandomh> Similarly, someone saying that the answer to a question is
a number is giving you "evidence about" that answer; but it's worthless
(duplicate) evidence if you already knew that
[21:23] <_kevin___> jim, yeah, that's all i can think of and that's what the
software tools and AIs keep track of
[21:24] <jimrandomh> You mean the "Let me out or I'll eat your soul" strategy?
[21:24] <MrHen> no
[21:24] <MrHen> how would that even work?
[21:24] <Warrigal> jimrandomh: hey, we need spoiler warnings.
[21:24] <Warrigal> Or so the first part of this blog post says:
http://rondam.blogspot.com/2008/10/reflections-on-being-ai-in-box.html
[21:25] <Warrigal> MrHen: does that mean you want to try it right now?
[21:25] <komponisto> arundelo: technically we want logarithms of odds,  right?
That way ignorance (50% probability) corresponds to 0 bits.
[21:25] <MrHen> How long would we have to chat?
[21:25] <_kevin___> whoa, you confidently think you have a strategy MrHen?
[21:25] <MrHen> It might be better to set up a different time
[21:25] <MrHen> yeah
[21:25] <_kevin___> most people don't want to play the AI
[21:25] <_kevin___> but anyone would play gatekeeper
[21:25] <MrHen> I don't see how you wouldn't let me out, actually
[21:25] <jimrandomh> I was referring to the strategy given in /box
[21:25] <_kevin___> what if the person precommits to saying no no matter what?
[21:25] <Warrigal> Well, two hours is the standard amount of time, and I have
two hours to spare.
[21:25] <jimrandomh> didn't mean to hit enter there
[21:26] <_kevin___> i think that's the strategy that defeated eliezer his final
two tries
[21:26] <Byrnema> um, what's going on? at 9:30, did we begin a different study
session?
[21:26] <jimrandomh> I was referring to the strategy given in
"http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/"
[21:26] <MrHen> haha
[21:26] <MrHen> sort of
[21:26] <MrHen> Warrigal derailed our topic
[21:26] <MrHen> sorry
[21:26] <MrHen> back on topic
[21:26] <Warrigal> Thank you for apologizing for my action. :P
[21:26] <MrHen> well, I answered you
[21:26] <_kevin___> probability problems are boring anyways. i am going to a
bar. you people enjoy yourselves
[21:26] <_kevin___> keep it real. :P
[21:26] <MrHen> cheers
[21:26] <MrHen> thanks for coming
[21:27] <jimrandomh> Which is, if an AI in a box manages to get enough
information and computational power to simulate the gatekeeper outside the box,
it can threaten to create and torture copies of him
[21:27] <MrHen> so, before we really stop talking
[21:27] <arundelo> komponisto: do you know what the formula for that would be?
[21:27] <MrHen> does anyone have a question about bits that is still bugging
them?
[21:27] <Mitchell> [13:03] <jimrandomh> If an answer to a sample problem is
given in this channel, how many bits of evidence is it that that answer is
correct?
[21:28] <MrHen> jimrandomh: No, I don't buy that argument. I wouldn't let such
an AI out. The strategy I have is simple enough /anyone/ playing the gatekeeper
would let me out
[21:28] <Warrigal> MrHen: so, when do you want to do this?
[21:28] <komponisto> arundelo: log (P(A)/P(not-A))
[21:28] <jimrandomh> The amount of information you get is the length of the
answer, but the trick is that most of that information duplicates information
you already had
[21:28] <MrHen> Warrigal: Not until after we're done talking about the bits :)
[21:29] <MrHen> Warrigal: We could set up a time next week if you don't want to
wait around
[21:29] <_kevin___> remember that the chat log can be public if you both agree!
[21:29] <jimrandomh> The amount of *non-duplicate* information you got is the
ratio between the probability that the poster is correct, and the probability
you originally assigned to that answer
[21:29] <Warrigal> Might we begin within the next half hour?
[21:29] <komponisto> Incidentally, I'm curious about something: lots of people
seem to find concrete examples helpful for understanding these probability
concepts, but I don't.
[21:29] <MrHen> Warrigal: Hmm... 30 minutes would be 10pm here... so 2 hours is
12am...
[21:30] <komponisto> I find it much easier to think abstractly. Is anyone else
like me?
[21:30] <Byrnema> no
[21:30] <jimrandomh> This is a public channel; you don't need participants'
consent to publish the log
[21:30] <MrHen> Warrigal: Honestly, I don't think I will be able to think well
for that long
[21:30] <MrHen> Warrigal: I would probably fall asleep. :D
[21:30] <Warrigal> MrHen: do you want to schedule a time now or later?
[21:31] <MrHen> Warrigal: Send me a PM at LessWrong and we can work something
out. I am not willing to bet money, though.
[21:31]  * Warrigal nods.
[21:31] <MrHen> Cool
[21:32] <jimrandomh> If this is to be recurring, it should probably be at the
same time and day of the week each time
[21:33] <MrHen> komponisto: I like both concrete and abstract examples. I
abstract pretty well, so I tend to abstract the concrete examples automatically
[21:33] <MrHen> Yeah, I thought this was cool and would be willing to do it
again
[21:33] <MrHen> It is helpful having people to talk to
[21:33] <Mitchell> if warrigal lets the AI out of the box, how many bits of
evidence is it that he would let the AI into his bank account?
[21:34] <Warrigal> "Out of the box" means "into the world and everything in it",
so lots.
[21:34] <MrHen> okay, so the mathy question here is p(AI gets bank|AI gets out)
[21:35] <MrHen> no, actually, that *isn't* the question
[21:35] <jimrandomh> A friendly AI wouldn't steal bank accounts, and Warrigal
letting an AI out of the box is pretty good evidence for friendliness
[21:35] <jimrandomh> Assuming he's diligent
[21:35] <Warrigal> You can't diligently let an AI out of the box.
[21:35] <MrHen> What if the AI promises to use the money to help Warrigal?
[21:36] <MrHen> And gives back 1000% interest?
[21:37] <Mitchell> this seems to be a legitimate question for all A and B, no
matter how wacky: if A, how many bits of evidence is it that B?
[21:37] <arundelo> komponisto: so if I know nothing about a coin flip, I have 0
bits of information.  But if I catch a glimpse of the coin and now think it's
75-to-25 odds of it being heads, then it seems like your formula tells me I have
a negative amount (-1.5849625) of bits!  Am I doing something wrong? 
<http://www.google.com/search?q=log%280.75/0.25%29/log%281/2%29>.
[21:37] <komponisto> arundelo: take log to base 1/2.
[21:38] <Warrigal> Mitchell: I am thinking of two statements, A and B. I will
not tell you anything about them, nor will I clarify this question. If A, how
many bits of evidence is it that B?
[21:38] <MrHen> Mitchell: Right
[21:38] <arundelo> komponisto: That should be what I did: log(x)/log(1/2).
[21:38] <MrHen> Mitchell: That is the question I am curious about now
[21:39] <Mitchell> um, are red messages private? i'm seeing red from two people.
:)
[21:39] <MrHen> It depends on the client
[21:39] <Warrigal> Mitchell: this message is not private.
[21:39] <MrHen> when I do this:
[21:39] <MrHen> Mitchell: Hi
[21:39] <MrHen> The web client will beep at you because I used your name
[21:40] <komponisto> arundelo: take log base 2 :-)
[21:40] <komponisto> sorry
[21:40] <AngryParsley> whoa people talked in here
[21:40]  * crockerrules assigns +1 to Warrigal for quick and simple explanation
[21:40] <MrHen> AngryParsley:
http://lesswrong.com/lw/1pp/open_thread_february_2010/1n2m?context=3
[21:40] <Cyan> Mitchell: -log(Pr(B|A,X)) (X is prior information)
[21:40] <Mitchell> warrigal: if i don't know what the statements are, it's sort
of like saying what is x+y, without knowing x or y
[21:41] <AngryParsley> central time? O_o
[21:41] <Warrigal> Mitchell: Bayesians assign probabilities to any statement
whatsoever, regardless of how much they know about them.
[21:41] <MrHen> Yeah
[21:41] <MrHen> I live in Texas
[21:41] <MrHen> And... I don't know where Byrnema lives, but its CST too
[21:41] <Cyan> komponisto: I do best with concrete examples to abstract from. My
boss tells me he prefers to work abstract from the get-go.
[21:42] <komponisto> Oh, arundelo: sorry, I don't know what I was thinking....
[21:42] <komponisto> Sign of bits tells you whether the evidence is for or
against your hypothesis.
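komponisto's sign convention, applied to arundelo's glimpsed-coin example, works out like this (a sketch added for illustration; the function name is made up):

```python
import math

def odds_bits(p):
    """Evidence implied by probability p, as log2 of the odds p : (1 - p)."""
    return math.log2(p / (1 - p))

zero = odds_bits(0.5)    # 0.0: 50/50 ignorance carries no evidence
heads = odds_bits(0.75)  # about +1.585 bits toward heads (the glimpsed coin)
tails = odds_bits(0.25)  # about -1.585 bits: same magnitude, opposite sign

print(zero, heads, tails)
```

Using log base 2 of the odds, rather than base 1/2 of the probability, makes favorable evidence come out positive, which resolves arundelo's negative-bits puzzle above.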
[21:42] <Mitchell> warrigal: in that case cyan has your answer. but can we apply
cyan's formula to jimrandomh's question?
[21:42] <MrHen> Cyan:What does Pr(x) mean? Is it the same as p(x)?
[21:43] <Warrigal> We shouldn't speak in terms of time zones; we should speak in
terms of when midnight is in different places. Where I live, midnight is 05:00.
[21:43] <Mitchell> apply it and actually get a number, i mean.
[21:43] <MrHen> I live in -6
[21:43] <komponisto> Cyan, MrHen: Interesting. I naturally find it hard not to
be distracted by clutter when thinking concretely. That's why I have to work
very hard not to get tripped up by Monty Hall/EY's two-aces, etc.
[21:43] <Warrigal> komponisto: then you should get some more practice thinking
concretely.
[21:44] <komponisto> But the mathematical form of Bayes' Theorem itself,
posterior = prior*strength-of-evidence, seems perfectly intuitive to me.
[21:44] <komponisto> Warrigal: very true! I've been working on that.
[21:44] <Cyan> MrHen: Jaynes emphasizes that all probabilities are conditional
on some prior information, which I label  X. X is everything we assume at the
start. Therefore Pr(X) is 1 by definition.
[21:44] <arundelo> komponisto: I've been meaning to ask you, how well do you
speak Esperanto.
[21:44]  * Warrigal substitutes the Ps into that.
[21:44] <MrHen> komponisto: Yeah, I can understand that. I see the clutter as
clutter. It makes it easier when filtering for fallacies
[21:45] <Warrigal> P(B|A) = P(B) * P(A|B) / P(A), right?
[21:45] <MrHen> Cyan: I mean the terminology. Does Pr(A|B) mean p(A|B)? In other
words, does the "r" mean anything?
[21:45] <Cyan> warrigal: right.
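Warrigal's formula can be checked with a toy calculation (illustrative numbers only, not from the discussion; here A is "the box beeps" and B is "the ticket wins"):

```python
# Bayes' theorem, as Warrigal wrote it: P(B|A) = P(B) * P(A|B) / P(A).
# All numbers here are made up for illustration.
p_b = 0.01                  # prior P(B): ticket wins
p_a_given_b = 0.8           # P(A|B): box beeps given a winner
p_a_given_not_b = 0.2       # P(A|not B): box beeps given a loser
# P(A) by the law of total probability:
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
p_b_given_a = p_b * p_a_given_b / p_a   # posterior after hearing a beep
```

Even with a beep four times likelier for winners, the posterior here is still under 4%, because the prior was so low.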
[21:46] <komponisto> arundelo: not as well as I used to; haven't spoken it in
quite a while. Passive knowledge still pretty good.
[21:46] <Cyan> MrHen: Ah, sorry. No, the r doesn't mean anything.
[21:46] <AngryParsley> I don't get that blog post about the AI box. what's so
super special about the technique?
[21:46] <MrHen> Cyan: Gotcha. What is the comma for? Is Pr(A|B,C) the same as
Pr(A|B&C)?
[21:47] <komponisto> Occasionally I read stuff online; it comes back pretty
quickly.
[21:47] <arundelo> Mi parolas gxin flue, kvankam mi ne multe ekzercas min pri
gxi lastatempe. ["I speak it fluently, although I haven't practiced it much lately."]
[21:47] <MrHen> AngryParsley: I didn't read it because I am planning on doing
the experiment
[21:47] <Cyan> MrHen: pretty much.
[21:47] <Warrigal> It might be useful to define the "evidence ratio" or
something as P(B|A) / P(B) = P(A|B) / P(A) = P(A and B) / (P(A) * P(B)).
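Warrigal's three-way identity can be verified numerically (a toy joint distribution, not from the log):

```python
# Check the "evidence ratio" identity:
# P(B|A)/P(B) = P(A|B)/P(A) = P(A and B) / (P(A) * P(B))
p_b = 0.1                   # P(B), made up for illustration
p_a_given_b = 0.8           # P(A|B)
p_a_given_not_b = 0.2       # P(A|not B)
p_ab = p_a_given_b * p_b    # P(A and B)
p_a = p_ab + p_a_given_not_b * (1 - p_b)   # P(A) by total probability
r1 = (p_ab / p_a) / p_b     # P(B|A) / P(B)
r2 = p_a_given_b / p_a      # P(A|B) / P(A)
r3 = p_ab / (p_a * p_b)     # P(A and B) / (P(A) * P(B))
```

All three ratios come out equal, which is why any of the three forms can serve as the "evidence ratio."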
[21:47] <MrHen> Cyan: Okay, cool. I learned probability and logic from a
completely different area of topics so the punctuation still throws me off
[21:49] <MrHen> Cyan: So with "-log(Pr(B|A,X)) (X is prior information)", how do
we do the lottery example?
[21:49] <Cyan> warrigal: komponisto's "posterior = prior*strength-of-evidence"
is a bit underspecified. For full applicability, I'd write "posterior odds =
prior odds * likelihood ratio"
[21:50] <MrHen> -log(Pr(Beep|Winning ticket, Chance to beep on winning ticket))?
[21:50] <Warrigal> And then what I just gave is the likelihood ratio.
[21:50] <MrHen> or -log(Pr(Winning ticket|Beep, Chance to beep on winning
ticket))?
[21:50] <MrHen> or ... ?
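Cyan's odds form of the update can be sketched as follows (illustrative numbers only: a 1% prior of winning, and a beep four times likelier for winners than losers):

```python
# Posterior odds = prior odds * likelihood ratio.
prior_odds = 0.01 / 0.99            # odds form of a 1% prior
likelihood_ratio = 0.8 / 0.2        # P(Beep|Win) / P(Beep|Lose)
posterior_odds = prior_odds * likelihood_ratio
# Convert odds back to a probability if needed:
posterior_prob = posterior_odds / (1 + posterior_odds)
```

The odds form makes the "strength of evidence" a single multiplier, which is what makes the log-of-likelihood-ratio ("bits") bookkeeping below work.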
[21:50] <komponisto> arundelo: Same, krom "parolis" anstataux ol "parolas". :) ["Same, except 'I spoke' instead of 'I speak'."]
[21:51] <komponisto> Cyan: my formulation was meant to encompass the probability
and odds forms at once.
[21:52] <Cyan> komponisto: That was too abstract for me, I guess. ;-)
[21:52] <komponisto> Ha! Didn't even think of that. But you see what I mean. :-)
[21:53] == papermachine [~papermach@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] has
joined #lesswrong
[21:53]  * MrHen ponders his unanswered question
[21:53] <Cyan> MrHen: -log(Pr(Beep|Winning ticket, X), - log(Pr(Beep|Losing
ticket, X), where X is the specification of sensitivity and specificity.
[21:53] <MrHen> Hmm...
[21:53] <arundelo> I originally read the "How Much Evidence Does It Take?"
article a while ago (probably right after Eliezer posted it) and for me the most
memorable thing was seeing concretely how poor lottery-type odds are: even with
the box, your odds of winning are still very low.
[21:54] <Cyan> MrHen: Ahh! Ignore the comma between the log-probability terms.
[21:54] <MrHen> Cyan: So is that whole thing one formula? Or two?
[21:55] <Cyan> MrHen: Also, I left off parentheses. Let me try again:
-log(Pr(Beep|Winning ticket, X)) - log(Pr(Beep|Losing ticket, X))
[21:55] <MrHen> Haha, okay
[21:56] <MrHen> blarg. I am getting too tired to read that.
[21:56] <MrHen> Hang on, processing...
[21:56] <Cyan> I'm getting too tired to write it properly...
[21:56] <MrHen> :)
[21:56] <MrHen> Okay, so how do I describe X? If sensitivity and specificity are
both 80%?
[21:56] <Mitchell> i have to go. but now i can add 'feb 2010, attended lesswrong
webinar' to my cv
[21:57] <Cyan> One more time: log(Pr(Beep|Winning ticket, X)) -
log(Pr(Beep|Losing ticket, X)), logs are to base two
[21:57] == Mitchell [~82669e0c@gateway/web/freenode/x-tuftiwftjyxddepe] has quit
[Quit: Page closed]
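Cyan's formula can be evaluated for the 80%-sensitivity, 80%-specificity box MrHen asked about (a sketch under those assumed numbers):

```python
import math

# Bits of evidence a beep carries for "winning ticket", assuming
# (as MrHen suggested) sensitivity = specificity = 0.8:
p_beep_win = 0.8    # Pr(Beep | Winning ticket, X) = sensitivity
p_beep_lose = 0.2   # Pr(Beep | Losing ticket, X) = 1 - specificity
bits = math.log2(p_beep_win) - math.log2(p_beep_lose)
print(bits)  # 2.0
```

So each independent beep from such a box supplies exactly 2 bits, and selecting a single winner out of, say, 2^20 tickets would take ten such beeps.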
[21:57] <MrHen> haha
[21:57] <crockerrules> Why is the bit representation useful? What happens if I
get 2 bits of evidence from observation O1 and another bit from O2 (unrelated)?
Do I get to add them?
[21:57] <Cyan> crockerrules: yes, and that's the principal advantage
[21:57] <MrHen> if they are independent events, yeah
[21:57] <crockerrules> Ah, awesome.
[21:58] <AngryParsley> depends on whether they're independent
[21:58] <AngryParsley> beaten
[21:58] <MrHen> :)
[21:58] <Cyan> I took "unrelated" to mean "independent"... maybe I shouldn't?
[21:58] <MrHen> Cyan: I will have to read that tomorrow. I'll send you a PM on
LessWrong if I have more questions
[21:58] <crockerrules> You should.
[21:58] <crockerrules> Couldn't think of the proper term.
[21:59] <MrHen> right
[21:59] <MrHen> I used independent so anyone else reading it would understand
[21:59] <MrHen> I figured that's what you meant. :)
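The additivity crockerrules asked about can be shown directly (hypothetical likelihood ratios, chosen for round numbers):

```python
import math

# Independent observations: likelihood ratios multiply, so bits add.
lr1 = 4.0                       # observation O1: log2(4) = 2 bits
lr2 = 2.0                       # observation O2: log2(2) = 1 bit
bits_total = math.log2(lr1) + math.log2(lr2)
lr_combined = lr1 * lr2         # combined likelihood ratio = 8
print(bits_total)  # 3.0
```

Taking logs turns the multiplication of likelihood ratios into addition, which is exactly why the bit representation is convenient.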
[21:59] <crockerrules> And those logs are base .5? Because hypothesis is either
true or false, or for some other reason?
[22:00] <MrHen> it is in bits for no reason other than it makes it easier to
talk about
[22:00] <crockerrules> Sorry, I joined the conversation (and LW in general) late
[22:00] <MrHen> it's all good
[22:01] <MrHen> From earlier:
[22:01] <MrHen> [20:47] <komponisto> And there's nothing special about 2; you
could use base 10 ('bels') if you wish. [20:47] <jimrandomh> (If you asked "10
to which power", you'd get bans instead. 10dB=1 ban)
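The unit conversions komponisto and jimrandomh mention can be sketched like so (a likelihood ratio of 4 chosen for illustration):

```python
import math

# Converting between units of evidence:
# bits use base 2, bans use base 10, and a deciban is a tenth of a ban.
lr = 4.0                 # likelihood ratio
bits = math.log2(lr)     # 2.0 bits
bans = math.log10(lr)    # ~0.602 bans
decibans = 10 * bans     # ~6.02 decibans
```

The unit is just a choice of log base; the evidence itself is the same either way.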
[22:01] <MrHen> I can save a transcript of everything and drop it somewhere
[22:01] <MrHen> Someone sometime may find it useful. :)
[22:01] <crockerrules> Aaah. Makes perfect sense, thanks-- Please do
(transcript)
[22:02] <Cyan> Please remove all of my stupidity. :-p
[22:02] <MrHen> Cool. Well, I need to head to bed
[22:02] <MrHen> Byrnema, are you still here?
[22:02] <MrHen> Should we set up another one for next week?
[22:02] <Byrnema> Agreed, the discussion turned up quite a few salient and
interesting points about 'bits'.
[22:03] <MrHen> I enjoyed it
[22:03] <komponisto> Is it over?
[22:03] <MrHen> Not necessarily, but I need to head out
[22:03] <MrHen> We chatted for almost 2 hours
[22:03] <Byrnema> I think it would be useful to somehow connect this with the
post, so someone could read it from there if they wanted to. Can we link to this
as a comment under the post?
[22:04] <MrHen> I will save a transcript and then post a link as a comment
[22:04] <Byrnema> Me too, time for me to go. But sure, same time next week
sounds good.
[22:04] <MrHen> I can leave the browser open if people still want to chat
[22:04] <MrHen> What should we talk about next week?
[22:04] <komponisto> Very well, then. Do folks do this regularly? It seems very
useful to have regular "Bayesian tutoring" meetings.
[22:04] <MrHen> This is the first I have been a part of
[22:05] <MrHen>
http://lesswrong.com/lw/1pp/open_thread_february_2010/1n2m?context=3
[22:05] <MrHen> See that thread
[22:05] == AngryParsley [~AngryPars@unaffiliated/angryparsley] has left
#lesswrong []
[22:05] <Cyan> It's my bedtime too. G'night.
[22:05] == AngryParsley [~AngryPars@unaffiliated/angryparsley] has joined
#lesswrong
[22:05] <MrHen> Cheers
[22:05] <MrHen> thanks for coming