All of GreedyAlgorithm's Comments + Replies

The Informations told or implied to the Humans that they don't lie or withhold information. That is not the same as the Humans knowing that the Informations don't lie.

Brian, you want an answer to the real-world situation? Easy. First assume you have a source of inputs that is not antagonistic, as discussed. Then measure which deterministic pivot-choice algorithms work best on large samples of the inputs, and use the best one. Median-of-three is a great pivot-choosing algorithm in practice, we've found. If your source of inputs is narrower than "whatever people anywhere using my ubergeneral sort utility will input" then you may be able to do better. For example, I regularly build DFAs from language data. Part... (read more)

Brian, the reason we do that is to avoid the quicksort algorithm being stupid and choosing the worst-case pivot every time. The naive deterministic choices of pivot (like "pick the first element") do poorly on many permutations of the input that are far more probable than 1/n!, because of the types of inputs people give to sorting algorithms, namely already-sorted or nearly-sorted input. Picking the middle element does better because inputs sorted inside to outside are rarer, but they're still far more likely than 1/n! apiece. Picking a r... (read more)
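A minimal sketch of the median-of-three idea mentioned above, in Python (the function name and array handling are illustrative, not from the thread):

```python
def median_of_three_pivot(xs, lo, hi):
    """Return the index of the median of xs[lo], xs[mid], xs[hi].

    Sketch of the median-of-three heuristic: on already-sorted or
    nearly-sorted input the sampled median sits close to the true median,
    so quicksort avoids the quadratic behavior of "always pick the first
    element".
    """
    mid = (lo + hi) // 2
    a, b, c = xs[lo], xs[mid], xs[hi]
    if a <= b <= c or c <= b <= a:
        return mid
    if b <= a <= c or c <= a <= b:
        return lo
    return hi
```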

If they're both about equally likely to reason as well, I'd say Eliezer's portion should be p * $20, where ln(p/(1-p))=(1.0*ln(0.2/0.8)+1.0*ln(0.85/0.15))/(1.0+1.0)=0.174 ==> p=0.543. That's $10.87, and he owes NB merely fifty-six cents.

Amusingly, if it's mere coincidence that the actual split was 3:4 and in fact they split according to this scheme, then the implication is that we are trusting Eliezer's estimate 86.4% as much as NB's.
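A quick check of both numbers above, treating the combination as a weighted average in log-odds space (the 0.2 and 0.85 estimates and the weights come from the comment; the function itself is just an illustration):

```python
import math

def pool(p_e, p_nb, w_e=1.0, w_nb=1.0):
    """Weighted average of two probability estimates in log-odds space."""
    logit = (w_e * math.log(p_e / (1 - p_e))
             + w_nb * math.log(p_nb / (1 - p_nb))) / (w_e + w_nb)
    return 1 / (1 + math.exp(-logit))

print(round(pool(0.2, 0.85), 3))             # 0.543 -> $10.87 of the $20
print(round(pool(0.2, 0.85, w_e=0.864), 3))  # ~0.571 = 4/7, i.e. the 3:4 split
```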

"But sometimes experiments are costly, and sometimes we prefer to get there first... so you might consider trying to train yourself in reasoning on scanty evidence, preferably in cases where you will later find out if you were right or wrong. Trying to beat low-capitalization prediction markets might make for good training in this? - though that is only speculation."

Zendo, an inductive reasoning game, is the best tool I know of to practice reasoning on scanty evidence in cases where you'll find out if you were right or wrong. My view of the game... (read more)

Here's what I was missing: the magnitudes of the amplitudes need to decrease when changing from one possible state to more than one. In drawing-on-2D terms, a small amount of dark pencil must change to a large amount of lighter pencil, not a large amount of equally dark pencil. So here's what actually occurs (I think):

A photon is coming toward E (-1,0)

A photon is coming from E to 1 (0,-1/sqrt(2))
A photon is coming from E to A (-1/sqrt(2),0)

A photon is coming from E to 1 (0,-1/sqrt(2))
A photon is coming from A to B (0,-1/2)
A photon is coming from A to C... (read more)

A1987dM
What I was about to say. It really doesn't matter yet, but it's better to get the reader used to unitarity straight away. (Though I wouldn't explicitly mention unitarity this early -- I'd just replace the rule with "Multiply by 1/sqrt(2) when the photon goes straight, and multiply by i/sqrt(2) when the photon turns at a right angle" and everything that follows from that. If the maths gets too complicated with all those denominators, just make the initial amplitude -sqrt(2) rather than -1.)
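A minimal sketch of that unitary rule using Python's complex numbers (the variable names track the configurations in the parent comment; the assert is just there to show the squared magnitudes are preserved):

```python
import math

STRAIGHT = 1 / math.sqrt(2)   # amplitude factor for going straight
TURN = 1j / math.sqrt(2)      # amplitude factor for a right-angle turn

toward_E = -1 + 0j            # "A photon is coming toward E": (-1, 0)

E_to_1 = toward_E * TURN      # (0, -1/sqrt(2))
E_to_A = toward_E * STRAIGHT  # (-1/sqrt(2), 0)

# Unitarity: the half-silvered mirror leaves the total squared magnitude
# unchanged, which is exactly what the "multiply by 1 or i" rule breaks.
assert math.isclose(abs(E_to_1) ** 2 + abs(E_to_A) ** 2, abs(toward_E) ** 2)
```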

Okay, what happens in this situation: Take figure 2. The arrow coming in from the left? Replace it with figure 1, with its mirror relabeled E and detector 2 removed (replaced with figure 2). And lengthen the distance to detector 1 so that it's equal to the total distance to detector 2 in figure 2. And I guess call the detector 1 in figure 2 "X" for "we know you won't be getting any amplitude". Now what? Here's what I get...

A photon is coming toward E (-1,0)

A photon is coming from E to 1 (0,-1)
A photon is coming from E to A (-1,0)

A phot... (read more)

The only way I can see p-zombieness affecting our world is if

a) we decide we are ethically bound to make epiphenomenal consciousnesses happier, better, whatever;
b) our amazing grasp of physics and how the universe exists leads our priors to indicate that even though it's impossible to ever detect them, epiphenomenal consciousnesses are likely to exist; and
c) it turns out doing this rather than that gives the epiphenomenal consciousnesses enough utility that it is ethical to help them out.

Lee,

I'd assume we can do other experiments to find this out... maybe they've been done? Instead of {98,100}, try all pairs of two numbers from 90-110 or something?

Anon, Wendy:

Certainly finding out all of the facts that you can is good. But rationality has to work no matter how many facts you have. If the only thing you know is that you have two options:

  1. Save 400 lives, with certainty
  2. Save 500 lives, 90% probability; save no lives, 10% probability.

Then you should take option 2. Yes, more information might change your choice. Obviously, and not interesting. The point is that given this information, rationality picks option 2.
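The arithmetic behind that, assuming (just for illustration) that utility is linear in lives saved:

```python
option_1 = 1.0 * 400               # certain: 400 expected lives
option_2 = 0.9 * 500 + 0.1 * 0     # risky: 450 expected lives
print(option_1, option_2)          # 400.0 450.0 -> option 2 has higher expected utility
```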

Can someone please post a link to a paper on mathematics, philosophy, anything, that explains why there's this huge disconnect between "one-off choices" and "choices over repeated trials"? Lee?

Here's the way across the philosophical "chasm": write down the utility of the possible outcomes of your action. Use probability to find the expected utility. Do it for all your actions. Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences.
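One standard way to see that last point is the money pump; here is a toy sketch (the cyclic preferences A > B > C > A and the one-cent trading fee are my own illustration, not from the comment):

```python
# An agent with cyclic preferences A > B > C > A will pay a small fee for
# each "upgrade" and end up holding what it started with, only poorer.
prefers = {("A", "B"): "A", ("B", "C"): "B", ("C", "A"): "C"}  # cyclic
fee = 0.01
holding, money = "A", 0.0

for offered in ["C", "B", "A"] * 10:            # keep offering the item it prefers
    if prefers.get((offered, holding)) == offered:
        holding, money = offered, money - fee   # trades and pays, every time

print(holding, round(money, 2))                 # "A" -0.3: back where it started, poorer
```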

You mi... (read more)

josinalvo
http://en.wikipedia.org/wiki/Prisoner%27s_dilemma#The_iterated_prisoners.27_dilemma (just an example of such a disconnect, not a general defence of disconects)

Long run? What? Which exactly equivalent random events are you going to experience more than once? And if the events are only really close to equivalent, how do you justify saying that 30 one-time shots at completely different ways of gaining 1 utility unit are fundamentally different from a nearly-exactly-repeated game where you have 30 chances to gain 1 utility unit each time?

I am intuitively certain that I'm being money-pumped all the time. And I'm very, very certain that transaction costs of many forms money-pump people left and right.

wuthefwasthat
http://en.wikipedia.org/wiki/Arbitrage :)

Tom: What actually happens under your scenario is that the naive human rationalists frantically try to undo their work when they realize that the optimization processes keep reprogramming themselves to adopt the mistaken beliefs that are easiest to correct. :D

Caledonian: please define meta-evidence, then, since I think Eliezer has adequately defined evidence. Clear up our confusion!

Selfreferencing: unfortunately there is an enormous gulf between "most theists" and "theistic philosophers". If you don't believe this, then you need to get out more; try the U.S. South, for instance. It might be irritating that most theists are not as enlightened as you are, but it is a fact, not a caricature.

I'm pretty sure, for example, that almost everyone I grew up with believes what a divine command theorist believes. And now that I look back at the OP and your comment, I notice that in the former Eliezer continually says "religious fundamentalists" and in the latter you continually say "theistic philosophers", so maybe you already recognize this.

To stay unbiased about all of the commenters here, do not visit this link and search the page for names. (sorry, but - wait no, not sorry)

So it seems to me that the smaller you can make a quine in some system with the property that small changes in it mean it produces nearly itself as output, the more likely that system is to produce replicating, evolution-capable things. Or something; I'm making this up as I go along. Is this concept sensical? Is there a computationally feasible way to test anything about it? Has it been discussed over and over?

Maybe... (read more)
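For concreteness, the kind of object the comment is gesturing at: a standard minimal quine in Python (this example is mine, not from the thread):

```python
# A quine: a program whose output is exactly its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

A small edit, such as appending a comment line, yields a program whose output is nearly but not exactly its own source, which is the "small changes produce nearly itself" property the comment asks about.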

Something feels off about this to me. Now I have to figure out if it's because fiction feels stranger than reality or because I am not confronting a weak point in my existing beliefs. How do we tell the difference between the two before figuring out which is happening? Obviously afterward it will be clear, but post-hoc isn't actually helpful. It may be enough that I get to the point where I consider the question.

On further reflection I think it may be that I identify a priori truths with propositions that any conceivable entity would assign a high plausibi... (read more)

Ha, this just happened to me. Luckily it wasn't too painful, because I knew the weakness existed and avoided it, and then reading E. T. Jaynes' "Probability Theory: The Logic of Science" gave me a different and much better belief to patch up my old one. Also, thanks for that recommendation. A lot.

For a while I had been what I called a Bayesian because I thought the frequentist position was incoherent and the Bayesian position elegant. But I couldn't resolve to my satisfaction the problem of scale parameters. I read that there was a prior that was i... (read more)
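For readers who have not met "the problem of scale parameters": the textbook example (my gloss, not part of the truncated comment) is that a prior over a scale parameter sigma should not depend on the choice of units, which forces the improper log-uniform form

```latex
p(\sigma)\, d\sigma \;\propto\; \frac{d\sigma}{\sigma},
\qquad\text{i.e.}\qquad
p(\log \sigma)\ \text{is constant,}
```

which is invariant under rescaling sigma -> k*sigma but cannot be normalized over (0, infinity).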

momom2
You speak as if you have an insight, that you do not share, that I don't understand and would very much like to know. Could you please explain what you mean by "probability [is the] plausibility of situations given states of knowledge" as opposed to the reasoning in the paragraph just before?

Matthew C:

I don't understand why the Million Dollar Challenge hasn't been won. I've spent some time in the JREF forums and as far as I can see the challenge is genuine and should be easily winnable by anyone with powers you accept. The remote viewing, for instance, that I see on your blog. That's trivial to turn into a good protocol. Why doesn't someone just go ahead and prove these things exist? It'd be good for everyone involved. I see you say: "But for the far larger community of psi deniers who have not read the literature of evidence for psi, and... (read more)