Comment author: Cyan2 05 February 2008 11:51:35PM 3 points [-]

JonathanG,

Actually, the third bull is just straight up lying. (That's why Dmitriy called the puzzle silly.)

Comment author: Chalybs_Levitas 19 November 2011 01:13:26PM 0 points [-]

Oh, I assumed that they were walking in a circle and the third bull was counting both ahead of him and behind him, even though those bulls are both the same, on the assumption that 'single file' =/= 'straight line'.

Comment author: TGGP4 10 August 2008 02:58:13AM 4 points [-]

"The whole history of civilization has shown that richer, smarter, better educated civilizations are more likely to agree about heaps that their ancestors disputed." Are you saying there is in general more agreement among later civilizations, so that disagreement should asymptotically approach zero? That would seem odd to me, because it conflicts with the fish, who have no disagreements at all. So then what does it mean?

Comment author: Chalybs_Levitas 19 November 2011 11:51:47AM 5 points [-]

The fish do not build heaps at all, and are therefore incapable of civilization or even meaningful disagreement on the correctness of heaps. So they should be excluded. (is what the PebbleSorter people might have thought)

Comment author: Chalybs_Levitas 19 November 2011 10:55:07AM 2 points [-]

I like that you specifically noted an exception in the case of foreign languages, as it was the one salient point I would have raised otherwise. Not that I think it's the only point of contention that could be raised, merely that it's the only one I would have brought up. I wish you had emphasized it a little more heavily in your rhetoric, though that might just be my own biases in play.

In response to That Alien Message
Comment author: Will_Pearson 22 May 2008 08:03:32AM 0 points [-]

This story reminds me of His Master's Voice by Stanisław Lem, which has a completely different outcome when humanity tries to decode a message from the stars.

Some form of proof of concept would be nice. Alter OOPS to use Occam's razor, or implement AIXI-tl, and then give it a picture of a bent piece of grass or three ball frames, and see what you get. As long as GR is in the hypothesis space, it should, by your reasoning, be the most probable hypothesis after these images. The unbounded, uncomputable versions shouldn't have any advantage in this case.

I'd be surprised if you got anything like modern physics popping out. I'll do this test on any AI I create. If any of them have hypotheses like GR, I'll stop working on them until the friendliness problem has been solved. This should be safe, unless you think it could deduce my psychology from this as well.

Comment author: Chalybs_Levitas 19 November 2011 09:08:54AM 0 points [-]

What if GR is wrong, and it does not output GR because it spots the flaw that we do not?

In response to Lawful Uncertainty
Comment author: Chalybs_Levitas 19 November 2011 07:55:39AM 0 points [-]

"There are lawful forms of thought that still generate the best response, even when faced with an opponent who breaks those laws"

I've only just come to the Bayesian way of thought, so please direct me to the correct answer if I'm not thinking about this right:

If I and my opponent are of equal training, rationality, ability, and intellect, except that my opponent has a 10% chance of doing something completely at odds with rationality as we both understand it due to some mental damage: how should I plan to face him?

If I have plan A to deal with his plan A, plan B to deal with his plan B, and so on (as close as I am capable of discerning them), is there a rational way to deal with this unpredictable element, and how do I determine how much of my resources to spend on this plan?

That is: how do I plan in the face of the unpredictable, especially in cases where I do not have the resources to cover every eventuality?
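One way to make the question concrete is to model the opponent as a mixture: 90% of the time he plays the rational response, 10% of the time he plays uniformly at random, and I pick whichever of my plans has the highest expected payoff under that mixture. The sketch below uses a made-up 3x3 payoff matrix purely for illustration (the numbers, plan names, and the uniform-random model of "irrationality" are all assumptions, not anything from the original comment):

```python
# Sketch: expected-utility planning against a mostly-rational opponent.
# Assumption: "irrational" moves are modeled as uniformly random.

# payoff[i][j] = my payoff when I play plan i and the opponent plays plan j
# (illustrative numbers only)
payoff = [
    [3, 0, 1],   # my plan A
    [1, 2, 0],   # my plan B
    [0, 1, 4],   # my plan C
]

P_IRRATIONAL = 0.10  # chance the opponent acts at odds with rationality

def opponent_distribution(my_plan):
    """Mixture model: with probability 0.9 the opponent plays the response
    that minimizes my payoff against my_plan; with probability 0.1 he
    plays each of his plans uniformly at random."""
    n = len(payoff[0])
    best_response = min(range(n), key=lambda j: payoff[my_plan][j])
    dist = [P_IRRATIONAL / n] * n
    dist[best_response] += 1 - P_IRRATIONAL
    return dist

def expected_payoff(my_plan):
    """My expected payoff for my_plan under the mixture model."""
    dist = opponent_distribution(my_plan)
    return sum(p * payoff[my_plan][j] for j, p in enumerate(dist))

# Rational planning here just means maximizing expected payoff;
# the 10% unpredictability is priced in rather than planned for case-by-case.
best_plan = max(range(len(payoff)), key=expected_payoff)
```

The point of the sketch: the unpredictable element doesn't need its own contingency plan; it shifts the probability distribution over the opponent's moves, and resources go wherever expected payoff says they should.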

Comment author: Chalybs_Levitas 19 November 2011 06:44:58AM 2 points [-]

Hello, I am Alexander, and also a number of variations on Chalybs Levitas (depending on the screenname parameters of the site I'm signing up to).

I don't consider myself a rationalist, yet. I still have a lot to learn, but I've started working my way through the Sequences, and I've started my walk through the other articles by opening a new tab at each new link.

I value language, and I am practicing my craft as a writer (I'm terrible) as well as studying Japanese (also terrible there).

I chose Japanese as the foreign language to study first in part because I want to move to Japan, and I've signed up to the site because one of the things I've learned through reading the articles and Mr. Yudkowsky's fiction is that people are not pessimistic enough in preparing their plans. I tried to apply pessimism to my current plan to live in Japan, and I don't think I got it right. I'm hoping to learn more, and to work out answers I would not have found on my own, by talking with the community here.

Phew.

Nice meeting you all, ~Alexander