In response to Infinite Certainty
Comment author: Paul_Gowder 09 January 2008 08:13:52AM 7 points [-]

We can go even stronger than mathematical truths. How about the following statement?

~(P & ~P)

I think it's safe to say that if anything is true, that statement (the flipping law of non-contradiction) is true. And it's a precondition for any other knowledge, if for no other reason than that if you deny it, you can prove anything. I mean, there are logics that permit contradictions, but then you're in a space that's completely alien to normal reasoning.
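The "if you deny it, you can prove anything" step is the principle of explosion (ex falso quodlibet): from a contradiction, any proposition follows. A minimal sketch in Lean, assuming nothing beyond the core library:

```lean
-- Principle of explosion: from P and ¬P together, any Q follows.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2
```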

So that's *lots* stronger than 2+2=4. You can reason without 2+2=4. Maybe not very well, but you can do it.

So Eliezer, do you have a probability of 1 in the law of non-contradiction?

Comment author: sullyj3 10 July 2016 04:15:56AM 0 points [-]

The truth of probability theory itself depends on non-contradiction, so I don't think probability is a valid framework for reasoning about the truth of fundamental logic: if logic is suspect, probability itself becomes suspect.

Comment author: casebash 05 January 2016 04:22:35AM *  2 points [-]

An update to this post

It appears that this issue has been discussed before in the thread Naturalism versus unbounded (or unmaximisable) utility options. The discussion there didn't end up drawing the conclusion that perfect rationality doesn't exist, so I believe this current thread adds something new.

Instead, the earlier thread considers the Heaven and Hell scenario, where you can spend X days in Hell to earn the opportunity to spend 2X days in Heaven. Most of the discussion on that thread concerned the number of days an agent should count to before exiting. Stuart Armstrong also arrives at the same conclusion, demonstrating that this problem isn't caused by unbounded utility.

Qiaochu Yuan summarises one of the key takeaways: "This isn't a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions. Because of the possible failure of the ability to exchange limits and integrals, the expected utility of a sequence of infinitely many decisions can't in general be computed by summing up the expected utility of each decision separately."
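Qiaochu's point can be seen in a toy version of the Heaven and Hell scenario (the payoff numbers below are my own simplification, not from the thread): every single "stay one more day" decision looks good in isolation, yet the limiting policy of always staying is the worst possible one.

```python
# Toy Heaven-and-Hell model (assumed payoffs): each day in Hell costs
# 1 utilon, and quitting after n days of Hell buys 2*n days in Heaven
# at +1 utilon each.

def utility_stop_at(n: int) -> int:
    """Total utility of the policy 'endure n days of Hell, then quit'."""
    return 2 * n - n  # = n: strictly increasing, so no best stopping day

# Every one-step comparison says "keep going":
assert all(utility_stop_at(n + 1) > utility_stop_at(n) for n in range(1000))

# Yet the pointwise limit of these policies is "never quit", whose utility
# is a sum of infinitely many -1 days, diverging to minus infinity:
partial_sums = [sum(-1 for _ in range(n)) for n in (10, 100, 1000)]
print(partial_sums)  # [-10, -100, -1000]
```

The per-decision sums and the utility of the limiting policy disagree, which is exactly the limit-exchange failure Qiaochu describes.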

Kudos to Andreas Giger for noticing what most of the commenters seemed to miss: "How can utility be maximised when there is no maximum utility? The answer of course is that it can't." This comes incredibly close to stating that perfect rationality doesn't exist, but it was only implied, never stated explicitly.

Further, Wei Dai's comment on a randomised strategy that obtains infinite expected utility raises an interesting problem that will be addressed in my next post.

Comment author: sullyj3 06 January 2016 06:13:57AM 0 points [-]

Kudos to Andreas Giger for noticing what most of the commenters seemed to miss: "How can utility be maximised when there is no maximum utility? The answer of course is that it can't." This comes incredibly close to stating that perfect rationality doesn't exist, but it was only implied, never stated explicitly.

I think the key is infinite vs finite universes. Any conceivable finite universe can be arranged in a finite number of states, one, or perhaps several of which, could be assigned maximum utility. You can't do this in universes involving infinity. So if you want perfect rationality, you need to reduce your infinite universe to just the stuff you care about. This is doable in some universes, but not in the ones you posit.
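The finite-versus-infinite point can be made concrete with a small sketch (the utility functions here are illustrative assumptions, not from the comment): over a finite state space an argmax always exists, but over an infinite space a supremum need not be attained.

```python
# Finite state space: a maximum-utility state always exists.
finite_states = range(10)
best = max(finite_states, key=lambda s: -(s - 4) ** 2)  # argmax exists
assert best == 4

# Infinite state space: u(n) = 1 - 1/n has supremum 1, but no state
# achieves it, so "pick the maximum-utility state" is not a
# well-defined policy.
def u(n: int) -> float:
    return 1 - 1 / n

assert all(u(n) < 1 for n in range(1, 10_000))  # sup never attained
```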

In our universe, we can shave off the infinity, since we presumably only care about our light cone.

Comment author: sullyj3 27 September 2015 01:47:03PM *  0 points [-]

Unfortunately the only opinions you're gonna get on what should be instituted as a norm are subjective ones. So... Take the average? What if not everyone thinks that's a good idea? Etc, etc, it's basically the same problem as all of ethics.

Drawing that distinction between normative and subjective offensiveness still seems useful.

Comment author: sullyj3 09 September 2015 07:02:23PM 0 points [-]

Just encountered an interesting one:

Eradication of the Parasitoid Wasp is genocide!

Comment author: [deleted] 17 September 2014 03:53:59AM *  2 points [-]

This seems to be the nature of internet groups in general... most people just don't have the ability to commit to something for the long term with no external incentives and weak social ties. I've tried numerous colearning groups, masterminds, etc., and it's always a struggle.

Even my coaching clients, whom I give twenty bucks every time they make a meeting, often take 2-3 MONTHS before they can start to do this with anything resembling consistency.

That being said, here's what I've found helps:

  1. Text reminders 15 minutes before every meeting to every person who should show up.
  2. Create carrots and sticks using Beeminder, HabitRPG, or Lift as a group.
  3. Ping participants through facebook or text one or two times throughout the week with something interesting related to the topic of the group to keep the group top of mind.
  4. Create real life bonds outside of the "let's talk about what we learned" dynamic.

In response to comment by [deleted] on What are you learning?
Comment author: sullyj3 31 July 2015 06:00:37PM 1 point [-]

Perhaps a solution could be to create stronger social ties; video chat? Could be good for asking each other for help and maybe progress reports for accountability and positive reinforcement.

Comment author: Viliam_Bur 15 September 2014 10:51:25AM 2 points [-]

META (anything other than a specific topic to learn)

Comment author: sullyj3 31 July 2015 05:46:05PM 0 points [-]

As an interested denizen of 2015, it might be cool to make this a regular (say, monthly?) thread, with a tag for the archive.

Comment author: Larks 11 March 2013 06:43:11PM 0 points [-]

My guess is that deduction, along with Bayesian updating, is being considered part of our rules of inference, rather than part of our axioms.

Comment author: sullyj3 31 July 2015 04:04:00PM 0 points [-]

Oh, like Achilles and the tortoise. Thanks, this comment clarified things a bit.

Comment author: Eliezer_Yudkowsky 11 March 2013 05:59:38PM 3 points [-]

To amplify on Qiaochu's answer, the part where you promote the Solomonoff prior is Bayesian deduction, a matter of logic - Bayes's Theorem follows from the axioms of probability theory. It doesn't proceed by saying "induction worked, and my priors say that if induction worked it should go on working" - that part is actually implicit in the Solomonoff prior itself, and the rest is pure Bayesian deduction.
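The claim that the update itself is pure deduction can be illustrated with a toy Bayesian update, where the posterior follows mechanically from the probability axioms (the numbers are made up for illustration):

```python
# Bayes's theorem as deduction from the probability axioms:
# P(H|E) = P(E|H) * P(H) / P(E), with P(E) by total probability.

prior_h = 0.3          # P(H)
likelihood = 0.8       # P(E|H)
p_e_given_not_h = 0.2  # P(E|~H)

p_e = likelihood * prior_h + p_e_given_not_h * (1 - prior_h)  # P(E) = 0.38
posterior = likelihood * prior_h / p_e  # 0.24 / 0.38

print(round(posterior, 4))  # 0.6316
```

Nothing here appeals to "induction worked before"; given the prior, the posterior is forced by the theorem alone.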

Comment author: sullyj3 31 July 2015 03:58:26PM *  0 points [-]

Doesn't this add "the axioms of probability theory", i.e. "logic works", i.e. "the universe runs on math", to our list of articles of faith?

Edit: After further reading, it seems like this is entailed by the "Large ordinal" thing. I googled well-orderedness, encountered the Wikipedia article, and promptly shat a brick.

What sequence of maths do I need to study to get from Calculus I to set theory, and to what the hell well-orderedness means?

Comment author: iarwain1 21 December 2014 12:31:17AM 5 points [-]

Nobody answers - bystander effect. (Not necessarily applicable here since nobody's asking for help, but still.)

Comment author: sullyj3 18 January 2015 01:11:18PM *  3 points [-]

I feel like it would've been even better if no one ended up explaining to Capla.

Comment author: chaosmage 22 November 2014 11:29:07AM *  13 points [-]

I'm actually grateful for having heard about that Basilisk story, because it helped me see Eliezer Yudkowsky is actually human. This may seem stupid, but for quite a while, I idealized him to an unhealthy degree. Now he's still my favorite writer in the history of ever and I trust his judgement way over my own, but I'm able (with some System 2 effort) to disagree with him on specific points.

I can't imagine I'm entirely alone in this, either. With the plethora of saints and gurus about, it seems evident that human (especially male) psychology has a "mindless follower switch" that suspends all doubt about the judgement of agents beyond some threshold of perceived competence.

Of course such a switch makes a lot of sense from an evolutionary perspective, but it is still a fallible heuristic, and I'm glad to have become aware of it - and the Basilisk helped me get there. So thanks Roko!

Comment author: sullyj3 09 December 2014 12:13:24AM 1 point [-]

What makes you think it's more common in males?
