All of moonlight's Comments + Replies

I love this kind of post, which gives a name to a specific behavior and also gives good examples for identifying it. They feel very validating: they notice the same fallacy that annoys me, but one I encounter so infrequently that it's hard to see any pattern and articulate what feels wrong about it.

As a New User to LessWrong, my calculations show that the post certainly did its job! (n=1 p=0)

This post discussed tangible directions for research in agent foundations, which was really useful for helping me find a foothold on what people in this field "actually" work on.

I'm also keen in general on this approach of writing about your plans and progress yearly; I think it would be great if everyone doing important things (research and otherwise) published something similar. It helps build perspective both for the person writing the post and for readers seeing how the field has changed through their eyes.

A good reference for decorating in an intentional manner. I really like these kinds of posts, which discuss things we tend to just do automatically and bring a "smarter" way to approach them. It made me reconsider the importance of lighting in my room and helped me realize that "oh, yeah, that's actually important, and this is actually a good idea! I'll do that." I hope we can start seeing more posts like this.

A practical exercise which is both fun and helps me think better? Sign me up.

I've definitely enjoyed doing Thinking Physics exercises in my free time. They feel similar to chess in that they're a fun activity that also makes me feel like I'm spending my time on something really useful, which is a great feeling.

They also provide a tangible way of seeing your "prediction ability" for your own thinking and planning improve, which helps with staying motivated to keep up self-improvement exercises.

I can recommend that anyone on the fence about this try their hand at a few Thinking Physics exercises!

Good point.

You've made me realize that I've misrepresented how my intuitive mind processes this. After thinking about it a bit, a better way to write it would be:

Child 1: P(B) = 1/2, P(G) = 1/2
Child 2: P(B) = 1/2, P(G) = 1/2
Combined as unordered set {Child 1, Child 2}

The core distinction seems to be whether you consider it an unordered set or an ordered one. I'm unsure how to represent that in an easy-to-read text format; the form written above is the best I've got.
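If it helps make the distinction concrete, here's a minimal Python sketch of the two representations (assuming, as usual, that each child is independently a boy or a girl with probability 1/2):

```python
from collections import Counter
from itertools import product

# Ordered representation: each child is independently B or G with probability 1/2,
# so every ordered pair has probability 1/4.
ordered = {pair: 0.25 for pair in product("BG", repeat=2)}

# Unordered representation: collapse ordered pairs into multisets of sexes,
# summing the probabilities of the ordered pairs that map to each multiset.
unordered = Counter()
for pair, p in ordered.items():
    unordered["".join(sorted(pair))] += p

print(ordered)          # ('B','B'), ('B','G'), ('G','B'), ('G','G'): 0.25 each
print(dict(unordered))  # {'BB': 0.25, 'BG': 0.5, 'GG': 0.25}
```

Written this way, collapsing to the unordered set is easy to state, but the collapsed outcomes inherit weights of 1/4, 1/2, 1/4 rather than being equally likely.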

moonlight

when we hear the "I have two children, at least one of whom is a boy" part, we set the probability of two boys to 1/3 because the possibilities {(boy, girl), (girl, boy), (boy, boy)} are a-priori equally likely


Why is this the most common assumption? It has never made much sense to me whenever I've encountered this problem.

It's much more intuitive to think about the scenario as:

2xB
1xB 1xG
2xG

Rather than:

BB
BG
GB
GG

And to come to an answer of 1/2 instead of 1/3. The question doesn't state anything about the children's gender being related to the order in which they were born.

Liron
By your logic, if I ask you a totally separate question, "What's the probability that a parent's two kids are both boys?", would you answer 1/3? Because the correct answer should be 1/4, right? So something about your preferred methodology isn't robust.
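For anyone who wants to check both numbers, here's a minimal Monte Carlo sketch in Python, under the usual assumption that each child is independently a boy or a girl with probability 1/2 and that "at least one boy" just filters out the all-girl families:

```python
import random

random.seed(0)
N = 1_000_000

both_boys = 0
at_least_one_boy = 0
both_boys_given_one = 0

for _ in range(N):
    kids = [random.choice("BG") for _ in range(2)]
    if kids.count("B") == 2:
        both_boys += 1
    if "B" in kids:
        at_least_one_boy += 1
        if kids.count("B") == 2:
            both_boys_given_one += 1

print(both_boys / N)                           # ~0.25: unconditional P(two boys)
print(both_boys_given_one / at_least_one_boy)  # ~0.33: P(two boys | at least one boy)
```

The unconditional answer comes out near 1/4, and conditioning on "at least one boy" pushes it to roughly 1/3, since the filter only removes the girl-girl families.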

Proposed Feature for LessWrong: Live AI Toolbox

A wiki page/post/something where people can find AI tools and use cases useful for AI Safety (or even generally useful in your life), which gets updated as new tools/models come out.

I feel this is a place where everyone's dropping the ball. As capabilities improve, AI tools become more and more useful, and we could make use of a centralized toolbox to stay up to date on what's (probably) best for each job, and to share particularly helpful ways to use them.

Example 1: I've been struggling while trying to le...