
Comment author: Bound_up 10 March 2017 02:26:16PM 5 points

I had an idea for increasing average people's rationality. It has four qualities:

It doesn't seem or feel "rationalist" or "nerdy."

It can work without people understanding why it works.

It can be taught without understanding its purpose.

It can be perceived as being about politeness.

The idea: a high school class where people try to pass Intellectual Turing Tests. It's not POLITE or sophisticated to assert opinions if you can't show that you understand the people you're saying are wrong.

We already have a lot of error-detection ability when our minds criticize others' ideas; we just need to access that power for our own ideas.

Comment author: itaibn0 17 March 2017 10:01:30PM 0 points

I'm not sure to what extent you want people to criticize ideas in this thread, so I'm going to test the waters. Give me feedback on how well this matches the norms you envision.

An immediate flaw comes to mind that any elaboration of this idea should address: changing the high school curriculum is very difficult. If you've acquired the social capital to change a high school's curriculum, you shouldn't spend it on such a small, marginal contribution; you could probably find something with a larger effect for the same social capital.

Comment author: gworley 04 January 2017 01:57:12AM 0 points

Oh, it's surprising to me that you feel that way, because that's where the dialectic starts; everything up to that point was a sort of introduction and apology to give context before stating the thesis, as a way of setting up the dialectic.

What didn't you like about the later parts of the piece?

Comment author: itaibn0 04 January 2017 01:20:56PM 0 points

You start the discussion with a very practical frame: "Here is some advice I intend to give you." You give caveats, then you give the advice, and then you give some justification. The advice sounds plausible. But then you continue into a very philosophical discussion of what fear is and what people think about it, which doesn't appear to tie in with the practical frame. Granted, your article would appear very lopsided with so much caveat and so little content, but I don't see how the later parts help. Alternatively, you could remove everything up to the 10th paragraph and write a very different sort of essay.

Comment author: Kindly 22 December 2016 02:33:57AM 7 points

It's extremely unlikely that there are people who aren't weirded out by Solstices in general, but for whom one song lyric is the straw that breaks the camel's back.

Comment author: itaibn0 04 January 2017 01:09:14AM 1 point

"Straw that breaks the camel's back" implies the existence of a large pre-existing weight, so your claim is a tautology.

Comment author: itaibn0 04 January 2017 12:03:55AM 2 points

You point out a problem: There's no way to tell which organizations are making progress on AI alignment, and there is little diversity in current approaches. You turn this into the question: How do we create prizes that incentivize progress in AI alignment? You're missing a step or two here.

I'd say the logic goes in the opposite direction: because there are no clear, objectively measurable targets that would improve AI safety, prizes are probably a bad idea for increasing the diversity and effectiveness of AI safety research.

Comment author: itaibn0 03 January 2017 11:26:14PM 1 point

Writing suggestion: Drop everything past the 10th paragraph ("It’s not immediately obvious that you’d want to overcome fear, though...").

[Link] Why I Am Changing My Mind About AI Risk

itaibn0 03 January 2017 10:57PM 4 points
Comment author: ChristianKl 05 December 2016 12:41:39PM 3 points

It became that way because there was a group of people who seriously believed they could improve the way they think; a few noticed they didn't have any good reason to be monogamous, set out to convince the others, and succeeded.

I don't buy that account of the history as being complete. Many people in the rationality community have contact with other communities that also have a higher prevalence of polyamory; the vegan community, for example, also has a higher share of polyamorous people.

Comment author: itaibn0 06 December 2016 12:59:21AM 0 points

Perhaps I should not have used such sensationalist language. I admit I don't know the whole story, and that a more detailed account would likely turn up many nonrational reasons for the change. Still, I suspect rational persuasion did play a role, even if not a complete one. Anecdotally, the Less Wrong discussion changed my opinion of polyamory from "haven't really thought about it that much" to "sounds plausible but I haven't tried it".

In any case, if your memory of that section of Less Wrong history contributes positively to your nostalgia, it's worth reconsidering the chance that events like that will ever happen again.

Comment author: itaibn0 04 December 2016 10:53:08PM 0 points

Given the community's initial heavy interest in heuristics-and-biases research, I am amused that there is no explicit mention of the sunk cost fallacy. Seriously, watch out for that.

My opinion is that revitalizing the community is very likely to fail, and I am neutral on whether it's worth it for the current prominent rationalists to try anyway. A lot of people are suggesting restoring the website with a more centralized structure. It should be obvious that the result won't work the same as the old Less Wrong.

Finally, a reminder about Less Wrong history, which suggests that we lost more than a group of high-quality posters: Less Wrong wasn't always a polyamory hub. It became that way because there was a group of people who seriously believed they could improve the way they think; a few noticed they didn't have any good reason to be monogamous, set out to convince the others, and succeeded. Do you think a change of that scale will ever happen again in the future of the rationalist community?

Comment author: NancyLebovitz 07 October 2015 02:19:10PM 1 point

Argument against: back when cities were more flammable, people didn't set them on fire for the hell of it.

On the other hand, it's a lot easier to use a timer and survive these days, should you happen to not be suicidal.

"I want to see the world burn" is a great line of dialogue, but I'm not convinced it's a real human motivation. Um, except that when I was a kid, I remember wishing that this world was a dream, and I'd wake up. Does that count?

Second thought: when I was a kid, I didn't have a method in mind. What if I do serious work with lucid dreaming techniques when I'm awake? I don't think the odds of waking up into being a greater intelligence are terribly good, nor is there a guarantee that my life would be better. On the other hand, would you, hallucinations, be interested in begging me not to try it?

Comment author: itaibn0 07 October 2015 11:08:55PM -1 points

Based on personal experience, if you're dreaming I don't recommend trying to wake yourself up. Instead, enjoy your dream until you're ready to wake up naturally. That way you'll have far better sleep.

Comment author: Elo 28 July 2015 01:17:53AM 0 points

"assume unbelievable X".

Only this is not an unbelievable X; it's an entirely believable X. (I wouldn't have any reason to ask about an unbelievable one, nor would anyone asking a question, unless they were actually trying to trick you.) In fact, assuming that people are asking you to believe an "unbelievable X" is a strawman of the argument in question.

Invalidating someone else's question (by attacking it or trying to defeat its purpose) on the grounds that they weren't able to ask the right question, or because you want to answer a different question, is not a reasonable way to win a discussion. I am really not sure how to be clearer about this. Discussions are not about winning. One doesn't need to kill a question to beat it; one needs to fill its idea-space with juicy information-y goodness to satisfy it.

Yes, it is possible to resolve a question by cutting it up. {Real-world example: someone asks you for help. You could defeat the question by figuring out how to stop them from asking for help, or by finding out why they want help and making sure they won't need it in the future, or can help themselves. Or you could actually help them.}

Or you could actually respond in a way that helps. There is an argument about giving a man a fish versus teaching him to fish, but it's not applicable here, because you have to assume that people asking about fishing for sharks already know how to fish for normal fish. Give them the answers (the shark meat); then, if that doesn't help, teach them how to fish for sharks! Don't tell them they don't know how to fish for normal fish and then try to teach them to fish for normal fish, suggesting they can just eat normal fish.

All of this assumes there isn't something wrong with the question I originally asked and how I presented it.

More importantly, that is a different (if sometimes related) problem, one that could be addressed by a different question at a different time if that were what I had asked about. And it is one I will ask later, but of myself. It is irrelevant to the main question.

Can you do me a favour and try to steelman the question I asked? And see what the results are, and what answer you might give to it?

Conversation and discussion isn't about what you want. It's about what each of us wants.

Yes, this is true, but as the entity who started the thread (or a conversation generally) I should have more say about its purpose and what is wanted from it. Of course you can choose not to engage; you can also derail a thread, but that is not something you should do. I am trying to point out that the way you chose to engage was not productive (apart from accidentally providing an example of failing to answer the question).

The original question again:

Do you have suggestions for either:

a. dealing with it

b. getting people to answer the right question

Comment author: itaibn0 28 July 2015 06:51:57PM 3 points

"assume unbelievable X".

Only this is not an unbelievable X; it's an entirely believable X. (I wouldn't have any reason to ask about an unbelievable one, nor would anyone asking a question, unless they were actually trying to trick you.) In fact, assuming that people are asking you to believe an "unbelievable X" is a strawman of the argument in question.

Are you sure that's how you want to defend your question? If you defend the question by saying that its premise is believable, you are implicitly endorsing the standard that questions should only be answered if they are reasonable. However, accepting this standard runs the risk that your conversational partner will judge your question to be unreasonable even when it isn't, and fail to answer it, in exactly the way you're complaining about. A better standard, for the purpose of getting people to answer the questions you ask literally, is that people should answer the questions you ask literally even if they rely on fantastic premises.

Can you do me a favour and try to steelman the question I asked? And see what the results are, and what answer you might give to it?

A similar concern applies here: recall that steelmanning means, when encountering an argument that seems clearly flawed, not responding to that argument but strengthening it in ways that seem reasonable to you and answering that instead. That sounds like the exact opposite of what you want people to do with your questions.
