
Comment author: siIver 09 September 2017 11:44:11AM 0 points

"Less than a third of students by their own self-appointed worst-case estimate *1."

Missing a word here, I think.

Comment author: siIver 09 September 2017 10:46:05AM 0 points

I think your post is spot on.

Comment author: siIver 31 August 2017 10:56:35PM 1 point

Re-live. Although I'd rather live the same amount of time from now onward.

Comment author: siIver 20 July 2017 07:41:08PM 4 points

First question: I know you admire Trump's persuasion skills, but what I want to know is why you think he's a good person/president, etc.

Answer: [talks about Trump's persuasion skills]

Yeah, okay.

Comment author: siIver 09 July 2017 02:18:59PM 0 points

This is an exceptionally well-reasoned article, I'd say. Particular props for the appropriate amount of uncertainty.

Comment author: cousin_it 07 July 2017 08:46:08PM 4 points

Yeah, that seems to be the biggest flaw in the post. I shouldn't have addressed it to everyone; it's intended mostly for people suffering from "akrasia". I.e., if the lone-wolf approach is working for you, ignore the post. If it isn't, notice that and change course.

Comment author: siIver 07 July 2017 08:52:27PM 1 point

Well, if you put it like that, I fully agree. Generally, I believe that "if it doesn't work, try something else" isn't followed as often as it should be. There's probably a fair number of people who'd benefit from following this article's advice.

Comment author: siIver 07 July 2017 08:41:09PM 0 points

I don't quite know how to make this response more sophisticated than "I don't think this is true". It seems to me that whether classes or lone-wolf improvement is better is a pretty complex question, and the answer is fairly balanced, though overall I'd give the edge to lone-wolf.

Comment author: siIver 04 July 2017 08:23:10PM 1 point

I don't know what our terminal goals are (more precisely than "positive emotions"). I think it doesn't matter, insofar as the answer to "what should we do?" is "work on AI alignment" either way. Modulo that, yeah, there are some open questions.

On the thesis that suffering requires higher-order cognition in particular, I have to say that sounds incredibly implausible (for what I think are fairly obvious reasons involving evolution).

Comment author: siIver 15 June 2017 07:30:48PM 1 point

This looks solid.

Can you go into a bit of detail on the level/spectrum of difficulty of the courses you're aiming for, and the background knowledge that'll be expected? I suspect you don't want to discourage people, but realistically speaking, the bar can hardly be low enough to allow everyone who's interested to participate meaningfully.

Comment author: AlexMennen 09 June 2017 05:39:20AM 0 points

Sometimes when explicit reasoning and intuition conflict, intuition turns out to be right, and there is a flaw in the reasoning. There's nothing wrong with using intuition to guide yourself in questioning a conclusion you reached through explicit reasoning. That said, DragonGod did an exceptionally terrible job of this.

Comment author: siIver 10 June 2017 03:01:42PM 1 point

Yeah, you're of course right. In the back of my mind I realized that the point I was making was flawed even as I was writing it. A much weaker version of the same would have been correct: "you should at least question whether your intuition is wrong." In this case it's just very obvious to me that there is nothing to be fixed about utilitarianism.

Anyway, yeah, it wasn't a good reply.
