All of Lavrov Andrey's Comments + Replies

You might be interested in reading about aspiration adaptation theory: https://www.sciencedirect.com/science/article/abs/pii/S0022249697912050

To me, the most appealing part of it is that goals are incomparable: multiple goals can be pursued at the same time, without the need for a function that aggregates them and assigns a single value to a combination of goals.
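For concreteness, here is a minimal sketch of that idea in Python, assuming a toy model (the goal names, urgency order, step size, and simplified retreat rule are my inventions for illustration, not Selten's actual formalism): each goal keeps its own aspiration level, options are screened against all aspirations at once, and nothing ever aggregates the goals into a single number.

```python
# A minimal sketch of the "no aggregation" idea, using an illustrative
# toy model rather than Selten's actual formalism.

def feasible(aspirations, options):
    """Options that meet every aspiration level separately.
    No weighted sum ever collapses the goals into one score."""
    return [o for o in options
            if all(o[g] >= level for g, level in aspirations.items())]

def adapt(aspirations, options, urgency, step=1):
    """Raise aspirations one goal at a time (most urgent first) while
    some option still satisfies all of them; if nothing is feasible,
    retreat on the least urgent goal instead (a simplified retreat rule)."""
    while True:
        ok = feasible(aspirations, options)
        if not ok:
            g = urgency[-1]  # retreat on the least urgent goal
            aspirations = {**aspirations, g: aspirations[g] - step}
            continue
        for g in urgency:  # try to raise the most urgent goal first
            trial = {**aspirations, g: aspirations[g] + step}
            if feasible(trial, options):
                aspirations = trial
                break
        else:
            return aspirations, ok  # no single raise is feasible: stop

# Toy usage: two incomparable goals, two candidate options.
options = [{"income": 5, "leisure": 2}, {"income": 3, "leisure": 4}]
print(adapt({"income": 3, "leisure": 2}, options, urgency=["income", "leisure"]))
# -> ({'income': 5, 'leisure': 2}, [{'income': 5, 'leisure': 2}])
```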

I'm quite late (the post was made 4 years ago), and I'm also new to LessWrong, so it's entirely possible that other, more experienced members will find flaws in my argument.

That being said, I have a very simple, short and straightforward explanation of why rationalists aren't winning.

Domain-specific knowledge is king.

That's it.

If you are a programmer and your code keeps throwing errors at you, then no matter how many logical fallacies and cognitive biases you can identify and name, posting your code on Stack Overflow is going to provide orders of magnitude...

I'm very new to LessWrong in general, and to Eliezer's writing in particular, so I have a newbie question.

> any more than you've ever argued that "we have to take AGI risk seriously even if there's only a tiny chance of it" or similar crazy things that other people hallucinate you arguing.

> just like how people who helpfully try to defend MIRI by saying "Well, but even if there's a tiny chance..." are not thereby making their epistemic sins into mine.

I've read AGI Ruin: A List of Lethalities, and I legitimately have no idea what is wrong with "we have to take...

List of Lethalities isn't telling you "There's a small chance of this."  It's saying, "This will kill us.  We're all walking dead.  I'm sorry."