I think that defocusing a bit and taking the outside view for a second might be clarifying, so let's not talk about what exactly it is that people do.
Kaj Sotala says that he has identified something which constitutes a major problem source, with exemplary problems a) - f), all very real problems like failing charities and people being unable to work from home. Then you come along and say "there is no problem here," that everything boils down to us just using the wrong definition of motivation (or something). But what about the charities that can't f...
tl;dr: Signalling is extremely important to you. Doing away with your ability to signal will leave you helplessly desperate to get it back.
I think that this is a point made not nearly often enough in rationalist circles: Signalling is important to humans, and you are not exempt just because you know that.
I deny that having a goal as a "strategic feature" is incompatible with being sincerely and deeply motivated. That's my point.
Then you aren't talking about the same thing as Kaj Sotala. He talks about all the cases where it seems to you that you are deeply motivated, but the goal turns out to be, or gets turned into, nothing beyond strategic self-deception. Your point may be valid, but it is about something other than what his post is about.
This is either a very obvious rationalization, or you don't understand Kaj Sotala's point, or both.
The problem Kaj Sotala described is that people have lots of goals, and important ones too, simply as a strategic feature, and they are not deeply motivated to do something about them. This means that most of us who came together here because we think the world could really be better will in all likelihood not achieve much, because we're not deeply motivated to do something about the big problems. Do you really think there's no problem at hand? Then that would mean you don't really care about the big problems.
There aren't enough nuclear weapons to destroy the world, not by a long shot. There aren't even enough nuclear weapons to constitute an existential risk in and of themselves, though they might still contribute strongly to the end of humanity.
EDIT: I reconsidered, and yes, there is a chance that a nuclear war and its aftereffects permanently cripple the potential of humanity (maybe by extinction), which makes it an existential risk. The point I want to make, which was more clearly made by Pfft in a child post, is that this is still something very differe...
I think that you shouldn't keep false formulas, so as not to accidentally learn them. In general, it sounds like you could hit on memetically strong corruptions that could contaminate your knowledge.
(For some reason negotiation in situations of extreme power imbalance seems like it should have a different name, and I don't know what that should be.)
Dominance or Authority spring to mind. In this video Steven Pinker argues that there are three basic relationship types: authority, reciprocity, and communality. Negotiation under an extreme power imbalance sounds like it uses the social rules for authority rather than reciprocity.
Thank you for the link!
I think this would fit well into the introduction. You (or rather Luke Grecki) could just split the "spacing effect" link into two.
This seems to be a useful technique, thanks for introducing it.
I have a bit of criticism concerning the article: it needs more introduction. Specifically, I would guess I'm not the only one who doesn't know what SR is in the first place; a few sentences of explanation would surely help.
Thank you so much for writing this!
Probably not, but you wouldn't (need to) quote what he wrote here.
EDIT: Or rather, what he's written since he's been here, unless it's still novel to LW.
Every time you imagine a person, that simulated person becomes conscious for the duration of your simulation; therefore, it is unethical to imagine people. Actually, it's only morally wrong to imagine someone suffering, but for safety reasons, you shouldn't do it at all. Reading fiction (with conflict in it) is, by this reasoning, the one human endeavor that has caused more suffering than anything else, and the FAI's first action will be to eliminate this possibility.
Long ago, when I was immensely less rational, I actually strongly believed something very similar to this, and acted on that belief by trying to stop my mind from creating models of people. I still feel uneasy about creating highly detailed characters. I probably would go "I knew it!" if the AI said this.
I think he meant to write "invent".
As an aside: the interesting thing to remember about Lysistrata is that it was originally intended as humorous, as the idea that women could withhold sex, and especially withhold it better than men, was hilarious at the time. Not because they weren't allowed to, but because they were considered the horny sex back then.