All of Fleisch's Comments + Replies

Fleisch80

As an aside: the interesting thing to remember about Lysistrata is that it was originally intended as humorous, since the idea that women could withhold sex, and withhold it better than men, was hilarious at the time. Not because they weren't allowed to, but because women were considered the horny sex back then.

Fleisch100

I think that defocussing a bit and taking the outside view for a second might be clarifying, so let's not talk about what it is exactly that people do.

Kaj Sotala says that he has identified something which constitutes a major source of problems, with example problems a) - f), all very real problems like failing charities and people being unable to work from home. Then you come and say "there is no problem here," that everything boils down to us just using the wrong definition of motivation (or something). But what's with the charities that can't f...

2quen_tin
Sorry, but I am only refining the statement I made from the start, which in my view is still perfectly relevant to the material. You don't agree with me; now let's not lose too much time on meta-discussions... I understand your concern about the problems mentioned in the article, and your feeling that I don't address them. You're right, I don't: my feeling about these problems is that they occur in complex situations where lots of actors are involved, and I am not convinced at all that they result from a lack of motivation or a problem of unconscious motivation hijacking.
Fleisch220

tl;dr: Signalling is extremely important to you. Doing away with your ability to signal will leave you helplessly desperate to get it back.

I think that this is a point made not nearly often enough in rationalist circles: Signalling is important to humans, and you are not exempt just because you know that.

Fleisch70

I deny that having a goal as a "strategic feature" is incompatible with being sincerely and deeply motivated. That's my point.

Then you aren't talking about the same thing as Kaj Sotala. He talks about all the cases where it seems to you that you are deeply motivated, but the goal turns out to be, or gets turned into, nothing beyond strategic self-deception. Your point may be valid, but it is about something other than what his post is about.

-3quen_tin
Imagine that in the current discussion, we suddenly realize that we've been writing all that time not to find the truth, but to convince each other (which I think is actually the case). It would be one of those situations where someone like Kaj Sotala would say: "it seems you're deeply motivated in finding the truth, but you're only trying to make people think you have the truth (=convince them)". Then my point would be: unless you're cynical, convincing and finding the truth are exactly the same. If you're cynical, you just think short term and your truth won't last (people will soon realize you were wrong). If you're sincere, you think long term and your truth will last. I would even argue that the only proper definition of truth is: what convinces most people in the long run. Similarly, a proper definition of good (or "important to do") would be: what brings gratitude from most people in the long run.
0quen_tin
I don't draw a distinction between having a goal and seeking gratitude for that goal; it's exactly the same for me. Something is important if it deserves a lot of gratitude, something is not if it does not. That's all. The "gratitude" part is intrinsic. If you accept my view, Kaj Sotala's statement is nonsense: it can't turn out to be strategic self-deception when we thought we were deeply motivated, because we're seeking gratitude from the start (which is precisely what "being deeply motivated" means). If at one point we discover that we've been looking for gratitude all that time, then we don't discover that we've been fooling ourselves; we're only beginning to understand the true nature of any goal.
Fleisch20

This is either a very obvious rationalization, or you don't understand Kaj Sotala's point, or both.

The problem Kaj Sotala described is that people have lots of goals, important ones too, simply as a strategic feature, and they are not deeply motivated to do something about them. This means that most of us who came together here because we think the world could really be better will, in all likelihood, not achieve much, because we're not deeply motivated to do something about the big problems. Do you really think there's no problem at hand? Then that would mean you don't really care about the big problems.

-3quen_tin
Let me rephrase. The assumption that there would exist pure gratitude-free goals is a myth: pursuing such goals would be absurd. (People who seem to perform gratitude-free actions are often religious people: they actually believe in divine gratitude.) Therefore social gratitude is an essential component of any goal, and thus it is not correlated with a lack of sincere motivation, nor does it "downgrade" the goal to something less important. It's just part of it.
0quen_tin
I deny that having a goal as a "strategic feature" is incompatible with being sincerely and deeply motivated. That's my point. More precisely: either one is consciously seeking gratitude, in which case he/she is cynical, but I think this is rarely the case; or seeking gratitude is only one aspect of a goal that is sincerely pursued (which means that one wants to deserve that gratitude for real). Then there is no problem; the motivation is there.
Fleisch240

There aren't enough nuclear weapons to destroy the world, not by a long shot. There aren't even enough nuclear weapons to constitute an existential risk in and of themselves, though they might still contribute strongly to the end of humanity.

EDIT: I reconsidered, and yes, there is a chance that a nuclear war and its aftereffects permanently cripple the potential of humanity (maybe by extinction), which makes it an existential risk. The point I want to make, which was made more clearly by Pfft in a child post, is that this is still something very differe...

8kilobug
"Destroy the world" can mean many things. There aren't nearly enough nuclear weapons to blast Earth itself, the planet will continue to exist, of course. The raw destructive power of nukes may not be enough to kill most of humanity, yes. Targeted on major cities, it'll still kill an enormous amount of people, an overwhelming majority of the targeted country for industrial (ie, urban) countries. But that's forgetting all the "secondary effects" : direct radioactive fallouts, radioactive contamination of rivers and water sources, nuclear winter, ... those are pretty sure to obliterate in the few next years most of the remaining humanity. Maybe not all of us. Maybe a few would survive, in a scorched Earth, without much left of technological civilization. That's pretty much "destroy the world" to me.
Fleisch40

I think that you shouldn't keep false formulas around, so as not to accidentally learn them. In general, this sounds like you could hit on memetically strong corruptions which could contaminate your knowledge.

Fleisch00

(For some reason negotiation in situations of extreme power imbalance seems like it should have a different name, and I don't know what that should be.)

Dominance or Authority spring to mind. In this video Steven Pinker argues that there are three basic relationship types: authority, reciprocity, and communality. Negotiation under extreme power imbalance sounds like it uses the social rules for authority rather than reciprocity.

Fleisch00

Thank you for the link!

I think it would fit well into the introduction. You (or rather Luke Grecki) could just split the "spacing effect" link into two.

Fleisch00

This seems to be a useful technique; thanks for introducing it.

I have a bit of criticism concerning the article: it needs more introduction. Specifically, I would guess I'm not the only one who doesn't know what SR is in the first place; a few sentences of explanation would surely help.

5Kevin
Previously discussed a fair amount on Less Wrong. I made a wiki article and linked some of the articles/comments. http://wiki.lesswrong.com/wiki/Spaced_repetition
Fleisch00

Thank you so much for writing this!

Fleisch00

Probably not, but you wouldn't (need to) quote what he wrote here.

EDIT: Or rather, what he's been writing since he's been here, unless it's still novel to LW.

Fleisch390

Every time you imagine a person, that simulated person becomes conscious for the duration of your simulation; therefore, it is unethical to imagine people. Actually, it's only morally wrong to imagine someone suffering, but for safety reasons, you shouldn't do it at all. Reading fiction (with conflict in it) is, by implication, the one human endeavor that has caused more suffering than anything else, and the FAI's first action will be to eliminate this possibility.

0Voltairina
Even more sinister, maybe: suppose it said there's a level of processing on which you automatically interpret things in an intentional frame (à la Dan Dennett), and this ability to "intentionalize" things effectively simulates suffering/minds all the time in everyday objects in your environment, and that further, while we can correct it in our minds, this anthropomorphic projection happens as a necessary product, somehow, of our consciousness. Consciousness as we know it IS suffering, and to create an FAI that won't halt the moment it figures out that it is causing harm with its own thought processes, we'll need to think really, really far outside the box.
1DanielLC
I find the idea that they're conscious more likely than the idea that death is inherently bad. I also doubt that they're as conscious as humans (either consciousness isn't discrete, and a human has more of it, or it is, and a human has more levels of consciousness), and that their emotions are what they appear to be.
Armok_GoB120

Long ago, when I was immensely less rational, I actually strongly believed something very similar to this, and acted on this belief by trying to stop my mind from creating models of people. I still feel uneasy about creating highly detailed characters. I probably would go "I knew it!" if the AI said this.

5RobinZ
Upvoted for reminding me of 1/0 (read through 860).
0Kaj_Sotala
Changed it to "develop". (I think "invent" was what I originally meant, but "develop" makes more sense.)