Comments

Fun Fact: the vast majority of those 3^^^3 people would have to be duplicates of each other because that many unique people could not possibly exist.
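For a rough sense of scale (the specific physical bound below is my own illustrative assumption, not part of the original claim): 3^^^3 is a power tower of 3s roughly 7.6 trillion levels tall, and even a very generous upper bound on the number of physically distinguishable humans is already dwarfed by the fifth level of that tower.

$$3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3) = 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987 = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{\approx 7.6\times 10^{12}\ \text{threes}}$$

$$\text{distinct possible humans} \;\lesssim\; 2^{10^{45}} \approx 10^{3\times 10^{44}} \;\ll\; 3\uparrow\uparrow 5 \;\ll\; 3\uparrow\uparrow\uparrow 3$$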

A triad I just thought of today which seems definitely true:

Doesn't mind making small mistakes that might make him/her look somewhat silly / Cares about avoiding every level of mistake in order to acquire the cleanest possible reputation / Doesn't mind making small mistakes that might make him/her look somewhat silly.

I think it’s more along the lines of: people in the third stage have acquired and digested all the low-hanging and medium-hanging fruit that those in the second stage are still struggling to acquire, so advancing further is now really hard. They now seek sex and money/power partly because acquiring those will (in the long run) help them advance further in the areas they have currently put on hold, and partly because, of course, it’s also nice to have them.

But the hard part comes after you conquer the world. What kind of world are you thinking of creating?

Johan Liebert, Monster

Rationalists are still human, and we still have basic human needs.

I see this happen quite a bit: people (even other rationalists, apparently) seem to believe that rationalists should be advanced enough to not have basic psychological needs of this sort. This is a complete non sequitur. How does having accurate beliefs, good decision-making skills, and the rationalist attitude inhibit those basic needs? And why those needs and not even more basic needs, like comfort?

Michael Vassar has this quote on Twitter: “Every distinction wants to become the distinction between good and evil.” I’m sure I would have understood it differently had I not previously read the post from which (I believe) it originated:

Math/Logical style analysis seems like the original of the far-mode paradigm. Fiddling with things with your hands without explicit executive scrutiny over what you are doing while trusting in non-conscious cognitive processes to figure out a solution seems like the paradigm for near-mode thought. Both have an important place, but it seems to me that placing math in near mode is simply an attempt to place everything that works, or that you have affectively labeled as good, in near mode. Every distinction wants to become good versus evil.

I like to think of this as extreme artificiality. Humans have always attempted to either ignore or go against certain natural elements in order to flourish. It was never this fundamental, though. Logic has, at best, managed to straighten us out and make things better for us; at worst, it reaches conclusions that are of no practical consequence. If it ever told us that killing babies is good, we would of course have to check all the consequences of ignoring that logic. If we get lucky, it’s a logic that doesn’t really extend very far and doesn’t manifest many consequences, making it okay for us to exercise our extreme artificiality and set this logic aside. If we don’t get lucky, it’s a logic that branches out into many and severe negative consequences if it is not carried out (worse than killing babies), and then, weighing this logic, we would have to kill babies.