All of Mahdi Complex's Comments + Replies

This doesn't feel like it's really engaging at all with the content of the post. I don't mention "legitimacy" 10 times for nothing.

It was meant as an April Fool's in the same way the Death with Dignity post was an April Fool's.

trying to make it look like belief in witchcraft is very similar to belief in viruses.

I feel like you're missing the point. Of course the germ theory of disease is superior to 'witchcraft.' However, in the average person's use of the term 'virus,' the understanding of what is actually going on is almost as shallow as 'witchcraft.' 'Virus' does point toward a much deeper and more important scientific understanding of what is going on, but in its everyday use, it serves the same role as 'witchcraft.'

The point of the quote is that sometimes, when ... (read more)

2Joachim Bartosik
Yes, I felt that I was missing a point; thank you for pointing to the thing you found interesting in it. It is a thing that makes sense. But I think the quote doesn't point at it very well. First, a big chunk of it asserts that belief in the witchcraft theory of disease is similar to belief in the germ theory of disease. (I don't know how well the average person understands what viruses are.) Second, where it talks about convincing people by making concepts similar, it's weird. For example, influenza is a much smaller risk than cholera (a quick search says the CFR for untreated cholera is 25-50%, versus 0.1% for flu), and boiling water is much less costly than slaughtering sheep (which is likely to result in prison time where I live). (EDIT to add: I didn't check those numbers, so don't trust them too much; they're just the first numbers I could find, and they roughly match my expectations.) Again, thanks for explaining. (At least for me) your comment made the point much better than the quote in the post.
2Capybasilisk
It's also an interesting example of where consequentialist and Kantian ethics would diverge. The consequentialist would argue that it's perfectly reasonable to lie (according to your understanding of reality) if it reduces the number of infants dying and suffering. Kant, as far as I understand, would argue that lying is unacceptable even in such clear-cut circumstances. Perhaps a Kantian would say that the consequentialist is actually increasing suffering by playing along with and encouraging a system of belief they know to be false. They may reduce infant mortality in the near term, but the culture might feel vindicated in its beliefs and proceed to kill more suspected "witches" to speed up the process of healing children.

I didn't mean to make 1. sound bad. I'm only trying to put my finger on a crux. My impression is that most prosaic alignment work has 2. in mind, even though MIRI/Bostrom/LW seem to believe that 1. is actually what we should be aiming towards. Do prosaic alignment people think that work on human 'control' now will lead to scenario 1 in the long run, or do they just reject scenario 1?

2Victor Novikov
I'm not sure I understand the "prosaic alignment" position well enough to answer this. I guess, personally, I can see appeal of scenario 2, of keeping a super-optimizer under control and using it in limited ways to solve specific problems. I also find that scenario incredibly terrifying, because super-optimizers that don't optimize for the full set of human values are dangerous.

I'm just confused about what "optimized for leaving humans in control" could even mean. If a Superintelligence is so much more intelligent than humans that it could find a way, without explicit coercion, for humans to ask it to tile the universe with paper-clips, then "control" seems like a meaningless concept. You would have to force the Superintelligence to treat the human skull, or whatever other boundary of human decision making, as some kind of inviolable and uninfluenceable black box.

3tailcalled
This basically boils down to the alignment problem. We don't know how to specify what we want, but that doesn't mean it is necessarily incoherent. Treating the human skull as "some kind of inviolable and uninfluenceable black box" seems to get you some of the way there, but of course is problematic in its own ways (e.g. you wouldn't want delusional AIs). Still, it seems to point toward the path forward in a way.
2Rafael Harth
I think control is a meaningful concept. You could have AI that doesn't try to alter your terminal goals. Something that just does what you want (not what you ask, since that has well-known failure modes) without trying to persuade you into something else. The difficulty of building such a system is another question, alas.

I'm a little worried about what might happen if different parts of the community end up with very different timelines, and thus very divergent opinions on what to do.

It might be useful if we came up with some form of community governance mechanism or heuristics to decide when it becomes justified to take actions that might be seen as alarmist by people with longer timelines. On the one hand, we want to avoid stuff like the unilateralist’s curse, on the other, we can't wait for absolutely everyone to agree before raising the alarm.

Eli Tyre*200

One probably-silly idea: we could maybe do some kind of trade. Long-timelines people agree to work on short-timelines people's projects for the next 3 years. Then, if the world isn't destroyed, the short-timelines people work on the long-timelines people's projects for the following 15 years. Or something.

My guess is that the details are too fraught to get something like this to work (people will not be willing to give up so much value), but maybe there's a way to get it to work.

For China, the Taliban and the DPRK, I think Fukuyama would probably argue that they don't necessarily disprove his theses, but it's just that it's taking much longer for them to liberalize than he would have anticipated in the 90s (he also never said that any of this was inevitable).

For Mormons in Utah, I don't think they really pose a challenge, since they seem to quite happily exist within the framework of a capitalist liberal democracy.

Technology, and AGI in particular, is indeed the most credible challenge and may force us to reconsider some high-stak... (read more)

Instrumentally, an invisible alpha provides a check on the power of the actual alpha. A king a few centuries ago may have had absolute power, but he still couldn't simply act against what people understood to be the will of the actual alpha (God).

Thank you. I really appreciate this clarification.

I meant God Is Great as a strong endorsement of LessWrong. I am aware that establishing an analogy with religion is often used to discredit ideas and movements, but one of the things I want to push back against is that this move is necessarily discrediting. But this requires a lot of work (historical background on how religions got to occupy the place they do today within culture, classical liberal political philosophy...) on my part to explain why I think so, and why in the case of EA/LW, I think the compa... (read more)

6DirectedEvolution
Framing, research, and communication are all skills that take practice! I hope you'll ultimately find this a helpful space to build your skills :)

No, it's just that we've rejected the concept of "God" as wrong, i.e. not in accordance with reality. Some ancient questions really are solved, and this is one of them. Calling reality "God" doesn't make it God, any more than calling a dog's tail a leg makes it a leg. The dog won't start walking on it.

The disagreement here seems to be purely over definitions? The way I use "God" to mean "reality" is the same way Scott Alexander uses "Moloch" to mean "coordination failure" or how both Yudkowsky and Scott Alexander have used "God" to mean "evolution."

Dagon130

The disagreement here seems to be purely over definitions? The way I use "God" to mean "reality" is the same way Scott Alexander uses "Moloch" to mean "coordination failure" or how both Yudkowsky and Scott Alexander have used "God" to mean "evolution."

Umm, ok?  Using misleading terms and then complaining that you don't generate good discussion seems unlikely to succeed here.

I don't know, but a straightforward propositional post (using more standard terms, or using a lot of non-poetic words to define rather than describe your points) might get some goo... (read more)

Thank you, I find this comment quite constructive.

My understanding of neuroscience has convinced me that consciousness is fundamentally dependent on the brain.

I had a similar journey.

The Durkheimian "society worshiping itself" phenomenon is real, common, and by no means limited to religion as traditionally defined. It is often wildly irrational and is pretty much the opposite of what LW aspires to.

I guess this can take a pretty nasty and irrational form, but I see it as continuous with other benign community bonding rituals and pro-social behavior (like Petrov Day or the solstice).

3Jay
I should mention that, like many people who were raised religious and lost their faith, I miss it.  It was comforting to believe that the world was in good hands and that it all could work out in the end.  I had friends at church.  Many of them were attractive females. Losing my religion felt less like an act of will and more like figuring out the answer to a math problem.  It wasn't something I wanted, rather the opposite.  I fought it for a while, but there's no cure for enlightenment.  I've tried to go back to church, but it just doesn't work when you don't believe in it.  I no longer see God there, just some schmuck wearing felt.
1Jay
I agree, I just think that community bonding rituals have such a strong tendency to lead to ingroup-vs-outgroup conflicts that I am much more skeptical of the whole idea than you seem to be. Part of this is my perception that generally neither group is entirely right about every issue, and therefore no group I pick will have my wholehearted support. This is acceptable; compromise on less crucial matters is often the price of working toward your most important goals. Having said that, I think it's important to remember what your important goals are and to periodically ask yourself whether the gains are still worth the compromises. Durkheimian worship is rather directly contrary to this sort of cost-benefit analysis. Or it could just be that I'm Aspergian, and my normal modes of thinking are highly anti-correlated with religion.

my impression was that in your post the payload is missing

Okay, that seems fair. It is true that just from that post, it's unclear what my point is (see hypothesis 1).

I think it matters how we construct our mythical analogies, and in Scott Alexander's Moloch, he argues that we should "kill God" and replace it with Elua, the god of human values. I think this is the wrong way to frame things. I assume that Scott uses 'God' to refer to the blind idiot god of evolution. But that's a very uncharitable and in my opinion unproductive way of constructing our my... (read more)

9Vladimir_Nesov
It doesn't matter if a discussion is sympathetic or not, that's not relevant to the problem I'm pointing out. Theism is not even an outgroup, it's too alien and far away to play that role. Anti-epistemology is not a label for bad reasoning or disapproval of particular cultures, it's the specific phenomenon of memes and norms that promote systematically incorrect reasoning, where certain factual questions end up getting resolved to false answers, resisting argument or natural intellectual exploration, certain topics or claims can't be discussed or thought about, and meaningless nothings hog all attention. It is the concept for the vectors of irrationality, the foundation of its staying power.

No, he is making a different and more precise claim.

There is a phenomenon that can be called "anti-epistemology." This is a set of social forces that penalize or otherwise impede clear thought and speech.

Sometimes, a certain topic in a certain space is free of anti-epistemology. It is relatively easy to think about, research, and discuss it clearly. A central example would be the subject of linear algebra in the context of a class on linear algebra.

Other times, anti-epistemology makes thought, research, and discussion difficult for a certain topic in a cer... (read more)

'Invisible alpha' seems like a big step up over actual alpha on the ladder of cultural evolution.

In the end, reality itself has always been the ultimate arbiter of any claim to truth or authority.

2Richard_Kennaway
Epistemically, invisible alpha is a retreat from the observed absence of visible alpha, saving the hypothesis by redefining it in just the ways required to evade the evidence. This is the big step up.

You could think of the aim of this post as trying to steelman theism to a rationalist, while simultaneously steelmanning EA/rationalism/... to a theist.

Why “God”? Part of this exercise is to examine how people have used the word “God” throughout history, look at what purpose the concept has served, and, for example, observe how similar the ‘God’ concept is to the rationalist ‘reality’ concept. Arguably, the way people used ‘God’ a thousand years ago is closer to our “reality” concept than to the way many people use ‘God’ today. It is interesting to see what happens when you decide to take certain compatibilist definitions of God seriously.

5Viliam
An invisible alpha male who commands your tribe what to eat, when to have sex, and whom to kill. Later: creator and manager of reality. Much later: reality itself... but also, in some mysterious way, all of the above.
5Gunnar_Zarncke
I read it like that. It could be clearer about that, though. The way it is written right now pattern-matches easily with religion. This is partly due to the quotes, I think, though the quotes are also what facilitates the bridge between theists and non-theists. I think it is useful to have words and concepts that overlap in the domains.