Comments

Zed12y40

I think "strategy" is better than "wisdom". I think "wisdom" is associated with cached Truths and signals superiority. This is bad because this will make our audience too hostile. Strategy, on the other hand, is about process, about working towards a goal, and it's already used in literature in the context of improving one's decision making process.

You can get away with saying things like "I want to be strategic about life", meaning that you want to make choices in such a way that you're unlikely to regret them at a later stage. Or I can say "I want to become a more strategic thinker" and it's immediately obvious that I care about reaching goals and that I'm not talking about strategy for the sake of strategy (I happen to care about strategy out of the virtue of curiosity, but this too is fine). The list goes on: "we need to reconsider our strategy for education", "we're not being strategic enough about health care -- too many people die unnecessarily". None of these statements put our audience on guard or make us look like unnatural weirdos. [1]

The most important thing is that "irrational" is perceived as an insult and comes way too close to the sexist "emotional/hormonal" label used to dismiss women. Even setting the sexism aside, saying "whatever, you're just being irrational" is just as bad as saying "whatever, you're just being hormonal". It's the worst possible thing to say, and when you have a habit of using the word "rational" a lot it's way too easy to slip up.

[1] Fun exercise: substitute "rationality" for "strategy" in the examples above and see how much more Spock-like it all sounds.

Zed12y00

That comic is my source too. I just never considered taking it at face value (too many apparent contradictions). My bad for mind projection.

Zed12y20

Does Mount Stupid refer to the observation that people tend to talk loudly and confidently about subjects they barely understand (but not about subjects they understand so poorly that they know they must understand them poorly)? In that case, yes, once you stop opining, the phenomenon (Mount Stupid) goes away.

Mount Stupid has a very different meaning to me. It refers to the idea that "feeling of competence" and "actual competence" are not linearly correlated. You can gain a little in actual competence and gain a LOT in terms of "feeling of competence". This is when you're on Mount Stupid. Then, as you learn more, your feeling of competence and actual competence sort of converge.

The picture that puts "Willingness to opine" on the Y-axis is, in my opinion, a funny observation of the phenomenon that people who learn a little bit about a subject become really vocal about it. It's just a funny way to visualize the real insight (Δ feeling of competence != Δ actual competence) in a way that connects with people, because we can probably all remember when we made that specific mistake (talking confidently about a subject we knew little about).

Zed12y00

I don't think so, because my understanding of the topic didn't improve -- I just don't want to make a fool out of myself.

I've moved beyond Mount Stupid on the meta level, the level where I can now tell more accurately whether my understanding of a subject is lousy or OK. On the subject level I'm still stupid, and my reasoning, if I had to write it down, would still make my future self cringe.

The temptation to opine is still there and there is still a mountain of stupid to overcome, and being aware of this is in fact part of the solution. So for me Mount Stupid is still a useful memetic trick.

Zed12y190
  1. Macroeconomics. My opinion and understanding used to be based on undergrad courses and a few popular blogs. I understood much more than the "average person" about the economy (so say we all) and therefore believed that my opinion was worth listening to. My understanding is much better now, but I still lack a good understanding of the fundamentals (because textbooks disagree so violently on even the most basic things). If I talk about the economy I phrase almost everything in terms of "Economist Y thinks X leads to Z because of A, B, C." This keeps the different schools of economics from blending together into some incomprehensible mess.

  2. QM. Still on Mount Stupid, and I know it. I have to bite my tongue not to debate Many Worlds with physics PhDs.

  3. Evolution. Definitely on Mount Stupid. I know this because I used to think "group pressure" was a good argument until EY persuaded me otherwise. I haven't studied evolution since, so I must be on Mount Stupid still.

Aside from being aware of the concept of Mount Stupid I have not changed my behavior all that much. If I keep studying I know I'm going to get beyond Mount Stupid eventually. The faster I study, the less time I spend on top of Mount Stupid and the less likely I am to make a fool out of myself. So that's my strategy.

I have become much more careful about monitoring my own cognitive processes: am I saying this just to win the argument? Am I looking specifically for arguments that support my position, and if so, am I sure I'm not rationalizing? So in that respect I've improved a little. It's probably the most valuable sort of introspection, and one that even well-educated and intelligent people typically lack.

One crucial point about Mount Stupid that hasn't been mentioned here yet is that it applies every time you "level up" on a subject. Each time you level up you find yourself in a new valley with a new Mount Stupid to cross. You can be an expert frequentist rationalist but a lousy Bayesian rationalist, and by learning a little about Bayesianism you can become stupider: you're good at distinguishing good from bad frequentist reasoning, but you can't tell the difference for Bayesian reasoning (and if you don't know you can't tell the difference, you're also on Meta Mount Stupid).

Zed12y20

[a "friendly" AI] is actually unFriendly, as Eliezer uses the term

Absolutely. I used "friendly" AI (with scare quotes) to denote that it's not really FAI, but I don't know if there's a better term for it. It's not the same as uFAI, because Eliezer's personal utopia is not likely to be valueless by my standards, whereas a generic uFAI is terrible from any human point of view (paperclip universe, etc.).

Zed12y20

Game theory. If different groups compete in building a "friendly" AI that respects only their personal coherent extrapolated volition (their extrapolated sensible desires), then cooperation is no longer an option because the other teams have become "the enemy". I have a value system that is substantially different from Eliezer's. I don't want a friendly AI that is created in some researcher's personal image (except, of course, if it's created based on my ideals). This means that we have to sabotage each other's work to prevent the other researchers from getting to friendly AI first, because the moment somebody reaches "friendly" AI the game is over and all parties except one lose. And if we get uFAI, everybody loses.

That's a real problem, though. If different factions in friendly AI research have to destructively compete with each other, then the probability of unfriendly AI increases. That's really bad. From a game theory perspective all FAI researchers agree that any version of FAI is preferable to uFAI, and yet they'd be working towards a future where uFAI becomes more and more likely! Luckily, if the FAI researchers target the coherent extrapolated volition of all of humanity, the problem disappears. All FAI researchers can work towards a common goal that fairly represents all of humanity, not some specific researcher's version of "FAI". It also removes the problem of differing morals/values. Some people believe we should look at total utility, other people believe we should consider only average utility. Some people believe abstract values matter, some people believe consequences of actions matter most. Here too, an AI that looks at a representative set of all human values is the solution that everyone can agree on as most "fair". Cooperation beats defection.
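A toy payoff table makes this concrete. This is only a sketch of the argument as I understand it; the numbers are invented and only their ordering matters.

```python
# A toy version of the cooperate-on-CEV argument above. Payoff numbers
# are invented; only their ordering matters.
#
# Each team chooses to Cooperate (work together on a CEV-style FAI for
# all of humanity) or Defect (race for an FAI tuned to its own values).
# Racing and mutual sabotage make uFAI more likely, which everyone
# counts as the worst outcome, so that risk is priced into the payoffs.

payoffs = {
    # (team_a_choice, team_b_choice): (payoff_a, payoff_b)
    ("cooperate", "cooperate"): (8, 8),   # shared CEV-based FAI
    ("cooperate", "defect"):    (1, 6),   # B's values win, but uFAI risk is up
    ("defect",    "cooperate"): (6, 1),   # A's values win, but uFAI risk is up
    ("defect",    "defect"):    (0, 0),   # mutual sabotage, uFAI most likely
}

for (a, b), (pa, pb) in payoffs.items():
    print(f"A {a:9} / B {b:9} -> A: {pa}, B: {pb}")

# With these made-up numbers, cooperating is each team's best response to
# either choice by the other team (8 > 6 and 1 > 0): that is the sense in
# which cooperation beats defection once the uFAI risk of racing is counted.
```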

If Luke were to attempt to create a LukeFriendlyAI, he'd know he's defecting from the game-theoretically optimal strategy and thereby increasing the probability of a world with uFAI. If Luke is aware of this and chooses to continue on that course anyway, then he's just become another uFAI researcher who actively participates in the destruction of the human species (to put it dramatically).

We can't force all AI programmers to focus on the FAI route. We can try to raise the sanity waterline and explain to AI researchers that the game-theoretically optimal strategy is the one we ought to pursue, because it's most likely to lead to a fair FAI based on all of our human values. We just have to cooperate, despite differences in beliefs and moral values. CEV is the way to accomplish that, because it doesn't privilege the AI researchers who write the code.

Zed12y10

If you're certain that belief A holds, you cannot change your mind about it in the future. The belief cannot be "defeated", in your parlance. So, given that you can be exposed to information that will lead you to change your mind, we conclude that you weren't absolutely certain about belief A in the first place. So how certain were you? Well, this is something we can express as a probability. You're not 100% certain a tree in front of you is, in fact, really there, exactly because you realize there is a small chance you're drugged or otherwise cognitively incapacitated.

So as you come into contact with evidence that contradicts what you believe you become less certain your belief is correct, and as you come into contact with evidence that confirms what you believe you become more confident your belief is correct. Apply Bayes' rule for this (for links to Bayes and Bayesian reasoning see other comments in this thread).
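A minimal sketch of that kind of update, with every probability invented purely for illustration:

```python
# A minimal sketch of the Bayesian update described above; all numbers
# here are invented purely for illustration.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | evidence) via Bayes' rule."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Start fairly confident the tree in front of you is really there, but not certain.
belief = 0.95

# Confirming evidence (say, touching the tree) pushes the belief up...
belief = update(belief, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(belief)

# ...and contradicting evidence (say, learning you were drugged) pushes it down,
# but neither kind of update ever takes you to exactly 0 or 1.
belief = update(belief, p_e_given_h=0.2, p_e_given_not_h=0.8)
print(belief)
```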

I've just read a couple of pages of Defeasible Reasoning by Pollock and it's a pretty interesting formal model of reasoning. Pollock argues, essentially, that Bayesian epistemology is incompatible with deductive reasoning (pg 15). I semi-quote: "[...] if Bayesian epistemology were correct, we could not acquire new justified beliefs by reasoning from previously justified beliefs" (pg 17). I'll read the paper, but this all sounds pretty ludicrous to me.

Zed12y300

Looks great!

I may be alone in this, and I haven't mentioned this before because it's a bit of a delicate subject. I assume we all agree that first impressions matter a great deal, and that appearances play a large role in that. I think that, how to say this, ehm, it would, perhaps, be in the best interest of all of us, if you could use photos that don't make the AI thinkers give off this serial killer vibe.

Zed12y120

I second Manfred's suggestion about the use of beliefs expressed as probabilities.

In puzzle (1) you essentially have a proof for T and a proof for ~T. We don't wish the order in which we're exposed to the evidence to influence us, so the correct conclusion is that you should simply be confused*. Thinking in terms of "belief A defeats belief B" is a bit silly, because then you get situations where you're certain T is true, the next day you're certain ~T is true, and the day after that you're certain again that T is true after all. So should beliefs defeat each other in this manner? No. Is it rational? No. Does the order in which you're exposed to the evidence matter? No.

In puzzle (2) the subject is certain a proposition is true (even though he's still free to change his mind!). However, accepting contradicting evidence leads to confusion (as in puzzle 1), and to mitigate this the construct of "Misleading Evidence" is introduced, which defines everything that contradicts the currently held belief as misleading. This obviously leads to Status Quo Bias of the worst form. The "proof" that comes first automatically defeats all evidence from the future, thereby making sure that no confusion can occur. It even serves as a Universal Counterargument ("if that were true I'd believe it, and I don't believe it, therefore it can't be true"). This is a pure act of rationalization, not of rationality.

*) meaning that you're not completely confident of either T or ~T.
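To make the order-independence point in puzzle (1) concrete, here's a minimal sketch (with made-up likelihood ratios) of why the order of the two proofs can't matter to a Bayesian:

```python
# A minimal sketch (with invented likelihood ratios) of why the order of
# evidence shouldn't matter: Bayesian updates multiply odds, and products commute.

def update(prob, likelihood_ratio):
    """Update P(T) by a likelihood ratio (odds form of Bayes' rule)."""
    odds = prob / (1.0 - prob)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 0.5
proof_for_T = 20.0       # evidence favouring T at 20:1
proof_against_T = 0.1    # evidence favouring ~T at 10:1

# Proof for T first, then the proof against it...
a = update(update(prior, proof_for_T), proof_against_T)
# ...versus the proof against T first, then the proof for it.
b = update(update(prior, proof_against_T), proof_for_T)

# Both orders land on the same moderately-confused posterior (up to
# floating-point rounding), rather than flip-flopping to certainty.
print(round(a, 9), round(b, 9))
```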
