Does Mount Stupid refer to the observation that people tend to talk loudly and confidently about subjects they barely understand (but not about subjects they understand so poorly that they know their understanding is poor)? In that case, yes, once you stop opining, the phenomenon (Mount Stupid) goes away.
Mount Stupid has a very different meaning to me: it refers to the idea that "feeling of competence" and "actual competence" are not linearly correlated. You can gain a little in actual competence and gain a LOT in terms of "...
I don't think so, because my understanding of the topic didn't improve -- I just don't want to make a fool out of myself.
I've moved beyond Mount Stupid on the meta level, the level where I can now tell more accurately whether my understanding of a subject is lousy or OK. On the subject level I'm still stupid, and my reasoning, if I had to write it down, would still make my future self cringe.
The temptation to opine is still there, and there is still a mountain of stupid to overcome; being aware of this is in fact part of the solution. So for me Mount Stupid is still a useful memetic trick.
Macroeconomics. My opinion and understanding used to be based on undergrad courses and a few popular blogs. I understood much more than the "average person" about the economy (so say we all) and therefore believed that my opinion was worth listening to. My understanding is much better now, but I still lack a good understanding of the fundamentals (because textbooks disagree so violently on even the most basic things). If I talk about the economy I phrase almost everything in terms of "Economist Y thinks X leads to Z because of A, B, C."
[a "friendly" AI] is actually unFriendly, as Eliezer uses the term
Absolutely. I used "friendly" AI (with scare quotes) to denote that it's not really FAI, but I don't know if there's a better term for it. It's not the same as uFAI, because Eliezer's personal utopia is not likely to be valueless by my standards, whereas a generic uFAI is terrible from any human point of view (paperclip universe, etc.).
Game theory. If different groups compete in building a "friendly" AI that respects only their personal coherent extrapolated volition (extrapolated sensible desires) then cooperation is no longer an option, because the other teams have become "the enemy". I have a value system that is substantially different from Eliezer's. I don't want a friendly AI that is created in some researcher's personal image (except, of course, if it's created based on my ideals). This means that we have to sabotage each other's work to prevent the other resea...
If you're certain that belief A holds, you cannot change your mind about it in the future. The belief cannot be "defeated", in your parlance. So given that you can be exposed to information that will lead you to change your mind, we conclude that you weren't absolutely certain about belief A in the first place. So how certain were you? Well, this is something we can express as a probability. You're not 100% certain a tree in front of you is, in fact, really there, precisely because you realize there is a small chance you're drugged or otherwise cogn...
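To spell out the "cannot change your mind" part with standard Bayes (A is the belief, E is any piece of evidence with P(E) > 0; a minimal sketch, nothing more):

```latex
% If P(A) = 1 then P(\neg A) = 0, so for any evidence E with P(E) > 0:
P(E) = P(E \mid A)\,P(A) + P(E \mid \neg A)\,P(\neg A) = P(E \mid A)
\quad\Longrightarrow\quad
P(A \mid E) = \frac{P(E \mid A)\,P(A)}{P(E)} = \frac{P(E \mid A)}{P(E \mid A)} = 1 .
```

So a belief held with probability exactly 1 stays at 1 no matter what evidence arrives; only probabilities strictly between 0 and 1 can actually move.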
Looks great!
I may be alone in this, and I haven't mentioned this before because it's a bit of a delicate subject. I assume we all agree that first impressions matter a great deal, and that appearances play a large role in that. I think that, how to say this, ehm, it would, perhaps, be in the best interest of all of us, if you could use photos that don't make the AI thinkers give off this serial killer vibe.
I second Manfred's suggestion about the use of beliefs expressed as probabilities.
In puzzle (1) you essentially have a proof for T and a proof for ~T. We don't wish the order in which we're exposed to the evidence to influence us, so the correct conclusion is that you should simply be confused*. Thinking in terms of "Belief A defeats belief B" is a bit silly, because you then get situations where you're certain T is true, and the next day you're certain ~T is true, and the day after that you're certain again that T is true after all. So should b...
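A minimal sketch of the order-independence point, treating the proof of T and the proof of ~T as two pieces of probabilistic evidence that are conditionally independent given T (the likelihood numbers are made up for illustration):

```python
def update(prior, likelihood_true, likelihood_false):
    """One Bayesian update: returns P(T | evidence) from P(T) and the two likelihoods."""
    numerator = likelihood_true * prior
    return numerator / (numerator + likelihood_false * (1 - prior))

prior = 0.5                    # start from ignorance about T
e1 = (0.9, 0.2)                # P(E1|T), P(E1|~T): evidence favouring T
e2 = (0.1, 0.8)                # P(E2|T), P(E2|~T): evidence favouring ~T

order_a = update(update(prior, *e1), *e2)   # see E1 first, then E2
order_b = update(update(prior, *e2), *e1)   # see E2 first, then E1

print(order_a, order_b)        # identical posteriors: the order of exposure doesn't matter
```

Instead of flip-flopping between certainty in T and certainty in ~T, you end up at one intermediate probability, which is the formal version of "simply be confused".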
My view about global rationality is similar to John Baez's view about individual risk-aversion. An individual should typically be cautious because the maximum downside (destruction of your brain) is huge even for day-to-day actions like crossing the street. In the same way, we have only one habitable planet and one intelligent species. If we (accidentally) destroy either, we're boned. Especially when we don't know exactly what we're doing (as is the case with AI), caution should be the default approach, even if we were completely oblivious to the ...
From the topic, in this case "selection effects in estimates of global catastrophic risk". If you casually mention that you don't particularly care about humans, or that personally killing a bunch of them may be an effective strategy, the discussion is effectively hijacked. So it doesn't matter that you don't wish to do anybody harm.
Let G be a grad student with an IQ of 130 and a background in logic/math/computing.
Probability: The quality of life of G will improve substantially as a consequence of reading the sequences.
Probability: Reading the sequences is a sound investment for G (compared to other activities).
Probability: If every person on the planet were trained in rationality (as far as IQ permits) humanity would allocate resources in a sane manner.
P(Simulation) < 0.01; there is little evidence in favor of it, and it requires both that there is some other intelligence doing the simulation and that there can be the kind of fault-tolerant hardware that can (flawlessly) compute the universe. I don't think our posthuman descendants are capable of running a universe as a simulation, though I think Bostrom's simulation argument is sound.
1 - P(Solipsism) > 0.999; My mind doesn't contain minds that are consistently smarter than I am and can out-think me on every level.
P(Dreaming) < 0.001; We don't dream of meticulously filling out tax forms and doing the dishes.
[ Probabilities are not discounted for expecting to come into contact with additional evidence or arguments ]
I know several people who moved to Asia to work on their internet startup. I know somebody who went to Asia for a few months to rewrite the manuscript of a book. In both cases the change of scenery (for inspiration) and low cost of living made it very compelling. Not quite the same as Big Thinking, but it's close.
When you say "I have really thought about this a considerable amount", I hear "I have diagnosed the problem quite a while ago and it's creating a pit in my stomach but I haven't taken any action yet". I can't give you any points for that.
When you're dealing with a difficult problem and you're an introspective person, it's easy to get stuck in a loop where you keep going through the same sorts of thoughts. You realize you're not making much progress, but the problem remains, so you feel obligated to think about it some more. You should t...
As far as I can tell you identify two options: 1) continue doing the PhD you don't really enjoy 2) get a job you won't really enjoy.
Surely you have more options!
3) You can just do a PhD in theoretical computer vision at a different university.
4) You can work 2 days a week at a company and do your research at home for the remaining 4 days.
5) Become unemployed and focus on your research full time.
6) Save some money and then move to Asia, South America or any other place with very low cost of living so you can do a few years of research full time.
7) Join a star...
Thanks for the clarifications.
Honestly, I don't have a clear picture of what exactly you're saying ("qualia supervene upon physical brain states"?) and we would probably have to taboo half the dictionary to make any progress. I get the sense you're on some level confused or uncomfortable with the idea of pure reductionism. The only thing I can say is that what you write about this topic has a lot of surface level similarities with the things people write when they're confused.
Just to clarify, does "irreducible" in (3) also mean that qualia are therefore extra-physical?
I assume that we are all in agreement that rocks do not have qualia and that dead things do not have qualia and that living things may or may not have qualia? Humans: yes. Single cell prokaryotes: nope.
So doesn't that leave us with two options:
1) Evolution went from single cell prokaryotes to Homo Sapiens and somewhere during this period the universe went "plop" and irreducible qualia started appearing in some moderately advanced species.
2) Qua...
My first assumption is that almost everything you post is seen as (at least somewhat) valuable (for almost every post #upvotes > #downvotes), so the net karma you get is mostly based on throughput. More readers, more votes. More votes, more karma.
Second, useful posts do not only take time to write, they take time to read as well. And my guess is that most of us don't like to vote on thoughtful articles before we have read them. So for funny posts we can quickly make the judgement on how to vote, but for longer posts it takes time.
Decision fatigue may al...
All the information you need is already out there, and I have this suspicion you have probably read a good deal of it. You probably know more about being happy than everybody else you know and yet you're not happy. You realize that if you're a smart rational agent you should just be able to figure out what you want to do and then just do it, right?
There is no step (3). So why does it feel more complex than it really is?
What is the kind of response you're really lookin...
Questions about deities must fade away just like any other issue fades away after it's been dissolved.
Compartmentalization is the last refuge for religious beliefs in an educated person. Once compartmentalization is outlawed there is no defense left. The religious beliefs just have to face a confrontation with the rational part of the brain, and then they will evaporate.
If somebody has internalized the sequences they must (at least):
Thanks for the additional info and explanation. I have some books about QM on my desk that I really ought to study in depth...
I should mention though that what you state about needing only a single world is in direct contradiction to what EY asserts: "Whatever the correct theory is, it has to be a many-worlds theory as opposed to a single-world theory or else it has a special relativity violating, non-local, time-asymmetric, non-linear and non-measure-preserving collapse process which magically causes blobs of configuration space to instantly vanish [....
The collapse of the wave function is, as far as I understand it, conjured up because the idea of a single world appeals to human intuition (even though there is no reason to believe the universe is supposed to make intuitive sense). My understanding is that regardless of the interpretation you put behind the quantum measurements you have to calculate as if there are multiple worlds (i.e. a subatomic particle can interfere with itself), and the collapse of the wave function is something you have to assume on top of that.
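To illustrate what "calculate as if there are multiple worlds" means in the simplest case, here is a toy two-path (double-slit style) sketch: you keep an amplitude for each indistinguishable path and add them before squaring, which is where the interference term comes from. The numbers are arbitrary and unnormalized; this is not a real physical setup.

```python
import cmath

# Complex amplitudes for the particle reaching the same detector via path 1 or path 2.
amp_path1 = 0.6 + 0.0j
amp_path2 = 0.6 * cmath.exp(1j * cmath.pi)   # same magnitude, opposite phase

# Quantum rule: add the amplitudes of indistinguishable alternatives, then square.
p_interference = abs(amp_path1 + amp_path2) ** 2        # ~0: destructive interference

# "Single definite path" reasoning: add the probabilities of each path separately.
p_no_interference = abs(amp_path1) ** 2 + abs(amp_path2) ** 2   # 0.72: no interference

print(p_interference, p_no_interference)
```

The observed statistics match the first calculation, not the second, which is why both branches have to be carried through the math whatever interpretation you attach to them afterwards.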
8 minute clip of EY talking with Scott Aaronson about Schrödinger's Cat
Yep, the box is supposed to be a completely sealed-off environment so that the contents of the box (cat, cyanide, Geiger counter, vial, hammer, radioactive atoms, air for the cat to breathe) cannot be affected by the outside world in any way. The box isn't a magical box, simply one that seals really well.
The stuff inside the box isn't special. So the particles can react with each other. The cat can breathe. The cat will die when exposed to the cyanide. The radioactive material can trigger the Geiger counter, which triggers the hammer, which breaks the vial, which releases the cyanide, which causes the cat to die. Normal physics, but in a box.
Schrödinger's cat is a thought experiment. The cat is supposed to be real in the experiment. The experiment is supposed to be seen as silly.
People can reason through the math at the level of particles and logically there should be no reason why the same quantum logic wouldn't apply to larger systems. So if a bunch of particles can be entangled and if on observation (unrelated to consciousness) the wavefunction collapses (and thereby fully determines reality) then the same should be able to happen with a particle and a more complex system, such as a real li...
If you're starting out (read: don't yet know what you're doing) then optimize for not getting injured. If you haven't done any weight lifting then you'll get results even if you start out slowly.
Optimize for likelihood of you not quitting. If you manage to stick to whatever plan you make you can always make adjustments where necessary. Risk of quitting is the #1 bottleneck.
Personally, I think you shouldn't look for supplements until you feel you've reached a ceiling with regular workouts. Starting with a strict diet (measure everything) is a good idea if you're serious about this.
Site looks great!
The first sentence is "Here you'll find scholarly material and popular overviews of intelligence explosion and its consequences." which parses badly for me, and it isn't clear whether it's supposed to be a title (what this site is about) or just a single-sentence paragraph. I think leaving it out altogether is best.
I agree with the others that the mouse-chimp-Einstein illustration is unsuitable because it's unlikely to communicate clearly to the target audience. I went through the slides of the "The Challenge of Friendly AI"...
Welcome to Less Wrong!
This may be stating the obvious, but isn't this exactly the reason why there shouldn't be a subroutine that detects "The AI wants to cheat its masters" (or any similar security subroutines)?
The AI has to look out for humanity's interests (CEV) but the manner in which it does so we can safely leave up to the AI. Take for analogy Eliezer's chess computer example. We can't play chess as well as the chess computer (or we could beat Grand Masters of chess ourselves) but we can predict the outcome of the chess game when we play ag...
Sure, unanimous acceptance of the ideas would be a worrying sign. Would it be a bad sign if we were 98% in agreement about everything discussed in the sequences? I think that depends on whether you believe that intelligent people, when exposed to the same arguments and the same evidence, should reach the same conclusion (Aumann's agreement theorem). I think that disagreement is in practice a combination of (a) bad communication, (b) misunderstanding of the subject material by one of the parties, (c) poor understanding of the philosophy of science, (d) emotions/si...
Thanks for the explanation, that helped a lot. I expected you to answer 0.5 in the second scenario, and I thought your model was that total ignorance "contaminated" the model such that something + ignorance = ignorance. Now I see this is not what you meant. Instead it's that something + ignorance = something. And then likewise something + ignorance + ignorance = something according to your model.
The problem with your model is that it clashes with my intuition (I can't find fault with your arguments). I describe one such scenario here.
My intuition...
I think I agree completely with all of that. My earlier post was meant as an illustration that once you say C = A & B you're no longer dealing with a state of complete ignorance. You're in complete ignorance of A and B, but not of C. In fact, C is completely defined as being the conjunction of A and B. I used the illustration of an envelope because as long as the envelope is closed you're completely ignorant about its contents (by stipulation), but once you open it that's no longer the case.
...The answer for all three envelopes is, in the case of co
It's purely a formality
I disagree with this bit. It's only purely a formality when you consider a single hypothesis, but when you consider a hypothesis that is composed of several parts, each of which uses the prior of total ignorance, then the 0.5 prior probability shows up in the real math (and that in turn affects the decisions you make).
I describe an example of this here: http://lesswrong.com/r/discussion/lw/73g/take_heed_for_it_is_a_trap/4nl8?context=1#4nl8
If you think that the concept of the universal prior of total ignorance is purely a formality, ...
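To make that concrete, here is a minimal sketch of how the 0.5 prior shows up when a hypothesis is the conjunction of two sub-claims we're totally ignorant about, as in the C = A & B case above. The 2:1 bet at the end is just a hypothetical decision the prior could affect, not anything from the discussion itself.

```python
p_a = 0.5          # prior of total ignorance for sub-claim A
p_b = 0.5          # prior of total ignorance for sub-claim B

p_c = p_a * p_b    # C is defined as "A and B"; under independence this is 0.25

# A toy decision the prior actually feeds into: accept a bet that pays 2:1 if C is true?
expected_value = p_c * 2 - (1 - p_c) * 1   # -0.25, so decline the bet
print(p_c, expected_value)
```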
In your example, before we have any information, we'd assume P(A) = 0.5, and after we have information about the alphabet and how X is constructed from the alphabet we can just calculate the exact value for P(A|B). So the "update" here just consists of replacing the initial estimate with the correct answer. I think this is also what you're saying, so I agree that in situations like these using P(A) = 0.5 as a starting point does not affect the final answer (but I'd still start out with a prior of 0.5).
I'll propose a different example. It's a bit contri...
I agree with everything you said (including the grandparent). Some of the examples you named are primarily difficult because of the ugh-field and not because of inferential distance, though.
One of the problems is that it's strictly more difficult to explain something than to understand it. To understand something you can just go through the literature at your own pace, look up everything you're not certain about, and so continue studying until all your questions are answered. When you want to explain something you have to understand it but you also have to...
Finally, on an empirical level, it seems like there are more false n-bit statements than true n-bit statements.
I'm pretty certain this intuition is false. It feels true because it's much harder to come up with a true statement from N bits if you restrict yourself to positive claims about reality. If you get random statements like "the frooble fuzzes violently" they're bound to be false, right? But for every nonsensical or false statement you also get the negation of a nonsensical or false statement. "not(the frooble fuzzes violently)"...
Legend:
S -> statements
P -> propositions
N -> non-propositional statements
T -> true propositions
F -> false propositions
I don't agree with condition S = ~T + T.
Because ~T + T is what you would call the set of (true and false) propositions, and I have readily accepted the existence of statements which are neither true nor false. That's N. So you get S = ~T + T + N = T + F + N = P + N
We can just taboo proposition and statement as proposed by komponisto. If you agree with the way he phrased it in terms of hypothesis then we're also in agreem...
As I see it, statements start with some probability of being true propositions, some probability of being false propositions, and some probability of being neither.
Okay. So "a statement, any statement, is as likely to be true as false (under total ignorance)" would be more accurate. The odds ratio remains the same.
The intuition that statements fail to be true most of the time is wrong, however, because, trivially, for every statement that is true its negation is false, and for every statement that is false its negation is true. (Statements that...
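For what it's worth, that symmetry can be stated in the notation of the legend above; this is just a restatement of the negation argument, not a new claim:

```latex
% Using the legend: P = T \cup F (propositions), and negation maps propositions to propositions.
\neg : P \to P, \qquad \neg(\neg s) = s, \qquad s \in T \iff \neg s \in F
% So negation restricted to T is a bijection onto F; for any finite collection of
% propositions closed under negation this gives
|T| = |F| .
```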
I assume that people in their pre-Bayesian days aren't even aware of the existence of the sequences, so I don't think they can use that to calculate their estimate. What I meant to get at is that it's easy to be really certain a belief is false if it's intuitively wrong (but not wrong in reality) and the inferential distance is large. I think it's a general bias that people are disproportionately certain about beliefs at large inferential distances, but I don't think that bias has a name.
(Not to mention that people are really bad at estimating inferential distance in the first place!)
Asperger's and anti-social tendencies are, as far as I can tell, highly correlated with low social status. I agree with you that the test also selects for people who are good at the sciences and engineering. Unfortunately, scientists and engineers also have low social status in Western society.
First Xachariah suggested I may have misunderstood signaling theory. Then Incorrect said that what I said would be correct assuming LessWrong readers have low status. Then I replied with evidence that I think supports that position. You probably interpreted what I said in a different context.
I think you were too convinced I was wrong in your previous message for this to be true. I think you didn't even consider the possibility that the complexity of a statement constitutes evidence, and that you had never heard the phrasing before. (Admittedly, I should have used the words "total ignorance", but still.)
Your previous post strikes me as a knee-jerk reaction. "Well, that's obviously wrong". Not as an attempt to seriously consider under which circumstances the statement could be true. You also incorrectly claimed I was an ignoramus r...
I chose the wording carefully, because "I want people to cut off my head" is funny, and the more general or more correct phrasing is not. But now that it has been thoroughly dissected...
Anyway, since you asked twice I'm going to change the way the first statement is phrased. I don't feel that strongly about it, and if you find it grating I'm also happy to change it to any other phrasing of your choosing.
I'm sorry if I contributed to an environment in which ideas are criticized too much.
I interpret your first post as motivated by a need to voice your dis...
In that case it's clear where we disagree because I think we are completely justified in assuming independence of any two unknown propositions. Intuitively speaking, dependence is hard. In the space of all propositions the number of dependent pairs of propositions is insignificant compared to the number of independent pairs. But if it so happens that the two propositions are not independent then I think we're saved by symmetry.
There are a number of different combinations of A and ~A and B and ~B but I think that their conditional "biases" all can...
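A minimal sketch (with an arbitrary grid of dependence strengths) of the kind of cancellation I have in mind, assuming a positive dependence of any given strength is exactly as likely as the equally strong negative one:

```python
# Model dependence between unknown propositions A and B as P(A|B) = 0.5 + d and
# P(A|~B) = 0.5 - d, with P(B) = 0.5.  By symmetry, strength +d is as likely as -d.
strengths = [-0.4, -0.2, 0.0, 0.2, 0.4]   # arbitrary symmetric grid of possible d's

p_conjunction = [(0.5 + d) * 0.5 for d in strengths]   # P(A and B) = P(A|B) * P(B)

average = sum(p_conjunction) / len(p_conjunction)
print(average)   # 0.25 -- the same value the independence assumption gives
```

So even if the pair happens to be dependent, ignorance about the direction of the dependence leaves the expected value of the conjunction where independence would have put it.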
I think "strategy" is better than "wisdom". I think "wisdom" is associated with cached Truths and signals superiority. This is bad because this will make our audience too hostile. Strategy, on the other hand, is about process, about working towards a goal, and it's already used in literature in the context of improving one's decision making process.
You can get away with saying things like "I want to be strategic about life", meaning that I want to make choices in such a way that I'm unlikely to regret them at a later...