All of Zed's Comments + Replies

I think "strategy" is better than "wisdom". I think "wisdom" is associated with cached Truths and signals superiority. This is bad because this will make our audience too hostile. Strategy, on the other hand, is about process, about working towards a goal, and it's already used in literature in the context of improving one's decision making process.

You can get away with saying things like "I want to be strategic about life", meaning that I want to make choices in such a way that I'm unlikely to regret them at a later... (read more)

That comic is my source too. I just never considered taking it at face value (too many apparent contradictions). My bad for mind projection.

Does Mount Stupid refer to the observation that people tend to talk loudly and confidently about subjects they barely understand (but not about subjects they understand so poorly that they know they must understand them poorly)? In that case, yes, once you stop opining, the phenomenon (Mount Stupid) goes away.

Mount Stupid has a very different meaning to me. To me it refers to the idea that "feeling of competence" and "actual competence" are not linearly correlated. You can gain a little in actual competence and gain a LOT in terms of "... (read more)

1dlthomas
I understood it to come from here, but if there's another source or we wish to adopt a different usage I'm fine with that. Actual vs. perceived competence is probably a more useful comparison.

I don't think so, because my understanding of the topic didn't improve -- I just don't want to make a fool out of myself.

I've moved beyond mount stupid on the meta level, the level where I can now tell more accurately whether my understanding of a subject is lousy or OK. On the subject level I'm still stupid, and my reasoning, if I had to write it down, would still make my future self cringe.

The temptation to opine is still there and there is still a mountain of stupid to overcome, and being aware of this is in fact part of the solution. So for me Mount Stupid is still a useful memetic trick.

0dlthomas
Maybe you leveled the mountain? :-P Being "on" the mountain while not being willing to opine just seems like a strange use of words.
  1. Macroeconomics. My opinion and understanding used to be based on undergrad courses and a few popular blogs. I understood much more than the "average person" about the economy (so say we all) and therefore believed that my opinion was worth listening to. My understanding is much better now but I still lack a good understanding of the fundamentals (because textbooks disagree so violently on even the most basic things). If I talk about the economy I phrase almost everything in terms of "Economist Y thinks X leads to Z because of A, B, C."

... (read more)
6dlthomas
If you're successfully biting your tongue, doesn't that put you off "Mount Stupid", as the y-axis is "willingness to opine on topic"?

[a "friendly" AI] is actually unFriendly, as Eliezer uses the term

Absolutely. I used "friendly" AI (with scare quotes) to denote it's not really FAI, but I don't know if there's a better term for it. It's not the same as uFAI because Eliezer's personal utopia is not likely to be valueless by my standards, whereas a generic uFAI is terrible from any human point of view (paperclip universe, etc).

-1TimS
I guess it just doesn't bother me that uFAI includes both indifferent AI and malicious AI. I honestly think that indifferent AI is much more likely than malicious (Clippy is malicious, but awfully unlikely), but that's not good for humanity's future either.

Game theory. If different groups compete in building a "friendly" AI that respects only their personal coherent extrapolated volition (extrapolated sensible desires) then cooperation is no longer an option because the other teams have become "the enemy". I have a value system that is substantially different from Eliezer's. I don't want a friendly AI that is created in some researcher's personal image (except, of course, if it's created based on my ideals). This means that we have to sabotage each other's work to prevent the other resea... (read more)

1Xachariah
Game Theory only helps us if it's impossible to deceive others. If one is able to engage in deception, the dominant strategy becomes to pretend to support CEV FAI while actually working on your own personal God in a jar. AI development in particular seems an especially susceptible domain for deception. The creation of a working AI is a one-time event; it's not like most stable games in nature, which allow one to detect defections over hundreds of iterations. The creation of a working AI (FAI or uFAI) is so complicated that it's impossible for others to check if any given researcher is defecting or not. Our best hope then is for the AI project to be so big it cannot be controlled by a single entity and definitely not by a single person. If it only takes one guy in a basement getting lucky to make an AI go FOOM, we're doomed. If it takes ten thousand researchers collaborating in the biggest group coding project ever, we're probably safe. This is why doing work on CEV is so important. So we can have that piece of the puzzle already built when the rest of AI research catches up and is ready to go FOOM.
1Armok_GoB
This doesn't apply to all of humanity, just to AI researchers good enough to pose a threat.
1TimS
As I understand the terminology, AI that only respects some humans' preferences is uFAI by definition. Thus: [a "friendly" AI] is actually unFriendly, as Eliezer uses the term. Thus, the researcher you describe is already an "uFAI researcher". ---------------------------------------- What do you mean by "representative set of all human values"? Is there any reason to think that the resulting moral theory would be acceptable to implement on everyone?

If you're certain that belief A holds you cannot change your mind about that in the future. The belief cannot be "defeated", in your parlance. So given that you can be exposed to information that will lead you to change your mind we conclude that you weren't absolutely certain about belief A in the first place. So how certain were you? Well, this is something we can express as a probability. You're not 100% certain a tree in front of you is, in fact, really there exactly because you realize there is a small chance you're drugged or otherwise cogn... (read more)
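A minimal sketch of that point (an illustration added here, not part of the original comment; the numbers are made up): under Bayes' rule a belief assigned probability exactly 1 can never move, no matter how strong the counter-evidence, while anything short of 1 can.

```python
# Sketch: a belief held with probability exactly 1 cannot be "defeated",
# while any probability short of 1 moves under Bayes' rule.
# (Illustrative numbers only.)

def bayes_update(prior, p_e_given_a, p_e_given_not_a):
    """Posterior P(A | E) from prior P(A) and the two likelihoods."""
    p_e = p_e_given_a * prior + p_e_given_not_a * (1.0 - prior)
    return p_e_given_a * prior / p_e

# Evidence E is 100x more likely if A is false than if A is true.
for prior in (1.0, 0.999, 0.9):
    posterior = bayes_update(prior, p_e_given_a=0.001, p_e_given_not_a=0.1)
    print(f"P(A) = {prior:<5}  ->  P(A | E) = {posterior:.3f}")

# P(A) = 1.0    ->  P(A | E) = 1.000   (certainty is immune to evidence)
# P(A) = 0.999  ->  P(A | E) = 0.909
# P(A) = 0.9    ->  P(A | E) = 0.083
```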

Looks great!

I may be alone in this, and I haven't mentioned this before because it's a bit of a delicate subject. I assume we all agree that first impressions matter a great deal, and that appearances play a large role in that. I think that, how to say this, ehm, it would, perhaps, be in the best interest of all of us, if you could use photos that don't make the AI thinkers give off this serial killer vibe.

8fubarobfusco
(Responding to an old comment but ...) Wow. Judging by these pictures, and these pictures alone ... Nick Bostrom is serious business. Ben Goertzel is leather with a side of JPEG compression. Robin Hanson is plotting something fiendish. (And the backdrop for Robin's picture says "school pictures day".) Carl Shulman isn't working; he's at a party with girls. David Chalmers is on his yacht, the wind romantically blowing his hair. (And you could be there at his side ...) J. Storrs Hall is a math professor. Anna Salamon is a perpetual student. Eliezer is trying to extrapolate your utility function from a careful examination of your microexpressions.

Here's my $0.02 on that page: Bostrom looks like he suspects you of something. Goertzel looks smug but ok. Hanson looks evil. Shulman looks fine, but getting rid of his red eye would take about 2 minutes in iPhoto. Chalmers looks like a hobo, but not scary. Hall, Salamon, and Yudkowsky look fine.

2Shmi
Yeah, Anna's happy smile and Eliezer's piercing gaze sure make you think about the number of skeletons in their SIAI closet.

I second Manfred's suggestion about the use of beliefs expressed as probabilities.

In puzzle (1) you essentially have a proof for T and a proof for ~T. We don't wish the order in which we're exposed to the evidence to influence us, so the correct conclusion is that you should simply be confused*. Thinking in terms of "Belief A defeats belief B" is a bit silly, because you then get situations where you're certain T is true, and the next day you're certain ~T is true, and the day after that you're certain again that T is true after all. So should b... (read more)

0fsopho
Thank you, Zed. You are right: I didn't specify the meaning of 'misleading evidence'. It means evidence to believe something that is false (whether or not the cognitive agent receiving such evidence knows it is misleading). Now, maybe I'm missing something, but I don't see any silliness in thinking in terms of "belief A defeats belief B". On the basis of experiential evidence, I believe there is a tree in front of me. But then, I discover I'm drugged with LSD (a friend of mine put it in my coffee previously, unknown to me). This new piece of information defeats the justification I had for believing there is a tree in front of me - my evidence does not support this belief anymore. There is good material on defeasible reasoning and justification on John Pollock's website: http://oscarhome.soc-sci.arizona.edu/ftp/publications.html#reasoning

My view about global rationality is similar to the view of John Baez about individual risk aversion. An individual should typically be cautious because the maximum downside (destruction of your brain) is huge even for day-to-day actions like crossing the street. In the same way, we have only one habitable planet and one intelligent species. If we (accidentally) destroy either we're boned. Especially when we don't know exactly what we're doing (as is the case with AI), caution should be the default approach, even if we were completely oblivious to the ... (read more)

From the topic, in this case "selection effects in estimates of global catastrophic risk". If you casually mention that you don't particularly care about humans, or that personally killing a bunch of them may be an effective strategy, the discussion is effectively hijacked. So it doesn't matter that you don't wish to do anybody harm.

-2[anonymous]
I can't control what other people say, but I didn't at any point say that I don't care about humans, nor did I say that personally killing anyone is a good idea ever. My main point was that the probabilities of various xRisks don't matter. My side point was that if it turned out that UFAI was a significant risk then politically enforced Luddism would be the logical response. I like to make that point once in a while in the hopes that SingInst will realize the wisdom of it.

Let G be a grad student with an IQ of 130 and a background in logic/math/computing.

Probability: The quality of life of G will improve substantially as a consequence of reading the sequences.

Probability: Reading the sequences is a sound investment for G (compared to other activities)

Probability: If every person on the planet were trained in rationality (as far as IQ permits) humanity would allocate resources in a sane manner.

6wedrifid
0.3; 0.9; 0.00hahahaha001
3Craig_Heldreth
P(substantial improvement) ~ .2
P(sound investment) ~ .8
P(rationaltopia) ~ .01
5Eneasz
1 & 2: Yes, 80% confidence. However I don't think reading the sequences should be a chore. Start with the daily Seq Reruns and follow them for a week or two. If you don't enjoy it, don't read it. The reason I (and probably most people) read the Sequences was because they were fun to read. 3: "Sane" isn't precise enough to answer. However I would say that the allocation would be more sane than currently practiced with 98% confidence.
3Zetetic
For 1 and 2: I think you need to qualify 'quality of life' a bit. Are you asking if the sequences will make you happier? Resolve some cognitive dissonance? Make you 'win' more (make better decisions)? Even with that sort of clarification, however, it seems difficult to say. For me, I could say that I feel like I've cleared out some epistemological and ethical cobwebs (lingering bad or inconsistent ideas) by having read them. In any event, there are too many confounding variables, and this requires too much interpretation for me to feel comfortable assigning an estimate at this time. For 3: I think I would need to know what it means to "train someone in rationality". Do you mean have them complete a course, or are we instituting a grand design in which every human being on Earth is trained like Brennan?

Ah, you're right. Thanks for the correction.

I edited the post above. I intended P(Solipsism) < 0.001

And now I think a bit more about it I realize the arguments I gave are probably not "my true objections". They are mostly appeals to (my) intuition.

You shouldn't do it because it's an invitation for people to get sidetracked. We try to avoid politics for the same reason.

0[anonymous]
Sidetracked from what?

P(Simulation) < 0.01; little evidence in favor of it, and it requires that there is some other intelligence doing the simulation and that there can be the kind of fault-tolerant hardware that can (flawlessly) compute the universe. I don't think our posthuman descendants are capable of running a universe as a simulation. I think Bostrom's simulation argument is sound.

1 - P(Solipsism) > 0.999; My mind doesn't contain minds that are consistently smarter than I am and can out-think me on every level.

P(Dreaming) < 0.001; We don't dream of meticulously filling out tax forms and doing the dishes.

[ Probabilities are not discounted for expecting to come into contact with additional evidence or arguments ]

1D_Malik
Idea: play a game of chess against someone while in a lucid dream. If you won or lost consistently, it would show that you are better at chess than you are at chess. If anyone actually does this, I think you should alternate games sitting normally and with your opponent's pieces on your side of the board (i.e. the board turned 180 degrees), because I'd expect your internal agents to think better when they're seeing the board as they would in a real chess match.
0a_gramsci
On your argument, there is little need to flawlessly compute the universe. If a civilization sees that their laws are inconsistent with their observations, then they will change their laws to reflect their observations. Because there is no way to conclusively prove your laws are correct, it is impossible for a simulation to state that "Our laws are correct, therefore there is a flaw in the universe". Furthermore, on the probability that our descendants could obtain the computing power to run a simulation: an estimate for the power of a (non-quantum) planet-sized computer is 10^42 operations per second (R. J. Bradbury, "Matrioshka Brains"). It's hard to pin down how many atoms there are in the universe, but let's put it at around 10^80, and with 128 bits needed to hold each coordinate to a precision of one picometre, and another 128 for its motion, that puts it at around 10^83 operations to run one step of a simulation. So at first it looks impractical to compute a universe, because the computer would need to perform those operations in a second of real time. But there is nothing forcing it to run in real time: it can compute its values arbitrarily slowly. And so, no matter the size of the universe, a computer can simulate it. And because it can compute its values arbitrarily slowly, it can compute an arbitrarily large number of universes. So in conclusion, there is a very low probability that a civilization evolves to the point where it can simulate a universe, and the motives are also dubious. But because there is no upper bound to the number of universes such a civilization can simulate if it does, the expected number of simulated universes is governed by n·p, where p is the probability that a civilization runs simulations at all and n is the (unbounded) number it can run; that dwarfs the single unsimulated universe, and so we are most likely part of a simulated universe.
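A rough check of the arithmetic in the comment above (an illustration added here; the 10^42 ops/s and 10^80-atom figures are taken from the comment, everything else is a back-of-the-envelope assumption):

```python
# Rough arithmetic check of the figures quoted above (illustrative only).
ops_per_second    = 1e42        # planet-sized computer (Bradbury estimate, as cited)
atoms_in_universe = 1e80        # rough figure used in the comment
bits_per_atom     = 2 * 128     # 128 bits for position + 128 for its motion

ops_per_step     = atoms_in_universe * bits_per_atom       # ~2.6e82, i.e. ~1e83
seconds_per_step = ops_per_step / ops_per_second            # ~2.6e40 s of real time
years_per_step   = seconds_per_step / (3600 * 24 * 365)     # ~8e32 years

print(f"operations per simulated step: {ops_per_step:.1e}")
print(f"real time per step: {seconds_per_step:.1e} s (~{years_per_step:.0e} years)")
```

Which is why the comment falls back on letting the simulation run arbitrarily slowly: the simulated inhabitants can't notice the wall-clock rate.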
0[anonymous]
You don't? My dreams suck more than I thought. (I also give P(muflax is dreaming) < 0.001, but because I can't easily manipulate the mindstream right now. I can't rewind time, shift my location or abort completely, so I'm probably awake. I can always do these things in dreams.)
0JoshuaZ
Given your argument, I'm a bit confused by why you assign such a high upper bound to P(Solipsism).
1wedrifid
I've got to agree, things by Knuth are pretty damn irreducible.

I know several people who moved to Asia to work on their internet startup. I know somebody who went to Asia for a few months to rewrite the manuscript of a book. In both cases the change of scenery (for inspiration) and low cost of living made it very compelling. Not quite the same as Big Thinking, but it's close.

I'm flattered, but I'm only occasionally coherent.

0SilasBarta
No, I concur with GabrielDuquette. And by the way, I, um, have a friend in the position you've described in your reply there. What's the general template for, um, him, to get out of that situation?

When you say "I have really thought about this a considerable amount", I hear "I have diagnosed the problem quite a while ago and it's creating a pit in my stomach but I haven't taken any action yet". I can't give you any points for that.

When you're dealing with a difficult problem and if you're an introspective person it's easy to get stuck in a loop where you keep going through the same sorts of thoughts. You realize you're not making much progress but the problem remains so you feel obligated to think about it some more. You should t... (read more)

5[anonymous]
I disagree. I think much of the evidence about the rise of post-docs as principal investigators and the diminishing number of tenured positions is at odds with this claim. This claim is essentially why most students go to a Ph.D. program and they become depressed when they learn it doesn't work like this about 3 years into the process. In the last 5 years, I've taken low-paying math research jobs several summers so that I could live in Paris, Hong Kong, and College Station, TX, just to experience parts of the world I had not been to. I've moved (at great personal expense) 3 times in the past 4 years to get out of life situations that I found unsatisfactory. I think that my thinking-to-action ratio is not bad. You seem to dismiss the possibility that there can be real life Catch-22s. Given my preferences, I think I am in a Catch-22 and I cannot determine an actionable step. Some of my favorite life advice came from a high school math teacher who said "when you don't know what to do, do something." I think I am more insightful than just to wallow in akrasia. Yes, this is exactly what I have been doing for the past 2 years. But when I have discussed the option to switch to other research fields with faculty and older graduate students, they are telling me that the condition (a) is going to be true in every research field where there is actually enough grant money to finance my studentship, and that (a) is just a part of life in science and that I should be more focused on just doing programming tasks and coming up with small software developments that cater to commercial interests, leading to papers that fit into condition (a). I completely reject their point of view; I think they are wrong, and I think that if academia is set up this way, then my options are to leave academia for jobs that I think are very suboptimal or else agree to unhappily suffer through the academic hoop-jumping that I don't like. Given that these are my only options, I am trying to prepare m

As far as I can tell you identify two options: 1) continue doing the PhD you don't really enjoy 2) get a job you won't really enjoy.

Surely you have more options!

3) You can just do a PhD in theoretical computer vision at a different university.

4) You can work 2 days a week at a company and do your research at home for the remaining 4 days

5) Become unemployed and focus on your research full time

6) Save some money and then move to Asia, South America or any other place with very low cost of living so you can do a few years of research full time.

7) Join a star... (read more)

5Daniel_Burfoot
I am fascinated by this idea in principle, but do you know anyone who has actually done it? I fear there are many nonobvious details that would derail the plan. Maybe we should create an LW outpost in Saigon or Bangalore or some other inexpensive place, since there are many people here who are excited about the idea of living inexpensively to free up more time for Big Thinking.
3[anonymous]
"You must concentrate upon and consecrate yourself wholly to each day, as though a fire were raging in your hair." - Taisen Deshimaru
0roland
Great answer! I want to emphasize the following: You could start freelancing (there are sites like http://www.vworker.com) and work as much as you want/need to live comfortably. If you have a lot of knowledge you can make good money consulting. I think you can make enough for a living if you work at most 30 hours per week (this would be 3 days x 10 hours). That gives you another 4 days of free time per week to focus on whatever you want to do.
6[anonymous]
I have already transferred schools once, moving because there were no advisers in my area at school #1 (the one I had planned to work with became emeritus right as I joined). I like the school I am at now a lot more than I like computer vision. In fact, my main issue with my current situation is that it appears that no one can do fundamental research in computer vision: all of the major conferences require you to pander to shorter-term commercial applications if you want to publish and I'd rather move to a new field than jump through those hoops. I don't consider options 4, 5, 6, or 7 to be remotely realistic for me. I can't think of any Asian or South American country where I would be happy with the government or the long distance from family and friends if I were to live there semi-permanently. Those considerations are at least as important to me as job considerations. I don't consider unemployment or extreme part-time work an option because I have other life goals, like traveling, home ownership, etc., that I want to financially support in addition to whatever career path I choose. I appreciate your suggestions, but I have really thought about this a considerable amount. The post that I linked above has some more details about what thinking I have already done. I would really appreciate more targeted advice if you are interested. Given the climate for faculty jobs, what is the best way to try to achieve one? What are ways to do theoretical work / teaching at a university level for a living that are non-traditional?
5[anonymous]
.

Thanks for the clarifications.

Honestly, I don't have a clear picture of what exactly you're saying ("qualia supervene upon physical brain states"?) and we would probably have to taboo half the dictionary to make any progress. I get the sense you're on some level confused or uncomfortable with the idea of pure reductionism. The only thing I can say is that what you write about this topic has a lot of surface level similarities with the things people write when they're confused.

Just to clarify, does "irreducible" in (3) also mean that qualia are therefore extra-physical?

I assume that we are all in agreement that rocks do not have qualia and that dead things do not have qualia and that living things may or may not have qualia? Humans: yes. Single cell prokaryotes: nope.

So doesn't that leave us with two options:

1) Evolution went from single cell prokaryotes to Homo Sapiens and somewhere during this period the universe went "plop" and irreducible qualia started appearing in some moderately advanced species.

2) Qua... (read more)

2[anonymous]
Not unless we are arguing over definitions. Tabooing the phrase "extra-physical", what Eliezer and Chalmers were arguing (or trying to argue) about is whether a superintelligent observer, with full knowledge of the physical state of a brain, would have the same level of certainty about the qualia that the brain experiences as it does about the physical configuration of the brain. Actually, if they had phrased the debate in those terms it would have turned out better. I don't think that what they were arguing about was clearly defined by either party, which is why it has been necessary (in my humble opinion) for me to "repair" Eliezer's contribution. So anyway, no it does not mean the same thing. I argue that qualia are not "extra-physical", because the observer does in fact have the same level of knowledge about the qualia as it does about the physical Universe. However, this only proves that qualia supervene upon physical brain states and does not demonstrate that qualia can ever be explained in terms of quarks (rather than "psycho-physical bridging laws" or some such idea). It might be tempting to refer to (a degree of) belief in irreducibility of qualia as "non-physical", but for the purposes of this discussion it would confound things. I don't think that there's a good reason why you didn't describe qualia as "plopping" into existence in scenario 2 as well, or else in neither scenario. Since (with extreme likelihood) qualia supervene upon brain states whether they are irreducible or reducible, the existence of suitable brain states (whatever that condition may be) seems likely to be a continuous rather than discrete quality. "Dimmer" qualia giving way to "brighter" qualia, as it were, as more complex lifeforms evolve. Note the similarity to Eliezer's post on the many worlds hypothesis here.

My first assumption is that almost everything you post is seen as (at least somewhat) valuable (for almost every post #upvotes > #downvotes), so the net karma you get is mostly based on throughput. More readers, more votes. More votes, more karma.

Second, useful posts do not only take time to write, they take time to read as well. And my guess is that most of us don't like to vote on thoughtful articles before we have read them. So for funny posts we can quickly make the judgement on how to vote, but for longer posts it takes time.

Decision fatigue may al... (read more)

3SilasBarta
Also, sometimes an apparently well-researched article turns out to be based on only a superficial understanding of the topic (e.g. only having skimmed the abstracts) and mis-represents the cited material, and this is sometimes revealed on "cross-examination" in the comments.
7AdeleneDawner
This. Also after reading a more complex thing, it seems common that I'll forget to think about voting at all, since I'm distracted by thinking about the implications or who I might want to share it with or what other people have to say about it. Sometimes I remember to go back and vote, but I think most of the time I just don't, whereas with funny things the impulse to focus on the author and give them a reward in response seems to be automatic.

All the information you need is already out there, and I have this suspicion you have probably read a good deal of it. You probably know more about being happy than everybody else you know and yet you're not happy. You realize that if you're a smart rational agent you should just be able to figure out what you want to do and then just do it, right?

  1. figure out what makes you happy
  2. do more of those things
  3. ???
  4. happiness manifests itself

There is no step (3). So why does it feel more complex than it really is?

What is the kind of response you're really lookin... (read more)

1Hamp
I read this comment half a year ago and it was very helpful to me. I'm already in a much better spot now, thank you, 9 years later :) 
9rysade
Ouch. Halfway through that list I started wincing. A lot of what chimera has said resonates with me, and plenty of your observations fit me as well! Chimera, I can say that lots of the advice so far on this topic are things I tried and they worked like charms. I mean 'charm' quite literally. It was like magic.

Questions about deities must fade away just like any other issue fades away after it's been dissolved.

Compartmentalization is the last refuge of religious belief for an educated person. Once compartmentalization is outlawed there is no defense left: the religious beliefs just have to face a confrontation with the rational part of the brain, and then they will evaporate.

If somebody has internalized the sequences they must (at least):

  1. be adept at reductionism,
  2. be comfortable with Bayes and MML, Kolmogorov complexity,
  3. be acutely aware of whic
... (read more)

I think that what you're saying is technically correct. However, simplifying the thought experiment by stating that the inside of the box can't interact with the outside world just makes the thought experiment easier to reason about and it has no bearing on the conclusions we can draw either way.

0orthonormal
It's a distinction with a difference: the point is that a closed system means a factorizable wavefunction, not lack of interaction. (The latter is strictly impossible!)

Yikes! Thanks for the warning.

0Mitchell_Porter
I almost added this warning myself, though it would have been with a different emphasis: Such debates about MWI as I have had here, in the past, have often not been a clean discussion of the merits of MWI versus some other interpretation, because I won't shut up about these other issues, which are far more interesting and important. There are severe problems awaiting anyone who wants to explain consciousness in terms of interactions between distributed, coarse-grained physical states; there is an interesting possibility that it could instead be explained in terms of a single, microphysically exact entangled state; that is my preoccupation. The debate over MWI is just a sideshow. MWI looks bad from my ontological perspective, because I say we should take the apparent ontology of the self more seriously, as its actual ontology, whereas MWI extends the dismissal of conscious appearances further. But MWI also looks bad from a pure physics perspective, which just wants an exact mathematical description of the world that works, and cares nothing about its relationship to the "subjective world" of "lived experience". The most shocking feature of MWI, once I really understood it, is that it cannot by itself make any correct predictions at all, because the entire predictive content of QM comes from the Born rule (or projection postulate), and no derivation of the Born rule within MWI exists. You often hear people saying "all the interpretations of QM make the same predictions", but this is not true for MWI. You could say it makes no predictions (since it has no substitute for the Born rule), or that it makes wrong predictions (if you just count the worlds naively), but the only version of MWI which makes the same predictions as QM is the, so far imaginary, version which contains a derivation of the Born probabilities. It's almost comical, how new problems for MWI keep appearing, the more I discuss it with people. For example, the standard lay understanding of MWI is that

Thanks for the additional info and explanation. I have some books about QM on my desk that I really ought to study in depth...

I should mention though that what you state about needing only a single world is in direct contradiction to what EY asserts: "Whatever the correct theory is, it has to be a many-worlds theory as opposed to a single-world theory or else it has a special relativity violating, non-local, time-asymmetric, non-linear and non-measure-preserving collapse process which magically causes blobs of configuration space to instantly vanish [.... (read more)

5wedrifid
I agree that such an exchange would be useful. Unfortunately it would be hard to have with Mitchell_Porter because of the reputation he has gained for his evangelism of qualia and Quantum Monadology. People who have sufficient knowledge and interest in physics to be useful in such an exchange are less likely to become significantly involved if they think they are just arguing with a crackpot (again).
4Mitchell_Porter
I have posted here, on this topic (MWI), perhaps a hundred times. There are many comments from me in the Quantum Physics sequence. Two years ago I made a top-level post in favor of the rather anodyne position that MWI is not the favored interpretation, it's just one among many. Now I would take a much stronger line, that MWI has very little going for it. It cannot even reproduce the predictions of QM, which derive from the "Born rule" that MWI discards, in favor of having only the Schrodinger equation. Instead, the ideological stance is adopted that Only The Wavefunction Exists, and the recovery of the Born probabilities, which contain the whole of QM's empirical content, is left for future research. Or, even worse, it's just assumed. But this is a problem because, if you count the branches of the wavefunction, they should all count for the same, which would mean that the probabilities of all outcomes are equal, which would mean that MWI is falsified. Robin Hanson dreamed up an idea for how to get the right multiplicities of worlds, but it means that the individual worlds are somewhat messy superpositions. There are various other claims in the physics and philosophy literature of having recovered the Born rule, none of them satisfactory. One should be aware, especially in the era of arxiv.org - which is not peer-reviewed - that bad papers are available in abundance; though in this area, even good physicists produce bad papers advancing bogus arguments. In the quotation above, Eliezer is once again assuming that wavefunctions exist and that the only alternative to MWI is wavefunction collapse. "Blobs of configuration space" don't "vanish" if they were only ever domains in a probability distribution; see my remarks elsewhere on this page on the necessity of understanding that wavefunctions need not exist. I have made these points in the past ( 1 2 3 ). Let me unearth a few other discussions for you... Counterfactual measurement. A supposed derivation of the Born rul

The collapse of the wave function is, as far as I understand it, conjured up because the idea of a single world appeals to human intuition (even though there is no reason to believe the universe is supposed to make intuitive sense). My understanding is that regardless of the interpretation you put behind the quantum measurements you have to calculate as if there are multiple worlds (i.e. a subatomic particle can interfere with itself) and the collapse of the wave function is something you have to assume on top of that.

8 minute clip of EY talking with Scott Aaronson about Schrödinger's Cat

1Mitchell_Porter
You have to do this in any probabilistic calculation, especially when you have chains of dependent probabilities. The mere fact that, e.g., the behavior of a ball bouncing around on a roulette wheel can be understood in terms of branching possible worlds, is not usually interpreted as implying that those possible worlds actually exist, or that they interact with this one. The peculiarity of quantum probability is that you can get cancellation of probability amplitudes (the complex numbers at the step just before probabilities are computed). Thus in the double slit experiment, if you try to analyze what happens in a way analogous to Galton's Quincunx, you end up saying that particles don't arrive in the dark areas, because the possible paths 'cancel' at the amplitude level. This certainly makes no sense for probabilities, which are always nonnegative and so their sum is monotonically increasing - adding a possible path to an outcome can never decrease the overall probability of that outcome occurring. Except in quantum mechanics; but that just means that we are using the wrong concepts to understand it, not that there is such a thing as a negative probability. However, it is not as if we know that the only way to get quantum probabilities is by supposing the existence and interaction of parallel worlds in the multiverse, and in fact all the attempts to make that idea work in detail end up in a conceptual shambles (see: measure problem, relativity problem, preferred basis problem). We don't need a multiverse explanation; we just need a single-world explanation that gives rise to the same probability distributions that are presently obtained from wavefunctions. The Nobel laureate Gerard 't Hooft has some ideas in this direction which deserve to be much better known; they are at least as important as anything in the "famous" interpretations associated with Bohm, Everett, and Cramer.
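A toy numerical illustration of the amplitude-cancellation point (added here, not part of the comment above; the two-path numbers are made up): classical path probabilities can only add, but amplitudes can cancel before being squared into probabilities.

```python
# Two-path toy model: classical probabilities only add, amplitudes can cancel.
amp_1 = 0.5 + 0.0j    # amplitude for the path through slit 1
amp_2 = -0.5 + 0.0j   # path through slit 2, phase-shifted by pi (e^(i*pi) = -1)

classical = abs(amp_1) ** 2 + abs(amp_2) ** 2  # 0.25 + 0.25 = 0.5, never decreases
quantum   = abs(amp_1 + amp_2) ** 2            # |0.5 - 0.5|^2 = 0.0, a dark fringe

print("sum of the two path probabilities :", classical)
print("probability from summed amplitudes:", quantum)
```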

Yep, the box is supposed to be a completely sealed-off environment so that the contents of the box (cat, cyanide, Geiger counter, vial, hammer, radioactive atoms, air for the cat to breathe) cannot be affected by the outside world in any way. The box isn't a magical box, simply one that seals really well.

The stuff inside the box isn't special. So the particles can react with each other. The cat can breathe. The cat will die when exposed to the cyanide. The radioactive material can trigger the Geiger counter which triggers the hammer, which breaks the vial which releases the cyanide which causes the cat to die. Normal physics, but in a box.

3orthonormal
Clarification: the outside world does interact with the inside, but not in any way that depends on whether the cat is alive or dead. (If the contents of the box are positively charged electrically, they can continue to exert a force on objects outside. But if the cat is positively charged†, then the box needs to shield its influence on the electromagnetic field so that you can't tell from outside if it's moving or not.) † That is, if it's a cation.

Schrödinger's cat is a thought experiment. The cat is supposed to be real in the experiment. The experiment is supposed to be seen as silly.

People can reason through the math at the level of particles and logically there should be no reason why the same quantum logic wouldn't apply to larger systems. So if a bunch of particles can be entangled and if on observation (unrelated to consciousness) the wavefunction collapses (and thereby fully determines reality) then the same should be able to happen with a particle and a more complex system, such as a real li... (read more)

0Manfred
Incorrect. Most physicists today would tell you that Schrödinger's cat is |alive>+|dead>. If the world simply "splits," then you've got a hidden-variable theory, which has been ruled out by Bell's inequality measurements. Instead what happens is more complicated, and is mathematically equivalent to one-world quantum mechanics.
2Eugine_Nier
Don't take the "splitting" too literally either. Otherwise you've merely replaced the problem of when a wave function collapses, with the problem of when the worlds splits.
0Raemon
That makes reasonable sense, but I assume that the "box" can't just be a box, it has to be a completely sealed environment, where the cat particles can't even react with each other? Or at least with any adjacent gas particles or passing neutrinos or whatever?
  1. If you're starting out (read: don't yet know what you're doing) then optimize for not getting injured. If you haven't done any weight lifting then you'll get results even if you start out slowly.

  2. Optimize for likelihood of you not quitting. If you manage to stick to whatever plan you make you can always make adjustments where necessary. Risk of quitting is the #1 bottleneck.

  3. Personally, I think you shouldn't look for supplements until you feel you've reached a ceiling with regular workouts. Starting with a strict diet (measure everything) is a good idea if you're serious about this.

9Antisuji
Upvoted, but I disagree with the last sentence. Measuring everything doesn't fit too well with point #2, unless you have an obsessive personality, so adjust accordingly.

Site looks great!

The first sentence is "Here you'll find scholarly material and popular overviews of intelligence explosion and its consequences." which parses badly for me and it isn't clear whether it's supposed to be a title (what this site is about) or just a single-sentence paragraph. I think leaving it out altogether is best.

I agree with the others that the mouse-chimp-Einstein illustration is unsuitable because it's unlikely to communicate clearly to the target audience. I went through the slides of "The Challenge of Friendly AI"... (read more)

Welcome to Less Wrong!

This may be stating the obvious, but isn't this exactly the reason why there shouldn't be a subroutine that detects "The AI wants to cheat its masters" (or any similar security subroutines)?

The AI has to look out for humanity's interests (CEV) but the manner in which it does so we can safely leave up to the AI. Take for analogy Eliezer's chess computer example. We can't play chess as well as the chess computer (or we could beat Grand Masters of chess ourselves) but we can predict the outcome of the chess game when we play ag... (read more)

Sure, unanimous acceptance of the ideas would be worrying sign. Would it be a bad sign if we were 98% in agreement about everything discussed in the sequences? I think that depends on whether you believe that intelligent people when exposed to the same arguments and the same evidence should reach the same conclusion (Aumann's agreement theorem). I think that disagreement is in practice a combination of (a) bad communication (b) misunderstanding of the subject material by one of the parties (c) poor understanding of the philosophy of science (d) emotions/si... (read more)

Thanks for the explanation, that helped a lot. I expected you to answer 0.5 in the second scenario, and I thought your model was that total ignorance "contaminated" the model such that something + ignorance = ignorance. Now I see this is not what you meant. Instead it's that something + ignorance = something. And then likewise something + ignorance + ignorance = something according to your model.

The problem with your model is that it clashes with my intuition (I can't find fault with your arguments). I describe one such scenario here.

My intuition... (read more)

I think I agree completely with all of that. My earlier post was meant as an illustration that once you say C = A & B that you're no longer dealing with a state of complete ignorance. You're in complete ignorance of A and B, but not of C. In fact, C is completely defined as being the conjunction of A and B. I used the illustration of an envelope because as long as the envelope is closed you're completely ignorant about its contents (by stipulation) but once you open it that's no longer the case.

The answer for all three envelopes is, in the case of co

... (read more)
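A small sketch of the disagreement (an illustration added here, not from either comment): once C is specified as A & B, its probability is no longer the ignorance prior of 0.5 but depends on the marginals and on how A and B are related.

```python
# With P(A) = P(B) = 0.5 and the dependence between A and B unspecified,
# P(A & B) is confined to a range; it equals 0.25 only under independence.
p_a, p_b = 0.5, 0.5

cases = {
    "A, B independent":        p_a * p_b,                  # 0.25
    "A identical to B":        min(p_a, p_b),              # 0.50 (upper bound)
    "A, B mutually exclusive": max(0.0, p_a + p_b - 1.0),  # 0.00 (lower bound)
}

for name, p_conj in cases.items():
    print(f"{name:25s} P(A & B) = {p_conj:.2f}")
```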
1Jack
If you don't know what A is and you don't know what B is and C is the conjunction of A and B, then you don't know what C is. This is precisely because one cannot assume the independence of A and B. If you stipulate independence then you are no longer operating under conditions of complete ignorance. Strict, non-statistical independence can be represented as A!=B. A!=B tells you something about the hypothesis: it's a fact about the hypothesis that we didn't have in complete ignorance. This lets us give odds other than 1:1. See my comment here. With regard to the scenario in the edit, the probability of A & B is 1/6 because we don't know anything about independence. Now, you might say: "Jack, what are the chances A is dependent on B?! Surely most cases will involve A being something that has nothing to do with dice, much less something closely related to the throw of that particular die." But this kind of reasoning involves presuming things about the domain A purports to describe. The universe is really big and complex so we know there are lots of physical events A could conceivably describe. But what if the universe consisted only of one regular die that rolls once! If that is the only variable then A will equal B. That we don't live in such a universe or that this universe seems odd or unlikely are reasonable assumptions only because they're based on our observations. But in the case of complete ignorance, by stipulation, we have no such observations. By definition, if you don't know anything about A then you can't know more about A&B than you know about B. Complete ignorance just means 0.5; it's just necessarily the case that when one specifies the hypothesis one provides analytic insight into the hypothesis which can easily change the probability. That is, any hypothesis that can be distinguished from an alternative hypothesis will give us grounds for ascribing a new probability to that hypothesis (based on the information used to distinguish it from alternative hypo

It's purely a formality

I disagree with this bit. It's only purely a formality when you consider a single hypothesis, but when you consider a hypothesis that is comprised of several parts, each of which uses the prior of total ignorance, then the 0.5 prior probability shows up in the real math (that in turn affects the decisions you make).

I describe an example of this here: http://lesswrong.com/r/discussion/lw/73g/take_heed_for_it_is_a_trap/4nl8?context=1#4nl8

If you think that the concept of the universal prior of total ignorance is purely a formality, ... (read more)

In your example before we have any information we'd assume P(A) = 0.5 and after we have information about the alphabet and how X is constructed from the alphabet we can just calculate the exact value for P(A|B). So the "update" here just consists of replacing the initial estimate with the correct answer. I think this is also what you're saying so I agree that in situations like these using P(A) = 0.5 as starting point does not affect the final answer (but I'd still start out with a prior of 0.5).

I'll propose a different example. It's a bit contri... (read more)

I agree with everything you said (including the grandparent). Some of the examples you named are primarily difficult because of the ugh-field and not because of inferential distance, though.

One of the problems is that it's strictly more difficult to explain something than to understand it. To understand something you can just go through the literature at your own pace, look up everything you're not certain about, and so continue studying until all your questions are answered. When you want to explain something you have to understand it but you also have to... (read more)

Finally, on an empirical level, it seems like there are more false n-bit statements than true n-bit statements.

I'm pretty certain this intuition is false. It feels true because it's much harder to come up with a true statement from N bits if you restrict yourself to positive claims about reality. If you get random statements like "the frooble fuzzes violently" they're bound to be false, right? But for every nonsensical or false statement you also get the negation of a nonsensical or false statement. "not (the frooble fuzzes violently)"... (read more)
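A toy enumeration of that point (an illustration added here; the arithmetic claims are arbitrary stand-ins for "statements"): positive claims skew heavily false, which feeds the "most statements are false" intuition, but closing the set under negation restores an exact 50/50 split.

```python
# Simple positive claims are mostly false, but a set closed under negation
# is exactly half true: each false claim is paired with its true negation.
claims = [(f"{a} + {b} = {c}", a + b == c)
          for a in range(4) for b in range(4) for c in range(8)]
negated = [(f"not ({text})", not truth) for text, truth in claims]

def fraction_true(pairs):
    return sum(truth for _, truth in pairs) / len(pairs)

print("positive claims only :", fraction_true(claims))            # 0.125
print("closed under negation:", fraction_true(claims + negated))  # 0.5
```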

[ replied to the wrong person ]

[This comment is no longer endorsed by its author]

Legend:

S -> statements
P -> propositions
N -> non-propositional statements
T -> true propositions
F -> false propositions

I don't agree with condition S = ~T + T.

Because ~T + T is what you would call the set of (true and false) propositions, and I have readily accepted the existence of statements which are neither true nor false. That's N. So you get S = ~T + T + N = T + F + N = P + N

We can just taboo proposition and statement as proposed by komponisto. If you agree with the way he phrased it in terms of hypothesis then we're also in agreem... (read more)

As I see it, statements start with some probability of being true propositions, some probability of being false propositions, and some probability of being neither.

Okay. So "a statement, any statement, is as likely to be true as false (under total ignorance)" would be more accurate. The odds ratio remains the same.

The intuition that statements fail to be true most of the time is wrong, however. Because, trivially, for every statement that is true its negation is false and for every statement that is false its negation is true. (Statements that... (read more)

0lessdazed
S = P + N
P = T + F
T = F
S = ~T + T
N > 0

Therefore:

~T + T = P + N
~T + T = T + F + N
~T = F + N
~T = T + N
~T > T

I assume that people in their pre-bayesian days aren't even aware of the existence of the sequences so I don't think they can use that to calculate their estimate. What I meant to get at is that it's easy to be really certain a belief is false if it it's intuitively wrong (but not wrong in reality) and the inferential distance is large. I think it's a general bias that people are disproportionately certain about beliefs at large inferential distances, but I don't think that bias has a name.

(Not to mention that people are really bad at estimating inferential distance in the first place!)

Aspergers and anti-social tendencies are, as far as I can tell, highly correlated with low social status. I agree with you that the test also selects for people who are good at the sciences and engineering. Unfortunately scientists and engineers also have low social status in western society.

First Xachariah suggested I may have misunderstood signaling theory. Then Incorrect said that what I said would be correct assuming LessWrong readers have low status. Then I replied with evidence that I think supports that position. You probably interpreted what I said in a different context.

I think you were too convinced I was wrong in your previous message for this to be true. I think you didn't even consider the possibility that complexity of a statement constitutes evidence and that you had never heard the phrasing before. (Admittedly, I should have used the words "total ignorance", but still)

Your previous post strikes me as a knee-jerk reaction. "Well, that's obviously wrong". Not as an attempt to seriously consider under which circumstances the statement could be true. You also incorrectly claimed I was an ignoramus r... (read more)

2ArisKatsaris
Now that I've cooled off a bit, let me state in detail my complaint against this comment of yours. You seem to be asking for the highest amount of charity towards your statements. To the point that I ought strive for many long minutes to figure out a sense in which your words might be correct, even if I'd have to fix your claim (e.g. turn 'statement' into 'proposition' -- and add after 'any proposition starts out' the parenthetical 'before it is actually stated in words') before it actually becomes correct. But in return you provide the least amount of charity towards my own statements: I kept using the word "seems" in my original response to you (thus showing it may just be a misunderstanding) and I did NOT use the word 'ignoramus' which you accuse me of claiming you to be -- I used the term 'Level-0 rationalist'. You may think it's okay to paraphrase Lesswrong beliefs to show how they might appear to other people, but please don't paraphrase me and then ask for an apology for the words you put in my mouth. That's a major no-no. Don't put words in my mouth, period. No, I did not apologize for calling you a Level-0 rationalist; I still do not apologize for putting you in that category, since that's where your badly chosen words properly assigned you (the vast majority of people who'd say something like "all statements begin with a 50% probability" would truly be Level-0), NOR do I apologize for stating I had placed you in that category: would you prefer if everyone here had just downvoted your article instead of giving you a chance to clarify that (seemingly) terribly wrong position first? Your whole post was about how badly communicated beliefs confer us low status in the minds of others. It was only proper that I should tell you what a status you had achieved in my mind. I don't consider you a Level-0 rationalist anymore. But I consider you an extremely low-level communicator.
7prase
You are probably right, but I would suggest you phrase your reaction less combatively. The last sentence especially is superfluous; it doesn't contain any information and only heats up the debate.
4ArisKatsaris
"Any proposition starts out with a 50% probability of being true" is still utterly wrong. Because "any" indicates multiplicity. At the point where you claim these proposition "begin", they aren't even differentiated into different propositions; they're nothing but the abstraction of a letter P as in "the unknown proposition P". I've conceded that you were instead talking about an abstraction of statements, not any actual statements. At this point, if you want to duel it out to the end, I will say that you failed at using language in order to communicate meaning, and you were abstracting words to the point of meaninglessness. edit to add: And as a sidenote, even unknown statements can't be divided into 50% chance of truth and 50% falsehood, as there's always the chances of self-referential contradiction (e.g. the statement "This statement is wrong", which can never be assigned a True/False value), self-referential validity (e.g. The statement "This statement is true", which can be assigned either a true or false value), confusion of terms (e.g. The statement "A tree falling in the woods makes a sound." which depends on how one defines a "sound".), utter meaninglessness ("Colorless green ideas sleep furiously") etc, etc.

I chose the wording carefully, because "I want people to cut off my head" is funny, and the more general or more correct phrasing is not. But now that it has been thoroughly dissected...

Anyway, since you asked twice I'm going to change the way the first statement is phrased. I don't feel that strongly about it and if you find it grating I'm also happy to change it to any other phrasing of your choosing.

I'm sorry if I contributed to an environment in which ideas are too criticized

I interpret your first post as motivated by a need to voice your dis... (read more)

1lessdazed
Causation in general and motivation in particular don't work like that. All of my past experiences, excepting none--->me--->my actions Maybe we can think of something. I think it is important to keep track of meta levels when talking about beliefs and their relationship to reality. I think you should stick to doing either of the two sorts of lists I suggested. You say you thought only a single disclaimer was needed, but at least two are: This is a good example of a false belief resembling a LW one. Looking at it tells me a bit about how others might see a LW belief as radical and false, though not everything as I can see how it isn't a LW belief. This is a good example of a true belief phrased to sound unpersuasive and stupid. Looking at it tells me a bit about how others might see a LW belief as radical and false, though not everything as I can see how it is true.

In that case it's clear where we disagree because I think we are completely justified in assuming independence of any two unknown propositions. Intuitively speaking, dependence is hard. In the space of all propositions the number of dependent pairs of propositions is insignificant compared to the number of independent pairs. But if it so happens that the two propositions are not independent then I think we're saved by symmetry.

There are a number of different combinations of A and ~A and B and ~B but I think that their conditional "biases" all can... (read more)

0Jack
This can't be right. An unspecified hypothesis can contain as many sentence letters and operators as you like, we still don't have any information about its content and so can't have any P other than 0.5. Take any well-formed formula in propositional logic. You can make that formula say anything you want by the way you assign semantic content to the sentence letters (for propositional logic, not the predicate calculus, where we can specify independence). We have conventions where we don't do silly things like say "A AND ~B" and then have B come out semantically equivalent to ~A. It is also true that two randomly chosen hypotheses from a large set of mostly independent hypotheses are likely to be independent. But this is a judgment that requires knowing something about the hypothesis: which we don't, by stipulation. Note, it isn't just causal dependence we're worried about here: for all we know A and B are semantically identical. By stipulation we know nothing about the system we're modeling: the 'space of all propositions' could be very small. The answer for all three envelopes is, in the case of complete ignorance, 0.5.
3benelliott
Okay, in that case I guess I would agree with you, but it seems a rather vacuous scenario. In real life you are almost never faced with the dilemma of having to evaluate the probability of a claim without even knowing what that claim is, it appears in this case that when you assign a probability of 0.5 to an envelope you are merely assigning 0.5 probability to the claim that "whoever filled this envelope decided to put a true statement in". When, as in almost all epistemological dilemmas, you can actually look at the claim you are evaluating, then even if you know nothing about the subject area you should still be able to tell a conjunction from a disjunction. I would never, ever apply the 0.5 rule to an actual political discussion, for example, where almost all propositions are large logical compounds in disguise.