All of aaronsw's Comments + Replies

aaronsw20

I guess you need to do some more thinking to straighten out your views on qualia.

0[anonymous]
downvoted posthumously.
5Exiles
Goodnight, Aaron Swartz.
0hairyfigment
Let's back up for a second: * You've heard of functionalism, right? You've browsed the SEP entry? * Have you also read the mini-sequence I linked? In the grandparent I said "physical reaction" instead of "functional", which seems like a mistake on my part, but I assumed you had some vague idea of where I'm coming from.
-1MugaSofer
Or you do. You claim the truth of your claims is self-evident, yet it is not evident to, say, hairyfigment, or Eliezer, or me for that matter. If I may ask, have you always held this belief, or do you recall being persuaded of it at some point? If so, what convinced you?
aaronsw90

Imagine a flashlight with a red piece of cellophane over it pointed at a wall. Scientists some day discover that the red dot on the wall is caused by the flashlight -- it appears each and every time the flashlight fires and only when the flashlight is firing. However, the red dot on the wall is certainly not the same as the flashlight: one is a flashlight and one is a red dot.

The red dot, on the other hand, could be reduced to some sort of interaction between certain frequencies of light-waves and wall-atoms and so on. But it will certainly not get reduced... (read more)

-2MugaSofer
How do you know this?
0Shmi
Ok, that's where we disagree. To me the subjective experience is the process in my brain and nothing else.
-1Peterdjones
There's no argument there. Your point about qualia is illustrated by your point about flashlights, but not entailed by it.
aaronsw10

Because the neuron firing pattern is presumably the cause of the quale, it's certainly not the quale itself.

1Shmi
I don't understand what else is there.
0Peterdjones
There's no certainty either way.
aaronsw00

They're not assumptions, they're the answers to questions that have the highest probability going for them given the evidence.

0Shmi
and why not?
aaronsw-10

Who said anything about our intuitions (except you, of course)?

1hairyfigment
You keep making statements like the one above, and you seem to consider this self-evident. Well, it seemed self-evident to me that Martha's physical reaction would 'be' a quale. So where do we go from there? (Suppose your neurons reacted all the time the way they do now when you see orange light, except that they couldn't connect it to anything else - no similarities, no differences, no links of any kind. Would you see anything?)
aaronsw00

I guess it really depends on what you mean by free will. If by free will, pjeby meant some kind of qualitative experience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia argument goes through. If he means by it something more complicated, then I don't see how point one holds (we experience it), and the argument obviously doesn't go through.

aaronsw10

Because it's the only thing in the universe we've found with a first-person ontology. How else do you explain it?

-1MugaSofer
Well, I probably can't explain it as eloquently as others here - you should try the search bar, there are probably posts on the topic much better than this one - but my position would be as follows: * Qualia are experienced directly by your mind. * Everything about your mind seems to reduce to your brain. * Therefore, qualia are probably part of your brain. Furthermore, I would point out two things: one, that qualia seem to be essential parts of having a mind; I certainly can't imagine a mind without qualia; and two, that we can view (very roughly) images of what people see in the thalamus, which would suggest that what we call "qualia" might simply be part of, y'know, data processing.
aaronsw20

Well, let's be clear: the argument I laid out is trying to refute the claim that "I can create a human-level consciousness with a Turing machine". It doesn't mean you couldn't create an AI using something other than a pure Turing machine and it doesn't mean Turing machines can't do other smart computations. But it does mean that uploading a brain into a Von Neumann machine isn't going to keep you alive.

So if you disagree that qualia is a basic fact of physics, what do you think it reduces to? Is there anything else that has a first-person ontolog... (read more)

2nshepperd
Something brains do, obviously. One way or another. I should perhaps be asking what evidence Searle has for thinking he knows things like what qualia is, or what a computation is. My statements were both negative: it is not clear that qualia is a basic fact of physics; it is not obvious that you can't describe computation in physical terms. Searle just makes these assumptions. If you must have an answer, how about this: a physical system P is a computation of a value V if adding as premises the initial and final states of P and a transition function describing the physics of P shortens a formal proof that V = whatever.
aaronsw40

I guess my phrasing was unclear. What Searle is trying to do is generate reductions for things like "money" and "human rights"; I think EY is trying to do something similar and it takes him more than just one article on the Mind Projection Fallacy. (Even once you establish that it's properties of minds, not particles, there's still a lot of work left to do.)

aaronsw-10

Beginning an argument for the existence of qualia with a bare assertion that they exist

Huh? This isn't an argument for the existence of qualia -- it's an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie?

I do think essentially the same argument goes through for free will, so I don't find your reductio at all convincing. There's no reason, however, to believe that "love" or "charity" is a basic fact of physics, since it's... (read more)

1pjeby
I think that anyone talking seriously about "qualia" is confused, in the same way that anyone talking seriously about "free will" is. That is, they're words people use to describe experiences as if they were objects or capabilities. Free will isn't something you have, it's something you feel. Same for "qualia". Dissolving free will is considered an entry-level philosophical exercise for Lesswrong. If you haven't covered that much of the sequences homework, it's unlikely that you'll find this discussion especially enlightening. (More to the point, you're doing the rough equivalent of bugging people on a newsgroup about a question that is answered in the FAQ or an RTFM.) This is probably a good answer to that question. Because (as with free will) the only evidence anyone has (or can have) for the concept of qualia is their own intuitive feeling that they have some.
2Shmi
Suppose that neuroscientists some day show that the quale of seeing red matches a certain brain structure or a neuron firing pattern or a neuro-chemical process in all humans. Would you then say that the quale of red has been reduced?
0hairyfigment
Well, would that mean writing a series like this? My intuition certainly says that Martha has a feeling of ineffable learning. Do you at least agree that this proves the unreliability of our intuitions here?
-2MugaSofer
Could you expand on this point, please? It is generally agreed* that "free will vs determinism" is a dilemma that we dissolved long ago. I can't see what else you could mean by this, so ... [*EDIT: here, that is]
aaronsw40

I was talking about Searle's non-AI work, but since you brought it up, Searle's view is:

  1. qualia exists (because: we experience it)
  2. the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
  3. if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Which part does LW disagree with and why?

-2Peterdjones
It isn't even clear to Searle that qualia are physically basic. He thinks consciousness is a high-level outcome of the brain's concrete causal powers. His objection to computational approaches is rooted in the abstract nature of computation, not in the physical basicness of qualia. (In fact, he doesn't use the word "qualia", although he often seems to be talking about the same thing).
2MugaSofer
There's your problem. Why the hell should we assume that "qualia is clearly a basic fact of physics "?
4Ben Pace
To offer my own reasons for disagreement, I think the first point is unfounded (or misguided). We do things (like moving, and thinking). We notice and can report that we've done things, and occasionally we notice and can report that we've noticed that we've done something. That we can report how things appear to a part of us that can reflect upon stimuli is not important enough to be called 'qualia'. That we notice that we find experience 'ineffable' is not a surprise either - you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving). So, all we really have is the ability to notice and report that which has been advantageous for us to report in the evolutionary history of the human (these stimuli that we can notice are called 'experiences'). There is nothing mysterious here, and the word 'qualia' always seems to be used mysteriously - so I don't think the first point carries the weight it might appear to. Qualia is not clearly a basic fact of physics. I made the point that we would not expect a species designed by natural selection to be able to report or comprehend its most detailed, inner workings, solely on the evidence of what it can report and notice. But this is all skirting around the core idea of LessWrong: The map is not the territory. Just because something seems fundamental does not mean it is. Just because it seems like a Turing machine couldn't be doing consciousness, doesn't mean that is how it is. We need to understand how it came to be that we feel what we feel, before we go making big claims about the fundamental nature of reality. This is what is worked on in LessWrong, not in Searle's philosophy.
2TheOtherDave
Another not-speaking-for-LW answer: Re #1: I certainly agree that we experience things, and that therefore the causes of our experience exist. I don't really care what name we attach to those causes... what matters is the thing and how it relates to other things, not the label. That said, in general I think the label "qualia" causes more trouble due to conceptual baggage than it resolves, much like the label "soul". Re #2: This argument is oversimplistic, but I find the conclusion likely. More precisely: there are things outside my brain (like, say, my adrenal glands or my testicles) that alter certain aspects of my experience when removed, so it's possible that the causes of those aspects reside outside my brain. That said, I don't find it likely; I'm inclined to agree that the causes of my experience reside in my brain. I still don't care much what label we attach to those causes, and I still think the label "qualia" causes more confusion due to conceptual baggage than it resolves. Re #3: I see no reason at all to believe this. The causes of experience are no more "clearly a basic fact of physics" than the causes of gravity; all that makes them seem "clearly basic" to some people is the fact that we don't understand them in adequate detail yet.
3nshepperd
I can't really speak for LW as a whole, but I'd guess that among the people here who don't believe¹ "qualia doesn't exist", 1 and 2 are fine, but we have issues with 3, as expanded below. Relatedly, there seems to be some confusion between the "boring AI" proposition, that you can make computers do reasoning, and Searle's "strong AI" thing he's trying to refute, which says that AIs running on computers would have both consciousness and some magical "intentionality". "Strong AI" shouldn't actually concern us, except in talking about EMs or trying to make our FAI non-conscious. Pretty much disagree. Really disagree. And this seems really unlikely. ¹ I qualify my statement like this because there is a long-standing confusion over the use of the word "qualia" as described in my parenthetical here.
0pjeby
The whole thing: it's the Chinese Room all over again, an intuition pump that begs the very question it's purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word "understanding" is fudged in the Chinese Room argument, but basically it's the same.) I suppose you could say that there's a grudging partial agreement with your point number two: that "the brain causes qualia". The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides "qualia", e.g.: 1. Free will exists (because: we experience it) 2. The brain causes free will (because if you cut off any part, etc.) 3. If you simulate a brain with a Turing machine, it won't have free will because clearly it's a basic fact of physics and there's no way to tell just using physics whether something is a machine simulating a brain or not. It doesn't matter what term you plug into this in place of "qualia" or "free will", it could be "love" or "charity" or "interest in death metal", and it's still not saying anything more profound than, "I don't think machines are as good as real people, so there!" Or more precisely: "When I think of people with X it makes me feel something special that I don't feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X 'just a simulation'." This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work. Specifically, the thing that drives these arguments is our inbuilt machinery that classifies things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling that we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is. Sear
aaronsw60

I guess I must have misunderstood something somewhere along the way, since I don't see where in this sequence you provide "constructive accounts of how to build meaningful thoughts out of 'merely' effective constituents" . Indeed, you explicitly say "For a statement to be ... true or alternatively false, it must talk about stuff you can find in relation to yourself by tracing out causal links." This strikes me as parallel to Searle's view that consciousness imposes meaning.

But, more generally, Searle says his life's work is to explain h... (read more)

6pjeby
Someone should tell him this has already been done: dissolving that kind of confusion is literally part of LessWrong 101, i.e. the Mind Projection Fallacy. Money and human rights and so forth are properties of minds modeling particles, not properties of the particles themselves. That this is still his (or any other philosopher's) life's work is kind of sad, actually.
1Eliezer Yudkowsky
Why? Did I mention consciousness somewhere? Is there some reason a non-conscious software program hooked up to a sensor, couldn't do the same thing? I don't think Searle and I agree on what constitutes a physical particle. For example, he thinks 'physical' particles are allowed to have special causal powers apart from their merely formal properties which cause their sentences to be meaningful. So far as I'm concerned, when you tell me about the structure of something's effects on the particle fields, there shouldn't be anything left after that - anything left is extraphysical.
aaronsw10

It's too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle's. Searle is clearer on some points and EY is clearer on others, but other than the AI stuff they take a very similar approach.

EDIT: To be clear, John Searle has written a lot, lot more than the one paper on the Chinese Room, most of it having nothing to do with AI.

0Eliezer Yudkowsky
So... admittedly my main acquaintance with Searle is the Chinese Room argument that brains have 'special causal powers', which made me not particularly interested in investigating him any further. But the Chinese Room argument makes Searle seem like an obvious non-reductionist with respect to not only consciousness but even meaning; he denies that an account of meaning can be given in terms of the formal/effective properties of a reasoner. I've been rendering constructive accounts of how to build meaningful thoughts out of "merely" effective constituents! What part of Searle is supposed to be parallel to that?
-1pjeby
Perhaps I'm confused, but isn't Searle the guy who came up with that stupid Chinese Room thing? I don't see at all how that's remotely parallel to LW philosophy, or why it would be a bad thing to be ideologically opposed to his approach to AI. (He seems to think it's impossible to have AI, after all, and argues from the bottom line for that position.)
aaronsw40

That's a good explanation of how to do Solomonoff Induction, but it doesn't really explain why. Why is a Kolmogorov complexity prior better than any other prior?

aaronsw00

I agree with EY that collapse interpretations of QM are ridiculous but are there any arguments against the Bohm interpretation better than the ones canvassed in the SEP article?

http://plato.stanford.edu/entries/qm-bohm/#o

2Manfred
Conflict with special relativity is the most common decisive reason for rejecting Bohmian mechanics - which is oddly not covered in the SEP article. Bohmian mechanics is nonlocal, which in the context of relativity means time travel paradoxes. When you try to make a relativistic version of it, instead of elegant quantum field theory, you get janky bad stuff.
-2Vaniver
Not that I know of, but, in my interpretation preference Bohm is only beat by "shut up and calculate," so I may not be the most informed source.
aaronsw20

Someone smart recently argued that there's no empirical evidence young earth creationists are wrong, because all the evidence we have of the Earth's age is consistent with the hypothesis that God created the earth 4000 years ago but designed it to look like it was much older. Is there a good one-page explanation of the core LessWrong idea that your beliefs need to be shifted by evidence even when the evidence isn't dispositive, as opposed to the standard scientific notion of devastating proof? Right now the idea seems smeared across the Sequences.

-2MugaSofer
Personally, I always argue that if God created the world recently, he specifically designed it to look old; he included light from distant stars, fossils implying evolution, and even created radioactive elements pre-aged. Thus, while technically the Earth may be young, evolution etc. predict what God did with remarkable accuracy, and thus we should use them to make predictions. Furthermore, if God is so determined to deceive us, shouldn't we do as he wants? :P
8MinibearRex
Prior probabilities seem to me to be the key idea. Essentially, young earth creationists want P(evidence|hypothesis) = ~1. The problem is that to do this, you have to make P(hypothesis) very small. Essentially, they're overfitting the data. P(no god) and P(deceitful god) may have identical likelihood functions, but the second one is a conjunction of a lot of statements (god exists, god created the world, god created the world 4000 years ago, god wants people to believe he created the world 4000 years ago, god wants people to believe he created the world 4000 years ago despite evidence to the contrary, etc). All of these statements are an additional decrease in probability for the prior probability in the Bayesian update.
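MinibearRex's point is easy to check with a few lines of arithmetic. The sketch below is purely illustrative: the probabilities assigned to each conjoined claim are invented for the example and are not anyone's actual estimates.

```python
# A toy illustration (invented numbers) of the point above: if two hypotheses
# fit the data equally well, the update leaves their prior ratio untouched,
# and a hypothesis built from many conjoined claims starts with a much
# smaller prior.

def posterior_odds(prior_a, prior_b, likelihood_a, likelihood_b):
    """Posterior odds of hypothesis A over hypothesis B via Bayes' rule."""
    return (prior_a * likelihood_a) / (prior_b * likelihood_b)

# Hypothesis B ("deceitful young-earth god") is a conjunction of several
# claims, each given an illustrative probability.
claims = [0.5, 0.3, 0.2, 0.1]   # exists, created the world, did so 4000 years ago, hides it
prior_b = 1.0
for p in claims:
    prior_b *= p                # 0.003: the conjunction drives the prior down

prior_a = 1 - prior_b           # everything else, lumped together for the toy example

# Both hypotheses predict the observed (old-looking) evidence equally well.
likelihood_a = likelihood_b = 1.0

print(posterior_odds(prior_a, prior_b, likelihood_a, likelihood_b))
# ≈ 332 : 1 in favour of A -- the data didn't move the odds at all;
# the conjunction penalty in the prior did all the work.
```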
1DanielLC
He's not entirely wrong. Essentially, the more evidence you find of the Earth being more than 4000 years old, the more evidence you have against a non-deceiving god having created it 4000 years ago. If there's a 0.1% chance that a god will erase all evidence of his existence, then we can only get about 10 bits of evidence against him. The problem is most likely that he's overestimating the probability of a god being deceitful (conjunction fallacy), and that he's forgetting that it's equally impossible to find evidence for such a god (conservation of expected evidence).
1mwengler
If you are trying to explain the fossil, geological, and astronomical record, you might consider two hypotheses: 1) the details reflect the process that put these in place and current physical constants put the time for that to happen based on that record in the billions of years 2) somebody or something "God" for which we have little other evidence other than the world and universe created it all about 4000 years ago and made it look like a billions year project. In the 2nd case, you take on the additional burden of explaining the existence and physics of God. Explaining why God would want to trick us is probably easier than explaining God's existence and physics in the first place. I am reminded of Wg's statement "Believing you are in a sim is not distinguishable from believing in an omnipotent god (of any type)." Certainly, a sim would have the property that it would be much younger than it appeared to be, that the "history" built in to it would not be consistent with what actually appeared to have happened. Indeed, a sim seems to mean a reality which appears to be one thing but is actually another quite different thing created by powerful consciousnesses that are hiding their existence from us.
5TrE
IIRC the main post about this concept is conservation of expected evidence.
2Vaniver
The articles that come to mind are Scientific Evidence, Legal Evidence, and Rational Evidence and Making Beliefs Pay Rent (in Anticipated Experiences).
0[anonymous]
Try this and let me know if it's what you're looking for.
aaronsw30

I don't totally understand it, but Zuse 1969 seems to talk about spacetime as a sort of discrete causal graph with c as the generalization of locality ("In any case, a relation between the speed of light and the speed of transmission between the individual cells of the cellular automaton must result from such a model."). Fredkin and Wolfram probably also have similar discussions.

3gwern
Jackman only just released his data now (after twittering with me, incidentally, I was able to explain why his R Brier score wasn't matching his hand-calculated Brier score) because he forgot to send it to me last night; and I'm running on fumes - we started this project from scratch yesterday at 5PM and I've been working on it ever since. EDIT: Looks like all the kerfluffle of new Brier/RMSE scores prodded Sam Wang into releasing his precise predictions too! Neat. EDITEDIT: I've gotten Jackman's data, incorporated it, discovered an error in my own data, differed with Jackman, learned he regarded 5 states as such a sure thing he didn't include probabilities while I had simply put in NAs, and now we've converged on his Brier score. Phew! His current Brier score is 0.009713686, a bit worse than Silver's 0.009113725, and both seem to be outperformed by Drew Linzer's 0.003843257. Wang seems to've released the data, but the CSV is unlabeled and I have no idea what half the columns mean... I'd also like to include a random-guesser equivalent for RMSE... Tomorrow.
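For readers who want to check figures like the ones above, a Brier score is just the mean squared difference between each forecast probability and the 0/1 outcome (lower is better). A minimal sketch, with invented probabilities rather than any forecaster's actual predictions:

```python
# Minimal Brier score calculation of the kind used above to compare forecasters.
# The probabilities and outcomes are made up for illustration; they are not
# Silver's, Jackman's, Wang's, or Linzer's actual numbers.

def brier(probabilities, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

forecast = [0.95, 0.80, 0.30, 0.99]  # hypothetical P(candidate wins state)
result   = [1,    1,    0,    1   ]  # what actually happened

print(brier(forecast, result))       # ≈ 0.033; a guesser assigning 0.5 everywhere scores 0.25
```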
0lukeprog
We only included people whose Brier scores we could calculate ourselves. We plan to add Jackman when we get his data.
aaronsw280

Wolfram 2002 argues that spacetime may actually be a discrete causal network and writes:

The idea that space might be defined by some sort of causal network of discrete elementary quantum events arose in various forms in work by Carl von Weizsäcker (ur-theory), John Wheeler (pregeometry), David Finkelstein (spacetime code), David Bohm (topochronology) and Roger Penrose (spin networks; see page 1055).

Later, in 10.9, he discusses using graphical causal models to fit observed data using Bayes' rule. I don't know if he ever connects the two points, though.

5private_messaging
Few words: CPT symmetry. Causality is just how we model things at the macroscopic scale, where the time symmetry is broken thermodynamically. At the bottom level, it's relations that have to be true, and to say that the value of one variable in a relation causes the value of another variable in the relation is either a minor language misuse or a gross misunderstanding arising from such misuse.
aaronsw110

Philosophy posts are useful if they're interesting whereas how-to's are only useful if they work. While I greatly enjoy these posts, their effectiveness is admittedly speculative.

Philosophy posts are enjoyable if they're interesting. They're useful if they're right.

aaronsw90
  • Doing hacker exercises every morning
  • Taking a cold shower every morning
  • Putting on pants
  • Lying flat on my back and closing my eyes until I consciously process all the things that are nagging at me and begin to feel more focused
  • Asking someone to coach me through getting started on something
  • Telling myself that doing something I don't want to do will make me stronger
  • Squeezing a hand grip exerciser for as long as I can (inspired by Muraven 2010; mixed results with this one)

You?

1lukeprog
My interventions for energy are less creative: drink water, do jumping jacks, take drugs, etc.
aaronsw90

It's been two weeks. Can you post it now?

4[anonymous]
Indeed I can, and thank you for reminding me: D_Malik d_livers. I didn't see your comment earlier because I switched accounts to stop using a pseudonym (as you can see), and I haven't been browsing the internet much lately because I'm doing my anki backlog, which I have because I was away from home for three weeks doing SPARC and other things, which, together with the fact that my anki "ideas" deck was corrupt (because I copied it over to my ipad before first closing anki) and the fact that I couldn't de-corrupt it on my ipad and didn't have my laptop with me, made me unable to post it at the time of the grandparent comment.
aaronsw90

Has anyone seriously suggested you invented MWI? That possibility never even occurred to me.

9Eliezer Yudkowsky
It's been suggested that I'm the one who invented the idea that it's obviously true rather than just one more random interpretation; or even that I'm fighting a private war for some science-fiction concept, rather than being one infantry soldier in a long and distinguished battle of physicists. Certainly your remark to the effect that "he should try presenting his argument to some skeptical physicists" sounds like this. Any physicist paying serious attention to this issue (most people aren't paying attention to most things most of the time) will have already heard many of the arguments, and not from me. It sounds like we have very different concepts of the state of play.
aaronsw210

The main insight of the book is very simple to state. However, the insight was so fundamental that it required me to update a great number of other beliefs I had, so I found being able to read a book's worth of examples of it being applied over and over again was helpful and enjoyable. YMMV.

aaronsw20

Unlike, say, wedrifid, whose highly-rated comment was just full of facts!

3Jayson_Virissimo
... If you find yourself responding with tu quoque, then it is probably about time you re-evaluated the hypothesis that you are in mind-kill territory.
aaronsw80

It seems a bit bizarre to say I've dismissed LessWrong given how much time I've spent here lately.

0John_Maxwell
Fair enough.
aaronsw230

FWIW, I don't think the Singularity Institute is woo and my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.

wedrifid120

my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.

I like the way you phrase it (the "lukeprog" charity). Probably true at that.

aaronsw350

You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.

No, I'd love another example to use so that people don't have this kind of emotional reaction. Please suggest one if you have one.

UPDATE: I thought of a better example on the train today and changed it.

0magfrump
Upvoted the main article due to this.
aaronsw40

Offhand, can you think of a specific test that you think ought to be applied to a specific idiosyncratic view?

Well, for example, if EY is so confident that he's proven "MWI is obviously true - a proposition far simpler than the argument for supporting SIAI", he should try presenting his argument to some skeptical physicists. Instead, it appears the physicists who have happened to run across his argument found it severely flawed.

How rational is it to think that you've found a proof most physicists are wrong and then never run it by any physicis... (read more)

9Eliezer Yudkowsky
BTW, it's important to note that by some polls an actual majority of theoretical physicists now believe in MWI, and this was true well before I wrote anything. My only contributions are in explaining the state of the issue to nonphysicists (I am a good explainer), formalizing the gross probability-theoretic errors of some critiques of MWI (I am a domain expert at that part), and stripping off a lot of soft understatement that many physicists have to do for fear of offending sillier colleagues (i.e., they know how incredibly stupid the Copenhagen interpretation appears nowadays, but will incur professional costs from saying it out loud with corresponding force, because there are many senior physicists who grew up believing it). The idea that Eliezer Yudkowsky made up the MWI as his personal crackpot interpretation isn't just a straw version of LW, it's disrespectful to Everett, DeWitt, and the other inventors of MWI. It does seem to be a common straw version of LW for all that, presumably because it's spontaneously reinvented any time somebody hears that MWI is popular on LW and they have no idea that MWI is also believed by a plurality and possibly a majority of theoretical physicists and that the Quantum Physics Sequence is just trying to explain why to nonphysicists / formalize the arguments in probability-theoretic terms to show their nonambiguity.

it appears the physicists who have happened to run across his argument found it severely flawed

The criticisms at those links have nothing to do with the argument for MWI. They are just about a numerical mistake in an article illustrating how QM works.

The actual argument for MWI that is presented is something like this: Physicists believe that the wavefunction is real and that it collapses on observation, because that is the first model that explained all the data, and science holds onto working models until they are falsified. But we can also explain all... (read more)

9Eliezer Yudkowsky
There were plenty of physicists reading those posts when they first came out on OB (the most famous name being Scott Aaronson). Some later readers have indeed asserted that there's a problem involving a physically wrong factor of i in the first couple of posts (i.e. that's allegedly not what a half-silvered mirror does to the phase in real life), which I haven't yet corrected because I would need to verify with a trusted physicist that this was correct, and then possibly craft new illustrations instead of using the ones I found online, and this would take up too much time relative to the point that talking about a phase change of -1 instead of i so as to be faithful to real-world mirrors is an essentially trivial quibble which has no effect on any larger points. If anyone else wants to rejigger the illustration or the explanation so that it flows correctly, and get Scott Aaronson or another known trusted physicist to verify it, I'll be happy to accept the correction. Aside from that, real physicists haven't objected to any of the math, which I'm actually pretty darned proud of considering that I am not a physicist.
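The "trivial quibble" claim is easy to check numerically. The sketch below is my own construction, not taken from the sequence: it pushes a photon through an idealized, lossless two-beam-splitter (Mach-Zehnder) setup using the sequence's reflection phase of i and then a real-mirror-style phase of -1. In both conventions the interference is all-or-nothing, with every photon landing on a single detector, so the phase correction changes the bookkeeping but not the point being illustrated.

```python
# My own sketch (not from the sequence): an idealized Mach-Zehnder interferometer
# computed with two beam-splitter conventions, to illustrate that a reflection
# phase of -1 instead of i doesn't change the qualitative interference result.
import numpy as np

def mach_zehnder(beamsplitter):
    """Photon enters port 0 and passes two identical beam splitters with
    equal arm lengths; returns the probability of exiting each output port."""
    state = np.array([1.0, 0.0], dtype=complex)  # amplitude in each port
    state = beamsplitter @ state                 # first half-silvered mirror
    state = beamsplitter @ state                 # second half-silvered mirror
    return np.abs(state) ** 2

# Convention used in the sequence posts: reflection multiplies the amplitude by i.
bs_i = np.array([[1, 1j],
                 [1j, 1]]) / np.sqrt(2)

# "Real mirror" style convention: reflection phase of -1 on one side.
bs_real = np.array([[1,  1],
                    [1, -1]]) / np.sqrt(2)

print(mach_zehnder(bs_i))     # [0. 1.] -- all photons reach one detector
print(mach_zehnder(bs_real))  # [1. 0.] -- again all photons reach a single detector
# Which detector they reach depends on the phase convention, but the
# all-or-nothing interference -- the point of the illustration -- is identical.
```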
0John_Maxwell
I agree that EY is probably overconfident in MWI, although I'm uninformed about QM so I can't say much with confidence. I don't think it's accurate to damn all of Less Wrong because of this. For example, this post questioning the sequence was voted up highly. I don't think EY claims to have any original insights pointing to MWI. I think he's just claiming that the state of the evidence in physics is such that MWI is obviously correct, and this is evidence as to the irrationality of physicists. I'm not too sure about this myself. Well there have been responses to that point (here's one). I wish you'd be a bit more self-skeptical and actually engage with that (ongoing) debate instead of summarizing your view on it and dismissing LW because it largely disagrees with your view.
aaronsw41

I think the biggest reason Less Wrong seems like a cult is because there's very little self-skepticism; people seem remarkably confident that their idiosyncratic views must be correct (if the rest of the world disagrees, that's just because they're all dumb). There's very little attempt to provide any "outside" evidence that this confidence is correctly-placed (e.g. by subjecting these idiosyncratic views to serious falsification tests).

Instead, when someone points this out, Eliezer fumes "do you know what pluralistic ignorance is, and Asch... (read more)

-1MugaSofer
Your examples seem ... how do I put this ... unreliable. The first two are less examples and more insults, since you do not provide any actual examples of these tendencies; the last one would be more serious, if he hadn't written extensively on why he believes this to be the safest way - the only way that isn't suicidal - or if you had provided some evidence that his FAI proposals are "extremely dangerous". And, of course, airily proclaiming that this is true of "pretty much every entry in the sequences" seems, in the context of these examples, like an overgeneralization at best and ... well, I'm not going to bother outlining the worst possible interpretation for obvious reasons.
3John_Maxwell
Offhand, can you think of a specific test that you think ought to be applied to a specific idiosyncratic view? ---------------------------------------- My read on your comment is: LWers don't act humble, therefore they are crackpots. I agree that LWers don't always act humble. I think it'd be a good idea for them to be more humble. I disagree that lack of humility implies crackpottery. In my mind, crackpottery is a function of your reasoning, not your mannerisms. Your comment is a bit short on specific failures of reasoning you see--instead, you're mostly speaking in broad generalizations. It's fine to have general impressions, but I'd love to see a specific failure of reasoning you see that isn't of the form "LWers act too confident". For example, a specific proposition that LWers are too confident in, along with a detailed argument for why. Or a substantive argument for why SI's approach to AI is "extremely dangerous". (I personally know pretty much everyone who works for SI, and I think there's a solid chance that they'll change their approach if your argument is good enough. So it might not be a complete waste of time.) Now it sounds like you're deliberately trying to be be inflammatory ಠ_ಠ
aaronsw10

Yvain's argument was that "x-rationality" (roughly the sort of thing that's taught in the Sequences) isn't practically helpful, not that nothing is. I certainly have read lots of things that have significantly helped me make better decisions and have a better map of the territory. None of them were x-rational. Claiming that x-rationality can't have big effects because the world is too noisy, just seems like another excuse for avoiding reality.

5CarlShulman
What effect size, assessed how, against what counterfactuals? If it's just "I read book X, and thought about it when I made decision Y, and I estimate that decision Y was right" we're in testimonial land, and there are piles of those for both epistemic and practical benefits (although far more on epistemic than practical). Unfortunately, those aren't very reliable. I was specifically talking about non-testimonials, e.g. aggregate effects vs control groups or reference populations to focus on easily transmissible data. Imagine that we try to take the best general epistemic heuristics we can find today, and send them back in book form to someone from 10 years ago. What effect size do you think they would have on income or academic productivity? What about 20 years? 50 years? Conditional on someone assembling, with some additions, a good set of heuristics what's your distribution of effect sizes?
aaronsw160

I really enjoyed The Seven Habits of Highly Effective People. (By contrast, I tried reading some @pjeby stuff yesterday and it had all the problems you describe cranked up to 11 and I found it incredibly difficult to keep reading.)

I don't think the selection bias thing would be a problem if the community was focused on high-priority instrumental rationality techniques, since at any level of effectiveness becoming more effective should be a reasonably high priority. (By contrast, if the community is focused on low-priority techniques it's not that big a de... (read more)

5pjeby
Technically, John was describing the problems of analytical readers, rather than the problems of self-help writers. ;-) I have noticed, though, that some of my early writing (e.g. 2010 and before) is very polarizing in style: people tend to either love it or hate it, and the "hate it" contingent seems larger on LW than anywhere else. However, most of the people who've previously said on LW that they hate my writing, seemed to enjoy this LW post, so you may find something of use there.
aaronsw190

Carol Dweck's Mindset. While unfortunately it has the cover of a self-help book, it's actually a summary of some fascinating psychology research which shows that a certain way of conceptualizing self-improvement tends to be unusually effective at it.

2Dorikka
Reviews seem to indicate that the book can and should be condensed into a couple quality insights. Is there any reason to buy the actual book?
2Pablo
I took a look at Mindset. The book seemed to me extremely repetitive and rambling. Its teachings could be condensed in an article ten or fifteen times shorter. Fortunately, this Stanford Magazine piece seems to accomplish something close to that. So, read the piece, and forget the book.
aaronsw190

My suspicion isn't because the recommended strategy has some benefits, it's because it has no costs. It would not be surprising if an asteroid-prevention plan used NASA and nukes. It would be surprising if it didn't require us to do anything particularly hard. What's suspicious about SIAI is how often their strategic goals happen to be exactly the things you might suspect the people involved would enjoy doing anyway (e.g. writing blog posts promoting their ideas) instead of difficult things at which they might conspicuously fail.

aaronsw30

Two people have been confused by the "arguing about ideas" phrase, so I changed it to "thinking about ideas".

2Manfred
It's more polite, and usually more accurate, to say "I sent a message I didn't want to, so I changed X to Y."
aaronsw50

Ray Dalio's "Principles". There's a bunch of stuff in there that I disagree with, but overall he seems pretty serious about tackling these issues -- and apparently has been very successful.

aaronsw160

Use direct replies to this comment for suggesting things about tackling practical biases.

6D_Malik
Buy some nicotine gum and chew that while doing useful stuff, like working out, doing SRS reviews, thinking really hard about important things, etc.. Of course you should read up on nicotine gum before you do this. Start here.
D_Malik110

Set a ten-minute timer and make a list of all the things you could do that would make you regret not doing them sooner. And then do those things.

I have a pretty long list like this that I try to look at every day, but I can't post it for the next two weeks for a complicated, boring reason.

aaronsw190

Carol Dweck's Mindset. While unfortunately it has the cover of a self-help book, it's actually a summary of some fascinating psychology research which shows that a certain way of conceptualizing self-improvement tends to be unusually effective at it.

8aaronsw
lukeprog's writings, especially Build Small Skills in the Right Order.
5aaronsw
Ray Dalio's "Principles". There's a bunch of stuff in there that I disagree with, but overall he seems pretty serious about tackling these issues -- and apparently has been very successful.
aaronsw80

Yes, I agree that if a politician or government official tells you the most effective thing you can do to prevent asteroids from destroying the planet is "keep NASA at current funding levels and increase funding for nuclear weapons research" then you should be very suspicious.

gwern350

I think you're missing the point; I actually do think NASA is one of the best organizations to handle anti-asteroid missions and nukes are a vital tool since the more gradual techniques may well take more time than we have.

Your application of cynicism proves everything, and so proves nothing. Every strategy can be - rightly - pointed out to benefit some group and disadvantage some other group.

The only time this wouldn't apply is if someone claiming a particular risk is higher than estimated and was doing absolutely nothing about it whatsoever and so could... (read more)

aaronsw50

Can you point to something I said that you think is wrong?

My understanding of the history (from reading an interview with Eliezer) is that Eliezer concluded the singularity was the most important thing to work on and then decided the best way to get other people to work on it was to improve their general rationality. But whether that's true or not, I don't see how that's inconsistent with the notion that Eliezer and a bunch of people similar to him are suffering from motivated reasoning.

I also don't see how I conflated LW and SI. I said many LW readers worry about UFAI and that SI has taken the position that the best way to address this worry is to do philosophy.

0Manfred
You're right that you can interpret FAI as motivated reasoning. I guess I should have considered alternate interpretations more. Well, kinda. Eliezer concluded the singularity was the most important thing to work on and then decided the best way to work on it was to code an AI as fast as possible, with no particular regard for safety. "[...] arguing about ideas on the internet" is what I was thinking of. It's a LW-describing sentence in a non-LW-related area. Oh, and "Why rationalists worry about FAI" rather than "Why SI worries about FAI."
aaronsw30

Right. I tweaked the sentence to make this more clear.

aaronsw20

Yes, "arguing about ideas on the Internet" is a shorthand for avoiding confrontations with reality (including avoiding difficult engineering problems, avoiding experimental tests of your ideas, etc.).

-3[anonymous]
May I refer you to AIXI, which was a potential design for GAI, that was, by these AI researchers, fleshed out mathematically to the point where they could prove it would kill off everyone? If that isn't engineering, then what is programming (writing math that computers understand)?
aaronsw20

There's nothing wrong with arguing on the Internet. I'm merely asking whether the belief that "arguing on the Internet is the most important thing anyone can do to help people" is the result of motivated reasoning.

9David_Gerard
The argument I see is that donating money to SIAI is the most important thing anyone can do to help people.
aaronsw250

On the question of the impact of rationality, my guess is that:

  1. Luke, Holden, and most psychologists agree that rationality means something roughly like the ability to make optimal decisions given evidence and goals.

  2. The main strand of rationality research followed by both psychologists and LWers has been focused on fairly obvious cognitive biases. (For short, let's call these "cognitive biases".)

  3. Cognitive biases cause people to make choices that are most obviously irrational, but not most importantly irrational. For example, it's very clear

... (read more)
5lukeprog
For the record, I basically agree with all this.
aaronsw-10

Then it does seem like your AI arguments are playing reference class tennis with a reference class of "conscious beings". For me, the force of the Tool AI argument is that there's no reason to assume that AGI is going to behave like a sci-fi character. For example, if something like On Intelligence turns out to be true, I think the algorithms it describes will be quite generally intelligent but hardly capable of rampaging through the countryside. It would be much more like Holden's Tool AI: you'd feed it data, it'd make predictions, you could cho... (read more)

aaronsw30

"it will have conscious observers in it if it performs computations"

So your argument against Bohm depends on information functionalism?
