
Taking Ideas Seriously

51 Post author: Will_Newsome 13 August 2010 04:50PM

I, the author, no longer endorse this post.


 

Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the simple metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.

 

Eliezer Yudkowsky and Michael Vassar are two rationalists who have something of an aura of formidability about them. This is especially true of Michael Vassar in live conversation, where he's allowed to jump around from concept to concept without being penalized for not having a strong thesis. Eliezer did something similar in his writing by creating a foundation of reason upon which he could build new concepts without having to start explaining everything anew every time. Michael and Eliezer know a lot of stuff, and are able to make connections between the things that they know: seeing which nodes of knowledge are relevant to their beliefs or decisions, or, if that fails, knowing which algorithm they should use to figure out which nodes of knowledge are likely to be relevant. They have all the standard Less Wrong rationality tools too, of course, and a fair amount of heuristics and dispositions that haven't been covered on Less Wrong. But I believe it is this aspect of their rationality, the coherent and cohesive and carefully balanced web of knowledge and belief nodes, that causes people to perceive them as formidable rationalists, of a kind not to be disagreed with lightly.

The common trait of Michael and Eliezer and all top tier rationalists is their drive to really consider the implications and relationships of their beliefs. It's something like a failure to compartmentalize; it's what has led them to develop their specific webs of knowledge, instead of developing one web of beliefs about politics that is completely separate from their webs of belief about religion, or science, or geography. Compartmentalization is the natural and automatic process by which belief nodes or groups of belief nodes become isolated from their overarching web of beliefs, or many independent webs are created, or the threads between nodes are not carefully and precisely maintained. It is the ground state of your average scientist. When Eliezer first read about the idea of a Singularity, he didn't do what I and probably almost anybody in the world would have done at that moment: he didn't think "Wow, that's pretty neat!" and then go on to study string theory. He immediately saw that this was an idea that needed to be taken seriously, a belief node of great importance that necessarily affects every other belief in the web. It's something that I don't have naturally (not that it's either binary or genetic), but it's a skill that I'm reasonably sure can be picked up and used immediately, as long as you have a decent grasp of the fundamentals of rationality (as can be found in the Sequences).

Taking an idea seriously means:

  • Looking at how a new idea fits in with your model of reality and checking for contradictions or tensions that may indicate the need to update a belief, and then propagating that belief update through the entire web of beliefs in which it is embedded. When a belief or a set of beliefs changes, that can in turn have huge effects on your overarching web of interconnected beliefs. (The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience. I can only imagine that it must feel both terrifying and exhilarating.) Failing to propagate that change leads to trouble. Compartmentalization is dangerous.
  • Noticing when an idea seems to be describing a part of the territory where you have no map. Drawing a rough sketch of the newfound territory and then seeing in what ways that changes how you understand the parts of the territory you've already mapped.
  • Not just examining an idea's surface features and then accepting or dismissing it. Instead looking for deep causes. Not internally playing a game of reference class tennis.
  • Explicitly reasoning through why you think the idea might be correct or incorrect, what implications it might have both ways, and leaving a line of retreat in both directions. Having something to protect should fuel your curiosity and prevent motivated stopping.
  • Noticing confusion.
  • Recognizing when a true or false belief about an idea might lead to drastic changes in expected utility.
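The 'web of belief nodes' metaphor above can be made concrete with a short sketch. Everything here is my own toy illustration, not anything from the post: the `BeliefWeb` class, the node names, and the numbers are all invented. The point it shows is mechanical: updating one node forces a recomputation of every node downstream of it, which is exactly the propagation step that compartmentalization skips.

```python
# Toy model of belief propagation through a web of belief nodes.
# Node names and numbers are invented for illustration only.

class BeliefWeb:
    def __init__(self):
        self.prob = {}      # node -> current credence in [0, 1]
        self.children = {}  # node -> list of (child, update_fn) edges

    def depends_on(self, child, parent, fn):
        """Declare that `child`'s credence is recomputed from `parent`'s via fn."""
        self.children.setdefault(parent, []).append((child, fn))

    def set_belief(self, node, p):
        """Set a credence and recursively propagate it to every dependent node."""
        self.prob[node] = p
        for child, fn in self.children.get(node, []):
            self.set_belief(child, fn(p))

web = BeliefWeb()
# Invented dependency chain: credence in near-term AGI moves credence that
# safety work matters, which moves credence that a related career is high-value.
web.depends_on("safety_matters", "agi_soon", lambda p: 0.1 + 0.8 * p)
web.depends_on("career_high_value", "safety_matters", lambda p: 0.9 * p)

web.set_belief("agi_soon", 0.1)   # dismissive: "neat idea", then move on
low = web.prob["career_high_value"]

web.set_belief("agi_soon", 0.7)   # taking the idea seriously
high = web.prob["career_high_value"]
```

A compartmentalized agent would change `agi_soon` and leave `career_high_value` untouched; here the downstream credence moves from roughly 0.16 to roughly 0.59 because the update is forced through the whole chain.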

There are many ideas that should be taken a lot more seriously, both by society and by Less Wrong specifically. Here are a few:

  • Existential risks and the possibilities for methods of prevention thereof.
  • Molecular nanotechnology.
  • The technological singularity (especially timelines and planning).
  • Cryonics.
  • World economic collapse.

Some potentially important ideas that I readily admit to not yet having taken seriously enough:

  • Molecular nanotechnology timelines.
  • Ways to protect against bioterrorism.
  • The effects of drugs of various kinds and methodologies for researching them.
  • Intelligence amplification.

And some ideas that I did not immediately take seriously when I should have:

  • Tegmark's multiverses and related cosmology and the manyfold implications thereof (and the related simulation argument).
  • The subjective for-Will-Newsome-personally irrationality of cryonics.1
  • EMP attacks.
  • Updateless-like decision theory and the implications thereof.
  • That philosophical and especially metaphysical intuitions are not strong evidence.
  • The idea of taking ideas seriously.
  • And various things that I probably should have taken seriously, and would have if I had known how to, but that I now forget because I failed to grasp their gravity at the time.

I also suspect that there are ideas that I should be taking seriously but do not yet know enough about; for example, maybe something to do with my diet. I could very well be poisoning myself and my cognition without knowing it because I haven't looked into the possible dangers of the various things I eat. Maybe corn syrup is bad for me? I dunno; but nobody's ever sat me down and told me I should look into it, so I haven't. That's the problem with ideas that really deserve to be taken seriously: it's very rare that someone will take the time to make you do the research and really think about it in a rational and precise manner. They won't call you out when you fail to do so. They won't hold you to a high standard. You must hold yourself to that standard, or you'll fail.

Why should you take ideas seriously? Well, if you have Something To Protect, then the answer is obvious. That's always been my inspiration for taking ideas seriously: I force myself to investigate any way to help that which I value to flourish. This manifests on both the small and the large scale: if a friend is going to get a medical operation, I research the relevant literature and make sure that the operation works or that it's safe. And if I find out that the development of an unFriendly artificial intelligence might lead to the pointless destruction of everyone I love and everything I care about and any value that could be extracted from this vast universe, then I research the relevant literature there, too. And then I keep on researching. What if you don't have Something To Protect? If you simply have a desire to figure out the world -- maybe not an explicit desire for instrumental rationality, but at least epistemic rationality -- then taking ideas seriously is the only way to figure out what's actually going on. For someone passionate about answering life's fundamental questions to miss out on Tegmark's cosmology is truly tragic. That person is losing a vista of amazing perspectives that may or may not end up allowing them to find what they seek, but at the very least is going to change for the better the way they think about the world.

Failure to take ideas seriously can lead to all kinds of bad outcomes. On the societal level, it leads to a world where almost no attention is paid to catastrophic risks like nuclear EMP attacks. It leads to scientists talking about spirituality with a tone of reverence. It leads to statisticians playing the lottery. It leads to an academia where an AGI researcher who completely understands that the universe is naturalistic and beyond the reach of God fails to realize that this means an AGI could be really, really dangerous. Even people who make entire careers out of an idea somehow fail to take it seriously, to see its implications and how it should move in perfect alignment with every single one of their actions and beliefs. If we could move in such perfect alignment, we would be gods. To be a god is to see the interconnectedness of all things and shape reality accordingly. We're not even close. (I hear some folks are working on it.) But if we are to become stronger that is the ideal we must approximate.

Now, I must disclaim: taking certain ideas seriously is not always best for your mental health. There are some cases where it is best to recognize this and move on to other ideas. Brains are fragile and some ideas are viruses that cause chaotic mutations in your web of beliefs. Curiosity and diligence are not always your friend, and even those with exceptionally high SAN points can't read too much Eldritch lore before having to retreat. Not only can ignorance be bliss, it can also be the instrumentally rational state of mind.2

What are ideas you think Less Wrong hasn't taken seriously? Which haven't you taken seriously, but would like to once you find the time or gain the prerequisite knowledge? Is it best to have many loosely connected webs of belief, or one tightly integrated one? Do you have examples of a fully executed belief update leading to massive or chaotic changes in a web of belief? Alzheimer's disease may be considered an 'update' where parts of the web of belief are simply erased, and I've already listed deconversion as another. What kinds of advantages could compartmentalization give a rationalist?

 


1 I should write a post about reasons for people under 30 not to sign up for cryonics. However, doing so would require writing a post about Singularity timelines, and I really really don't want to write that one. It seems that a lot of LWers have AGI timelines that I would consider... erm, ridiculous. I've asked Peter de Blanc to bear the burden of proof and I'm going to bug him about it every day until he writes up the article.

2 If you snarl at this idea, try playing with this Litany, and then playing with how you play with this Litany: 

If believing something that is false gets me utility,
I desire to believe in that falsity;
If believing something that is true gets me utility,
I desire to believe in that truth;
Let me not become attached to states of belief that do not get me utility.

Comments (257)

Comment author: MarkusRamikin 01 June 2012 02:00:29PM 24 points [-]

I, the author, no longer endorse this post.

Why? Did Will ever explain this?

Comment author: dspeyer 07 February 2011 05:07:08AM 20 points [-]

Don't take ideas seriously unless you can take uncertainty seriously.

Taking uncertainty seriously is hard. Pick a belief. How confident are you? How confident are you that you're that confident?

The natural inclination is to guess way too high on both of those. Not taking ideas seriously acts as a countermeasure to this. It's an over-broad countermeasure, but better than nothing if you need it.

Comment author: PeerInfinity 27 August 2010 09:46:52PM 17 points [-]

Warning: This comment consists mostly of unreliably-remembered anecdotal evidence.

When I read the line "The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience. I can only imagine that it must feel both terrifying and exhilarating.", my immediate emotional reaction was "No!!! You don't want this experience!!! It's terrifying!!! Really terrifying!!!" And I didn't notice any exhilaration when it happened to me. Ok, there were some things that were a really big relief, but nothing I wouldn't consider exhilarating. I guess I'll talk about it some more...

The big, main push of my deconversion happened during exam time, in... what was it? my second year of university? Anyway, I had read Eliezer's writings a few days (weeks? months?) earlier, and had finally gotten around to realizing that yes, seriously, there is no god. At the time, I had long since gotten into the habit of treating my mind's internal dialogue as a conversation with god. And I had grown dependent on my habit of constantly asking god for help with everything I felt insecure about. And I felt insecure about pretty much everything. Especially those university exams. I still remember the terror of sitting in that room, with the exam paper on the desk, knowing that I wasn't as prepared as I should have been, and not having a god to ask for help to make things turn out ok anyway. Noticing myself silently mouthing the words of the prayers just out of habit, and then stopping myself when I realized that it was pointless and probably counterproductive... and then forgetting and unconsciously starting the prayers again, and catching myself again... and that went on for... what was it? days? weeks? months? years?

Anyway, back to the topic... "propagating that belief update through the entire web of beliefs in which it is embedded" isn't just something that you can do all at once and be done with it. If you're updating a core belief, then you're going to constantly find yourself noticing beliefs that need updating. And more often, you'll find yourself not noticing things that need updating, and not finding out about them until you notice some other problem, spend a lot of time tracing back to the cause of it, and then noticing some particular belief or habit that's still having a serious effect on your actions, but that doesn't have any justification in your current belief system, now that the false belief is removed.

And then there's the false positives, of things that you think are still being caused by the incompletely updated belief, but that really aren't...

Anyway, what I'm trying to say is... don't envy people who previously took religion seriously, then realized they were wrong, and then had to go through the long, tedious, terrifying process of updating their entire belief system. Personally, I think that I would have been much better off if I had started with a healthy belief system, rather than having the experience of updating from an extremely unhealthy belief system. Or maybe not, I don't know.

Comment author: Jonathan_Graehl 13 August 2010 10:00:21PM 9 points [-]

It seems like you're vulnerable to time-wasting Doom memes. But perhaps you're aesthetically/heuristically selective about which you take seriously. And perhaps it's this obsessing you do that gives you not just time served in a frenzy of caring, but actually true (and possibly instrumental) ideas as a byproduct.

Comment author: Will_Newsome 13 August 2010 10:27:36PM 4 points [-]

I'm also vulnerable to ideas that seem like they could lead to gaining infinite computing power in finite time. Being a bounded agent means I care only finitely much about infinite utility, but I still look into lots of ways that one could get infinite computing power that I'm sure most people would ignore outright.

I'm not sure what it means to be vulnerable to time-wasting Doom memes. I spend at the very most six hours a day really researching the possibility, probability, and survivability of Doom. Most days I spend 2 hours. I guess I could spend that time learning to play the piano or summat, but that'd feel kinda weak by comparison. And I have all those other hours to learn how to play piano and paint and cook and be awesome at everything. And on top of it seemingly being an extremely good use of my time, it's fun for me as a nerd to be on the forefront of certain kinds of metaphysics and decision theory research.

The kind of Doom memes I take seriously are the ones that seem the most probable, of course. uFAI for instance seems really damn probable. The heuristics I use are the ones I outline in my post above about how to take ideas seriously. If I run an idea through those heuristics, and throw the kitchen sink of Less Wrong rationality techniques at it, then I start to take it rather seriously.

Comment author: Jonathan_Graehl 14 August 2010 05:02:55AM *  0 points [-]

I didn't mean to imply that all such thoughts are a waste, or that any of the usual worries around here are silly. I meant that if you really feel obligated to take seriously claims of alarming differences in utility, that you'd end up wasting time digging through ridiculous religious claims. Clearly it's not the case that you do this.

Comment author: Will_Newsome 17 August 2010 09:11:52AM 2 points [-]

Hm, I wonder how many atheists have taken Pascal's wager seriously. If I'm not confident of the flaws of majoritarianism then failing to Aumann update on the testimony of a billion Christians would seem to be a bad idea. And if I think that the belief of a billion Christians is even small evidence that a Christian god is more likely to control most of the measure of computations that include myself than any other god then the atheist-god wager argument doesn't save me from having to disregard a possibility for infinite utility. But perhaps I forget the stronger arguments against Pascal's wager. At any rate, you're right that I don't go around looking for ridiculous religious claims to worry about, but I'm at least willing to take Pascal's wager a little bit seriously. (Failing to do so can also lead to falling into the Pascal's wager fallacy fallacy.)

Comment author: FAWS 17 August 2010 09:29:39AM *  5 points [-]

You don't just have to worry about one specific atheist-god, but also any jealous gods, any singular god that would consider beliefs about a singular god to be beliefs about themselves and feel insulted by being thought to be like what JHWH is supposed to be like, any god that punishes giving in to imagined blackmail (hell) just to make blackmail less likely, and so on. These aren't symmetric because e.g. anti-jealous gods that reward worship of very different gods, including one particular very jealous god, seem less likely than jealous gods.

Comment author: Will_Newsome 17 August 2010 09:42:43AM 3 points [-]

Hmuh, I'd never exactly thought of thinking about YHWH as a blackmailing simulator AI, but in an ensemble universe that description seems to fit. That's pretty funny. :)

Comment author: Jonathan_Graehl 17 August 2010 05:30:14PM *  1 point [-]

Agreed - this is the usual response, and the one that works for me if I can't quite muster up the confidence to say "0% probability for infinite-torture JHWH (or variation)". I guess you can justify something like p=0 with a combination of: "you haven't defined what you mean by JHWH sufficiently for me to agree or disagree", "ok, you've told me enough that I see JHWH as a logical impossibility". Once a hypothetical god passes those bars, then you need recourse to all the possible god hypotheses. Privileging the Hypothesis is a finite-scale version of the same objection.

Comment author: Wei_Dai 14 August 2010 12:40:42AM 8 points [-]

Does anyone not have any problems with taking ideas seriously? I think I'm in this category because ideas like cryonics, the Singularity, UFAI, and Tegmark's mathematical universe were all immediately obvious to me as ideas to take seriously, and I did so without much conscious effort or deliberation.

You mention Eliezer and Michael Vassar as people good at taking ideas seriously. Do you know if this came to them naturally, or was it a skill they gained through practice?

Comment author: Will_Newsome 14 August 2010 03:06:26AM *  4 points [-]

Does anyone not have any problems with taking ideas seriously?

I recall you asked a similar question near the end of the decision theory workshop. I think that every long term SIAI member has no problem with this skill (though of course there's some variance, and it's hard to know what everyone is thinking; also some are more consistent than others). Outside of SIAI there seem to be far fewer examples, but a few names come to mind. (Wei Dai is one of them.)

You mention Eliezer and Michael Vassar as people good at taking ideas seriously. Do you know if this came to them naturally, or was it a skill they gained through practice?

I have no idea whether it came naturally to Michael Vassar, but he seems to have had this skill for many years at least, judging from various papers and comments I've seen of his that were way ahead of everyone else at the time when it came to identifying the most relevant and critical arguments. It does seem like Eliezer was born with a natural predisposition towards this kind of rationality, if the examples from his childhood and teenage years found in the Sequences are considered reasonably accurate.

Comment author: Wei_Dai 15 August 2010 09:43:41AM 3 points [-]

It seems like this post could use some more empirical data, and you're probably in a good position to gather it. You said that every long term SIAI member has no problem with this skill (which makes sense because if they did have a serious problem with this skill they probably wouldn't have become a long term SIAI member in the first place) but how did they become that way? What kind of things did they find useful for getting better at it?

Comment author: cousin_it 14 August 2010 09:29:12AM *  2 points [-]

For what it's worth, I have a strong injunction against taking ideas seriously. I always seem to want better proofs than are available. This doesn't look like a double standard from inside: I disbelieve in the Singularity only slightly more than I disbelieve in space elevators and fusion power in the near future.

I wonder why you take Tegmark's multiverse seriously. It seems to be the odd one out on your list, an obviously wrong idea. Have they found a workaround for the problem of teacups turning into pheasants?

Comment author: Wei_Dai 14 August 2010 09:58:51AM 6 points [-]

I'm surprised that you weren't aware that I took Tegmark's multiverse seriously, since I mentioned it in the UDT post. It was one of the main inspirations for me coming up with UDT. You can see here a 2006 proto-UDT that's perhaps more clearly based on Tegmark's idea.

Have they found a workaround for the problem of teacups turning into pheasants?

Well, UDT is sort of my answer to that. In UDT you can no longer say "I assign a small probability for observing this teacup turning into a pheasant" but you can still say "I'm willing to bet a large amount of money that this teacup won't turn into a pheasant." See also What are probabilities, anyway? I'm not sure if that answers your question, so let me know.

(You might also be interested in UDASSA, which was an earlier attempt to solve the same problem.)

Comment author: cousin_it 14 August 2010 10:30:43AM *  2 points [-]

This sounds circular to me. Why are you willing to bet a large amount of money that this teacup won't turn into a pheasant? Why do we happen to have a "preference" for a highly ordered world?

Comment author: Wei_Dai 15 August 2010 11:42:21AM 4 points [-]

Why do we happen to have a "preference" for a highly ordered world?

One approach to answering that question is the one I gave here. Another possibility is that there is something like "objective morality" going on. Another one is that our preferences are simply arbitrary and there is no further explanation.

So I think this is still an open question, but there's probably an answer one way or another, and the fact that we don't know what the right answer is yet shouldn't count against Tegmark's idea. Furthermore, I think denying Tegmark's idea only leads to more serious problems, like why does one universe "exist" and not another, and how do we know that one universe exists and not two or three?

Comment author: cousin_it 15 August 2010 02:17:48PM *  0 points [-]

There may be a grain of truth in this kind of theory, but I cannot see it clearly yet. How exactly do you separate statements about the mind ("probability as preference") from statements about the world? What about bunnies, for example? Bunnies aren't very smart, but their bodies seem evolved to make some outcomes more probable than others, in perfect accord with our idea of probability. The same applies to plants, that have no brains at all. Did evolution decide very early on that all life should use our particular "random" concept of preference? (How is it encoded in living organisms, then?) Or do you have some other mechanism in mind?

Comment author: Vladimir_Nesov 15 August 2010 03:20:37PM *  1 point [-]

The shared traits come from shared evolution, that operates in the context of our physics and measure of expected outcomes. The concept of expectation implies evolution (given some other conditions), and evolution in its turn makes organisms that respect the concept of expectation (that is, persist within evolution, get selected).

Comment author: cousin_it 15 August 2010 03:22:47PM *  1 point [-]

If you believe in "measure of expected outcomes", there's no problem. Wei was trying to dissolve that belief and replace it with preference encoded in programs, or something. What do you think about this now?

To make it more pithy: are there, somewhere in the configuration space of our universe, evolved pointy-eared humanoids that can solve NP-complete problems quickly because they don't respect the Born probabilities? Are they immune to "spontaneous existence failure", from their own point of view?

Comment author: Vladimir_Nesov 15 August 2010 03:29:49PM *  2 points [-]

What do you mean by "believe"? To refer to the concept of evolution (as explanation for plants and bunnies), you have to refer to the world, and not just the world, but the world equipped with measure (quantum mechanical measure, say). Without that measure, evolution doesn't work, and the world won't behave as we expect it to behave. After that is understood, it's not surprising that evolution selected organisms that respect that measure and not something else.

So, I'm not assuming measure additionally, the argument is that measure is implicit in your very question.

The NP-solving creatures won't be in our universe in the sense that they don't exist in the context of our universe with its measure. When you refer to our universe, you necessarily reference measure as part. It's like a fundamental law, a necessary part of specification of what you are talking about.

Comment author: cousin_it 15 August 2010 03:44:20PM *  1 point [-]

When you refer to our universe, you necessarily reference measure as part.

Um, no. I don't know of any fundamental dynamical laws in QM that use measure. You can calculate the evolution of the wavefunction without mentioning measure at all. It only appears when we try to make probabilistic predictions about our subjective experience. You could equip the same big evolving wavefunction with a different measure, and get superintelligent elves. Or no?

Comment author: Vladimir_Nesov 15 August 2010 01:54:01PM *  2 points [-]

Why do we happen to have a "preference" for a highly ordered world?

Evolution happened in that ordered world, and it built systems that are expected (and hence, expect) to work in the ordered world, because working in ordered world was the criterion for selecting them in that ordered world in the past. In order to survive/replicate in an ordered world (narrow subset of what's possible), it's adaptive to expect ordered world.

Comment author: Vladimir_Nesov 14 August 2010 10:02:26AM *  0 points [-]

...which seems to be roughly the same "reality is a Darwinian concept" nonsense as what I came up with (do you agree?). You can still assign probabilities though, but they are no longer decision-theoretic probabilities.

Comment author: knb 26 August 2010 06:24:55AM *  6 points [-]

Now, I must disclaim: taking certain ideas seriously is not always best for your mental health.

I'm highly skeptical of these claims of things that are true but predictably make you insane. Are you sure you aren't just coddling yourself, protecting yourself from having to change your mind? More to the point, that sounds like a pretty good memetic adaptation to protect current beliefs. "I've always held that X is false. Surely if I came to believe that X was true I would go insane or become evil! Therefore X is false!"

Once upon a time I would have thought that accepting the fact that there is no ultimate justice in the universe would drive me insane or lead to depression. Yet I have accepted that fact, and I'm as happy as ever. (Happiness set points are totally unfair, but they're good for some things.)

Comment author: jimrandomh 28 August 2010 06:15:30PM 2 points [-]

Now, I must disclaim: taking certain ideas seriously is not always best for your mental health.

I'm highly skeptical of these claims of things that are true but predictably make you insane. Are you sure you aren't just coddling yourself, protecting yourself from having to change your mind?

I doubt that there are any true things that can predictably make anyone go insane, but something tailored to a specific person could. And there are some statement+person-type pairs that seem to reliably damage mental health, so there's at least some real danger there. Of course, these are very rare, so the prior probability for an idea being harmful should be very low, and it shouldn't be considered as a possibility without strong external evidence (such as examples of it happening to other people; a mere intuitive judgment would not be sufficient evidence).

Comment author: Will_Newsome 17 September 2010 09:11:55AM 3 points [-]

Right, and importantly it's not just clinical insanity that is damaging. When you're working on a hard and important problem, any type of mental irregularity or paralysis is potentially very harmful. I suppose most people don't do the kinds of research where this is a real concern, but some do, and I figured it was important to address the small population of LW that might take one idea too many seriously and end up needlessly paranoid/depressed/etc because of a foolhardy desire to be completely 'rational'. The Litanies of Tarski and Gendlin have exceptions, but those exceptions should not justify excuses. If you do not heed the exceptions you won't be in a state to excuse yourself: the inferential distance would be too dangerous and too large.

Comment author: xamdam 13 August 2010 05:41:23PM *  16 points [-]

What are ideas you think Less Wrong hasn't taken seriously?

I think LW as a whole (but not some individuals) ignored practical issues of cognitive enhancement.

From outside-in:

  • Efficient learning paths. Sequences are great, but there is a lot of stuff to learn from books, and would be great to have dependencies mapped out with the best materials for things like physics, decision theory, logic, CS stuff.

  • Efficient learning techniques: there are many interesting ideas out there, such as SuperMemo and speed reading, but I do not have time to experiment with them all.

  • Hardware tools. I feel more closely integrated with information with the iPhone/iPad; if reasonable eyewear comes to market this will be much enhanced.

  • N-back and similar.

  • Direct input via brainwaves/subvocalisation.

  • Pharmacological enhancement.

  • Real BCIs, which are starting to come to market, serving disabled people.

Even if these tools do not lead to a Singularity (my guess), they might give an edge to FAI researchers.

Comment author: Jonathan_Graehl 13 August 2010 10:04:40PM *  7 points [-]

dual n-back: for the past month, I've spent 2-5 minutes most days on it.

I can do dual 4-back with 95%+ accuracy and 5-back with 60%, and I've likely plateaued (naturally, my skill rapidly improved at first). I enjoy it as "practice focusing on something", but haven't noticed any evidence of any general improvement in memory or other mental abilities. I plan on continuing the habit indefinitely.
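(For anyone unfamiliar with the task: on each trial of dual n-back you see a square light up in a 3x3 grid and hear a letter, and must judge whether each matches the stimulus from n trials back. A minimal sketch of that bookkeeping, with an illustrative letter set of my own choosing, not taken from any particular implementation:)

```python
import random

POSITIONS = list(range(9))  # cells of the 3x3 grid
LETTERS = "CHKLQRST"        # illustrative set; real implementations vary

def make_trials(num_trials, rng=random):
    """Generate a random stimulus stream of (grid position, letter) pairs."""
    return [(rng.choice(POSITIONS), rng.choice(LETTERS))
            for _ in range(num_trials)]

def matches(trials, n):
    """For each trial after the first n, report (position match, letter match)
    against the stimulus n steps back -- the two judgments a dual n-back
    player must make on every trial."""
    return [(trials[i][0] == trials[i - n][0],
             trials[i][1] == trials[i - n][1])
            for i in range(n, len(trials))]
```

Scoring a session is then just comparing the player's yes/no responses against `matches(trials, n)`; the entire difficulty knob is `n`.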

Comment author: Will_Newsome 13 August 2010 10:08:43PM 5 points [-]

After doing 100 trials of dual N back stretched over a week (mostly 4 back) I noticed that I felt slightly more conscious: my emotions were more salient, I enjoyed simple things more, and I just felt generally more alive. There were tons of free variables for me, though, so I doubt causation. Did you notice anything similar?

Comment author: Kutta 14 August 2010 06:25:41PM *  5 points [-]

A collection of anecdotal evidence from players is available in Gwern's great n-back FAQ.

I played for some two months earlier this year and my max level was 8. I haven't really noticed anything, but since I took no tests before or after the training I can't really say anything firm about it. The experience of getting better at n-back is exhilarating and bewildering enough that I plan to resume playing soon. I mean, at the earlier levels I often felt intensely that a certain next level I had just reached was physically impossible to beat, and behold, after a few days it seemed manageable, and after a week or so, trivial. All of this without any conscious learning process taking place, or any strategy coalescing. It's an especially unadulterated example of how a brain that is getting rewired feels from the inside.

Comment author: gwern 25 August 2010 04:59:42AM 5 points [-]

Yes, I know the same feeling (and have remarked on it once or twice on the DNB ML) - it's very strange how over a day or two one can suddenly jump 10 or 20% on a level and have the feeling that, e.g., suddenly D4B is clear and comprehensible, while before only D3B was and D4B was a murky mystery one had difficulty keeping in one's head.

On the other hand - D8B? Dammit! I've been at n-backing for something like 2 years now, and have been stuck on D4B for months. You, Jonathan, and Will just go straight to D4B or D8B within a few months with ease. I must be doing something wrong.

(On a sidenote, as in the FAQ, I ask people for their negative or null reports as well as their positive ones. This thread is unusual in 2 null reports to 1 positive, but I'm sure there are more LWers who've tried!)

Comment author: steven0461 25 August 2010 07:23:52AM *  1 point [-]

I did maybe 10-15 half-hour sessions of mostly D5B-D6B last year over the course of a few weeks and didn't notice any effects.

Comment author: gwern 25 August 2010 07:56:12AM *  0 points [-]

Thanks; I've added it.

Comment author: Normal_Anomaly 24 November 2011 01:20:10AM 0 points [-]

All the links to your FAQ in this thread are broken. Does the FAQ still exist?

Comment author: gwern 24 November 2011 01:23:52AM 0 points [-]

Oh sure, it's just that I finally built a real website (as opposed to continuing to abuse Haskell.org's free hosting): http://www.gwern.net/DNB%20FAQ

Needless to say, it's been expanded a lot since then.

Comment author: Jonathan_Graehl 13 August 2010 10:14:03PM 4 points [-]

Sure. I tried a bunch of things at once, with the purpose of feeling and thinking better. Collectively, they worked. However, this means that I have probably just acquired a bunch of ungrounded superstitions.

I've recorded what I did but haven't learned anything from that data other than: I am unlikely to ever continue a daily practice of either napping or meditating.

I would speculate that dual n-back is a repetitive and simple enough* stimulus that it's likely to offer whatever "self-awareness" benefits I felt in meditating.

  • it's simple in coarse physical terms; obviously the actual sequences are randomly varied
Comment author: Will_Newsome 13 August 2010 06:55:27PM 2 points [-]

Do you know of a good online resource that I could use to get a fuller picture of the different approaches to cognitive enhancement? I've used Anders Sandberg's page before, but I imagine it's rather out of date.

Comment author: xamdam 13 August 2010 07:06:38PM 0 points [-]

Unfortunately no. I suspect Michael Vassar might have some ideas, if you can get him to write something up. Otherwise hoping someone else chips in.

Comment author: Will_Newsome 13 August 2010 07:26:26PM 1 point [-]

I believe he did send me a hugeeeee folder of IA-related papers and such at some point (or Justin Shovelain forwarded it to me). I'll try to find it.

Comment author: xamdam 13 August 2010 07:47:06PM 1 point [-]

Yes, please share what you can. Or forward to znkxurfva@tznvy.pbz if possible.

Comment author: Will_Newsome 13 August 2010 07:58:28PM 1 point [-]

Alright, I can't find what I was looking for, but after the Singularity Summit I'll see what kinds of resources I can get from Vassar and Justin.

Comment author: ciphergoth 14 August 2010 08:39:31AM 7 points [-]

This would be worth a top level post by itself, wouldn't it?

Comment author: simplicio 15 August 2010 02:11:53AM 1 point [-]

I'd appreciate being cut in too! My e-mail is ispollock [at] gmail.com

Comment author: Randaly 15 August 2010 05:52:11PM 0 points [-]

Could you email it to nojustnoperson [at] gmail.com, too?

Comment author: xamdam 15 August 2010 04:58:43PM 0 points [-]

BTW, some of the Englebart-related posts seem relevant here.

Comment author: ChristianKl 15 August 2010 01:03:37PM 4 points [-]

For existential risks we would probably benefit from having a wiki where we list all the risks and everyone can add information. At the moment there doesn't seem to be a space that really centers our knowledge of them.

Comment author: Will_Newsome 17 September 2010 09:20:17AM 3 points [-]

Strongly seconded, but is it reasonable to expect this to happen spontaneously? I'm pretty sure SIAI lacks the human resources to do this. Less Wrongers could do this but only if the site was reasonably well-seeded first and had a moderately memorable url, good hosting, et cetera. We have a hard enough time with the LW wiki. Even then it would require at least a few dedicated contributors to avoid falling into decay. The catastrophic risks movement seriously lacks in human capital.

Comment author: amcknight 22 December 2011 09:16:07PM 1 point [-]

I think it would be easier and even more valuable to simply do this on wikipedia. The only downside is that we might not be able to reference as many LessWrong articles and concepts as we might like to.

Comment author: rwallace 13 August 2010 05:38:46PM 26 points [-]

Human thought is by default compartmentalized for the same good reason warships are compartmentalized: it limits the spread of damage.

A decade or thereabouts ago, I read a book called Darwin's Black Box, whose thesis was that while gradual evolution could work for macroscopic features of organisms, it could not explain biochemistry, because the intricate molecular machinery of life did not have viable intermediate stages. The author is a professional biochemist, and it shows; he's really done his homework, and he describes many specific cases in great detail and carefully sets out his reasons for claiming gradual evolution could not have worked.

Oh, and I was able to demolish every one of his arguments in five minutes of armchair thought.

How did that happen? How does a professional put so much into such carefully constructed arguments that end up being so flimsy a layman can trivially demolish them? Well I didn't know anything else about the guy until I ran a Google search just now, but it confirms what I found, and most Less Wrong readers will find, to be the obvious explanation.

If he had only done what most scientists in his position do, and said "I have faith in God," and kept that compartmentalized from his work, he would have avoided a gross professional error.

Of course that particular error could have been avoided by being an atheist, but that is not a general solution, because we are not infallible. We are going to end up taking on some mistaken ideas; that's part of life. You cite the Singularity as your primary example, and it is a good one, for it is a mistaken idea, and one that is immensely harmful if not compartmentalized. But really, it seems unlikely there is a single human being of significant intellect who does not hold at least one bad idea that would cause damage if taken seriously.

We should think long and hard before we throw away safety mechanisms, and compartmentalization is one of the most important ones.

Comment author: jimmy 17 August 2010 04:47:32AM *  7 points [-]

That's the idea behind Reason as memetic immune disorder.

Sure, compartmentalization can protect you from your failures, but it also protects you from your successes.

If you can understand Reason as memetic immune disorder, you should also be able to get to the level of taking this into account. That is, think about how there is a long history of failures to compartmentalize causing damage - a history of people making mistakes - and ask yourself if you're still confident enough to act on it.

Comment author: FAWS 13 August 2010 06:06:53PM 16 points [-]

Compartmentalized ships would be a bad idea if small holes in the hull were very common and no one bothered with fixing them as long as they affected only one compartment.

Comment author: Oscar_Cunningham 13 August 2010 06:34:42PM 15 points [-]

It seems like he had one-way decompartmentalisation, so that his belief in God was weighing on his science but not the other way round.

Comment author: timtyler 13 August 2010 08:59:13PM *  0 points [-]

The author was an idiot. I too found the fatal flaw in about five minutes - in a bookshop.

IMO, the mystery here is not the author's fail, but how long the "evolution" fans banged on about it for - explaining the mistake over and over and over again.

Comment author: [deleted] 14 August 2010 02:41:00PM 9 points [-]

If you don't read creationists, it looks like there aren't any, and it looks like "evolution fans" are banging on about nothing. But, in reality, there are creationists, and they were also banging on in praise of the book. David Klinghoffer, for instance (prominent creationist with a blog.)

Comment author: JoshuaZ 13 August 2010 09:11:20PM *  10 points [-]

IMO, the mystery here is not the author's fail, but how long the "evolution" fans banged on about it for - explaining the mistake over and over and over again.

Because lots of people (either not as educated or not as intelligent) didn't realize how deeply flawed the book was. And when someone is being taken seriously enough to be an expert witness in a federal trial, there's a real need to respond. Also, there were people like me who looked into Behe's arguments in detail simply because it didn't seem likely that someone with his intelligence and education would say something so totally lacking in a point, so the worry was that one was missing something. Of course, there's also the irrational but highly fun aspect of tearing arguments into little tiny pieces. Finally, there's the other irrational aspect that Behe managed to provoke lots of people by being condescending and obnoxious (see for example his exchange with Abbie Smith, where he essentially said that no one should listen to her because he was a prof and she was just a lowly grad student).

Comment author: timtyler 13 August 2010 09:34:16PM *  0 points [-]

Re: "there's also the irrational but highly fun aspect of tearing arguments into little tiny pieces"

I think that was most of it - plus the creationists were on the other side, and they got publicly bashed for a long time.

I was left wondering why so many intelligent people wasted so much energy and time on such nonsense for so long.

Dawkins and Dennett have subsequently got into the god bashing. What a waste of talent that is. I call it their "gutter outreach" program.

Comment author: JoshuaZ 14 August 2010 04:28:11PM 7 points [-]

Dawkins and Dennett have subsequently got into the god bashing. What a waste of talent that is. I call it their "gutter outreach" program.

Standard beliefs in deities are often connected with a memetic structure that directly encourages irrationalism. Look at the emphasis on "faith" and on mysterious answers. If one is interested in improving rationality, removing the beliefs that directly encourage irrationality is an obvious tactic. Religious beliefs are also responsible for a lot of deaths and resources taken up by war and similar problems. Removing those beliefs directly increases utility. Religion is also in some locations (such as much of the US) functioning as a direct barrier to scientific research and education (creationism and opposition to stem cell research are good examples). Overall, part of why Dawkins has spent so much time dealing with religion seems to be that he sees religion as a major barrier for people actually learning about the interesting stuff.

Finally, note that Dawkins has not just spent time dealing with religious beliefs. He's criticized homeopathy, dowsing, various New Age healing ideas, and many other beliefs.

Comment author: timtyler 14 August 2010 06:43:23PM *  -1 points [-]

I figure those folk should be leading from the front, not dredging the guttering.

Anyone can dispense with the ridiculous nonsense put forth by the religious folk - and they do so regularly.

If anything, Dennett and Dawkins add to the credibility of the idiots by bothering to engage with them.

If the religious nutcases' aim was to waste the time of these capable science writers - and effectively take them out of productive service - then it is probably "mission accomplished" for them.

Comment author: JoshuaZ 15 August 2010 04:31:58PM 4 points [-]

those folk should be leading from the front, not dredging the guttering.

So what would constitute leading from the front in your view?

If the religious nutcases' aim was to waste the time of these capable science writers - and effectively take them out of productive service - then it is probably "mission accomplished" for them.

But there are a lot of science writers now. Carl Zimmer and Rebecca Skloot would be two examples. And the set of people who read about science is not large. If getting people to stop having religious hangups with science will make a larger set of people reading such material how is that not a good thing?

Comment author: timtyler 15 August 2010 04:38:52PM *  0 points [-]

I was much happier with what they were doing before they got sucked into the whirlpool of furious madness and nonsense. Well, "Freedom Evolves" excepted, maybe.

If getting people to stop having religious hangups with science will make a larger set of people reading such material how is that not a good thing?

Your question apparently presumes falsehoods about my views :-(

Comment author: JoshuaZ 15 August 2010 04:46:16PM 0 points [-]

Your question apparently presumes falsehoods about my views :-(

Clarify please? What presumptions am I making that are not accurate?

Comment author: Perplexed 15 August 2010 05:36:15PM 3 points [-]

If I may attempt an interpretation, Tim is saying that the Great Minds should be busy thinking Great Thoughts, and that they should leave the swatting of religious flies to us lesser folk.

Comment author: timtyler 15 August 2010 05:21:59PM *  1 point [-]

Uh, I never claimed that getting people to stop having religious hangups was not a good thing in the first place.

Comment author: AnnaSalamon 17 September 2010 06:03:21PM 1 point [-]

I replied to your comment here.

Comment author: Grognor 22 March 2012 01:25:41PM 0 points [-]

Human thought is by default compartmentalized for the same good reason warships are compartmentalized:

I'm going to ask you to recall your 2010 self now, and ask if you were actually trying to argue for a causal relationship that draws an arrow from the safety of compartmentalization to its existence. This seems wrong. It occurs to me that if you're evolution, and you're cobbling together a semblance of a mind, compartmentalization is just the default state, and it doesn't even occur to you (because you're evolution and literally mindless) to build bridges between parts of the mind.

Comment author: TheOtherDave 22 March 2012 01:54:20PM 0 points [-]

Well, even if we agree that compartmentalized minds were the first good-enough solution, there's a meaningful difference between "there was positive selection pressure towards tightly integrated minds, though it was insufficient to bring that about in the available time" and "there was no selection pressure towards tightly integrated minds" and "there was selection pressure towards compartmentalized minds".

Rwallace seems to be suggesting the last of those.

Comment author: Grognor 22 March 2012 02:06:00PM *  1 point [-]

Point, but I find the middle of your three options most plausible. Compartmentalization is mostly a problem in today's complex world; I doubt it was even noticeable most of the time in the ancestral environment. False beliefs e.g. religion look like merely social, instrumental, tribal-bonding mental gestures rather than aliefs.

Comment author: TheOtherDave 23 March 2012 04:00:25AM 0 points [-]

Yeah, I dunno. From a systems engineering/information theory perspective, my default position is "Of course it's adaptive for the system to use all the data it has to reason with; the alternative is to discard data, and why would that be a good idea?"

But of course that depends on how reliable my system's ability to reason is; if it has failure modes that are more easily corrected by denying it certain information than by improving its ability to reason efficiently with that data (somewhat akin to programmers putting input-tests on subroutines rather than writing the subroutines to handle that kind of input), evolution may very well operate in that fashion, creating selection pressure towards compartmentalization.

Or, not.
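(The parenthetical programming analogy can be made concrete with a hypothetical toy example, not anyone's actual code: one routine rejects input it wasn't built for, the other tries to cope with everything it's given.)

```python
def mean_strict(xs):
    """'Input-test' style: refuse data the routine wasn't designed to handle."""
    if not xs:
        raise ValueError("empty input")
    if any(not isinstance(x, (int, float)) for x in xs):
        raise TypeError("numeric input only")
    return sum(xs) / len(xs)

def mean_robust(xs):
    """'Handle everything' style: coerce what it can and drop the rest --
    a more complex routine that can also fail silently."""
    nums = []
    for x in xs:
        try:
            nums.append(float(x))
        except (TypeError, ValueError):
            pass
    return sum(nums) / len(nums) if nums else 0.0
```

The first is cheaper to get right; the second degrades gracefully but hides its own mistakes - which mirrors the trade-off between denying a cognitive module information and making it smart enough to cope with that information.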

Comment author: Dmytry 23 March 2012 04:19:27AM *  -1 points [-]

What about facts from the environment - is it good to gloss over the applicability of something you observed in one context to another context? Compartmentalization may look like a good idea when you are spending over a decade putting an effective belief system into children. It doesn't look so great when you have to process data from the environment. We even see correlations where there aren't any.

Compartmentalizing information may look great if the crew of the ship is engaging in pointless idle debates over the intercom. Not so much when they need to coordinate actions.

Comment author: TheOtherDave 23 March 2012 04:38:58AM 0 points [-]

I'm not sure I'm understanding you here.

I agree that if "the crew" (that is, the various parts of my brain) are sufficiently competent, and the communications channels between them sufficiently efficient, then making all available information available to everyone is a valuable thing to do. OTOH, if parts of my brain aren't competent enough to handle all the available information in a useful way, having those parts discard information rather than process it becomes more reasonable. And if the channels between those parts are sufficiently inefficient, the costs of making information available to everyone (especially if sizable chunks of it are ultimately discarded on receipt) might overcome the benefits.

In other words, glossing over the applicability of something I observed in one context to another context is bad if I could have done something useful by not glossing over it, and not otherwise. Whether that was reliably the case for our evolutionary predecessors in their environment, I don't know.

Comment author: Dmytry 23 March 2012 05:52:12AM *  0 points [-]

Well, one can conjecture counterproductive effects of intelligence in general, and of any of its aspects in particular, and sure there were a few, but it stands that we did evolve intelligence. Keep in mind that without a highly developed notion of verbal 'reasoning' you may not be able to have the ship flooded with abstract nonsense in the first place. The stuff you feel tracks the probabilities.

Comment author: TheOtherDave 23 March 2012 02:53:11PM *  0 points [-]

Can you clarify the relationship between my comment and counterproductive effects of intelligence in general? I'm either not quite following your reasoning, or wasn't quite clear about mine.

A general-purpose intelligence will, all things being equal, get better results with more data.

But we evolved our cognitive architecture not in the context of a general-purpose intelligence, but rather in the context of a set of cognitive modules that operated adaptively on particular sets of data to perform particular functions. Providing those modules with a superset of that data might well have gotten counterproductive results, not because intelligence is counterproductive, but because they didn't evolve to handle that superset.

In that kind of environment, sharing all data among all cognitive modules might well have counterproductive effects... again, not because intelligence is counterproductive, but because more data can be counterproductive to an insufficiently general intelligence.

Comment author: Dmytry 23 March 2012 03:06:12PM *  0 points [-]

The existence of evolved 'modules' within the frontal cortex is not settled science and is in fact controversial. It's indeed hard to tell how much data we share, though. Maybe without the habit of abstract thought, not so much. On the other hand, the data about human behaviour seems important.

Comment author: Dmytry 22 March 2012 01:34:37PM *  -1 points [-]

The default state is that anything which is never linked to limb movement or other outputs might as well not exist in the first place.

I think the issue with compartmentalization is that integration of beliefs is a background process that ensures a coherent response, whereby one part of the mind would not come up with one action and another part with a different one, which would make you e.g. drive a car into a tree because one part of the brain wants to turn left and another wants to turn right.

The compartmentalization of information is anything but safe. When you compartmentalize, e.g., your political orientation from your logical thinking, I can make you do either A or B by presenting the exact same situation in either a political or a logical way, so that one of the parts activates and arrives at either action A or action B. That is not safe. That is "it gets you eaten one day" unsafe.

And if you compartmentalize the decision making on a warship, it will fail to coordinate the firing of its guns and will be sunk, even if it can take more holes. Consider a warship that is being attacked by several enemies. If you don't coordinate the firing of torpedoes, you'll have overkill fire at some of the ships, wasting firepower, and you'll be sunk. This is a known issue in RTS games: you can beat a human with a pretty dumb AI if it simply coordinates fire between units better.
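(The overkill point can be made concrete with a toy volley planner - hypothetical numbers and function names, purely to illustrate the coordination gain:)

```python
def coordinated_volley(unit_damages, target_hps):
    """Greedy fire coordination: each unit fires at the first target whose
    projected HP (counting shots already assigned this volley) is still
    positive, so no shot is spent on an already-doomed target."""
    projected = list(target_hps)
    assignment = []
    for dmg in unit_damages:
        for i, hp in enumerate(projected):
            if hp > 0:
                projected[i] -= dmg
                assignment.append(i)
                break
        else:
            assignment.append(None)  # every target already doomed
    return assignment, projected

def uncoordinated_volley(unit_damages, target_hps):
    """Every unit independently fires at target 0: massive overkill there,
    while the remaining targets go untouched."""
    remaining = list(target_hps)
    remaining[0] -= sum(unit_damages)
    return remaining
```

With three 10-damage units against two 15-HP targets, the coordinated volley kills one target and wounds the other, while the uncoordinated one dumps all 30 damage into a single 15-HP target and leaves the second at full health.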

The biologist in the example above is a single cherry-picked example. For the majority of scientists the process has worked correctly: they either stopped believing that God created animals, or they failed to integrate their beliefs and are ticking time bombs with respect to producing bad hypotheses. He is an edge case between the atheists and the believers.

Comment author: Grognor 22 March 2012 01:53:43PM 0 points [-]

The compartmentalization of information is anything but safe.

I agree in most cases; however, there are some cases where ideas are very Big and Scary and Important where a full propagation through your explicit reasoning causes you to go nuts. This has happened to multiple people on Less Wrong, whom I will not name for obvious reasons.

I would like to emphasize that I agree in most cases. Compartmentalization is bad.

Comment author: Dmytry 22 March 2012 02:04:24PM *  0 points [-]

I think it happens due to ideas being wrong and/or being propagated incorrectly. Basically, you would need extremely high confidence in a very big and scary idea before it can overwrite anything. MWI is very big and scary. Provisionally, before I develop a moral system based on MWI, it is perfectly consistent to assume that it has some probability q of being wrong, that the relative morality of actions, unknown under MWI and known under SI, does not change, and that consequently no moral decision (involving comparison of moral values) changes before there is a high-quality moral system based on MWI. As a quick hack, a moral system based on MWI is likely to be considerably incorrect and to lead to rash actions (e.g. quantum suicide, which actually turns out to be as bad as normal suicide after you figure the stuff out).

The ship is compartmentalized against holes in the hull, not against something great happening to it. An incorrect idea held with high confidence can be a hole in the hull; the water would be the resulting nonsense overriding the system.

Comment author: JoshuaZ 13 August 2010 09:00:42PM *  3 points [-]

At the societal level, it leads to a world where almost no attention is paid to existential risks like EMP attacks.

How is an EMP attack an existential risk? EMPs, even large ones, are largely limited by line-of-sight. You can't EMP more than a continent in the most extreme circumstance. Large scale methods of making EMPs are either nukes or flux compression generators. The first provides more direct risk from targeting population centers. The second has a really cool name but isn't very practical and can't produce EMPs as large as a nuke. What am I missing?

Comment author: Will_Newsome 13 August 2010 10:12:16PM *  2 points [-]

I was thinking nuclear EMPs which are very dangerous, but you're right to say they aren't of themselves existential risks; merely catastrophic ones.

Edited post to reflect your criticism.

Comment author: [deleted] 13 August 2010 07:33:01PM *  5 points [-]

Compartmentalization is, in part, an architectural necessity - making sure one's beliefs are all consistent with each other is a computationally intractable problem (I recall reading somewhere that the entire computational capacity of the universe is only sufficient to determine the consistency of, at most, 138 propositions).
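(The intractability is easy to see in a brute-force sketch; the 138 figure is as the parent half-remembers it, and the function below is only an illustration of why the cost is exponential, not a practical checker:)

```python
from itertools import product

def consistent(propositions, constraints):
    """Brute-force consistency check: enumerate every truth assignment and
    see whether at least one satisfies all constraints (each constraint is
    a function from an assignment dict to bool). The loop runs 2**n times
    for n propositions; at n = 138 that is roughly 3.5e41 truth-table rows,
    which is where estimates like the one above come from."""
    for values in product([True, False], repeat=len(propositions)):
        assignment = dict(zip(propositions, values))
        if all(check(assignment) for check in constraints):
            return True
    return False
```

For example, the belief set {p implies q, p, not q} has no satisfying assignment, so it is inconsistent; drop "not q" and it becomes consistent.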

Comment author: amcknight 22 December 2011 09:23:06PM 1 point [-]

Working with OWL ontologies and other semantic web technologies eventually makes this very clear: deductive reasoning is not scalable. But there are probably different levels/types of consistency that could be handled by brains like ours. A simple example: heuristics that tend to bring to one's attention the beliefs that are most difficult to reconcile.

Comment author: JohannesDahlstrom 16 August 2010 03:00:22PM *  0 points [-]

In the worst case scenario, with very pathological propositions.

Even though the various important satisfiability problems are known to be in NP, there are known algorithms for those problems that are polynomial-time for almost all "interesting" inputs.

Comment author: JanetK 13 August 2010 05:51:12PM 6 points [-]

I would not question what you are taking seriously and it seems fairly typical of the LW group.

On the other hand, I am surprised that climate change is rarely or never mentioned on LW. The loss of biodiversity and the rate of extinction - ditto. We are going through a biological crisis. It is bad enough that a 'world economic collapse' might even be a blessing in the long term.

You do not mention the neuroscience revolution but I am sure I have noticed some of the LW group taking it seriously.

This may be the place to mention cryonics without starting another riot. It is boring to me and I do not take it seriously - but I have reasons, and I am not judging others who are not in my situation. (1) I had cancer when I was very young, in the olden days when almost everyone died of it. I waited for years for it to reappear. I got used to my mortality. Now I am 70 and quite comfortable with death within the next decade or two. (2) I am very poor, living on a small pension. I could not pay for it if I wanted to. (3) I don't believe that being brought back to life is only a technical question. I think that future generations will not actually value the preserved bodies, because they will not value what these people know or can do. They may want some of the famous people alive today, but they will not value someone as ordinary as myself. (4) I do not want to be without a body as some disembodied brain, or to be in a damaged body. I doubt that I would be happy as an immortal.

Comment author: thomblake 13 August 2010 05:53:26PM 7 points [-]

N.B. There is no place on Less Wrong where you can post that you are not signed up for cryonics and not have people immediately pester you about it. Especially if you are old or dying.

Comment author: ciphergoth 14 August 2010 08:40:37AM 5 points [-]

I would not pester someone who was old or dying about cryonics unless I had reason to believe they also had a spare $30,000 in the bank.

Comment author: Vladimir_M 13 August 2010 09:04:25PM *  13 points [-]

JanetK:

The loss of biodiversity and the rate of extinction - ditto. We are going through a biological crisis. It is bad enough that a 'world economic collapse' might even be a blessing in the long term.

Setting aside the more complex issue of climate change for the moment, I'd like to comment specifically on this part. Frankly, it has always seemed to me that alarmism of this sort is based on widespread popular false beliefs and ideological delusions, and that people here are simply too knowledgeable and rational to fall for it.

When it comes to the "loss of biodiversity," I have never seen any coherent argument why the extinction of various species that nobody cares about is such a bad thing. What exact disaster is supposed to befall us if various exotic and obscure animals and plants that nobody cares about are exterminated? If a particular species is useful for some concrete purpose, then someone with deep enough pockets can easily be found who will invest into breeding it for profit. If not, who cares?

Regarding the preservation of wild nature in general, it seems to me that the modern fashionable views are based on some awfully biased and ignorant assumptions. People nowadays imagine that wild nature is some delicate and vulnerable system that will collapse like a house of cards as soon as humans touch it. Whereas in reality, wild nature is not only extremely resilient, but also tends to grow and spread extremely fast, and humans in fact have to constantly invest huge amounts of labor just to prevent it from reconquering the spaces they have cleared up to build civilization.

Comment author: wedrifid 14 August 2010 08:09:36AM 4 points [-]

If a particular species is useful for some concrete purpose, then someone with deep enough pockets can easily be found who will invest into breeding it for profit. If not, who cares?

Me. And I'm not alone. Many humans do value the preservation of significant elements of biodiversity that don't have any 'concrete', objective value. This is arbitrary only in the sense that any terminal value is arbitrary. I suggest that it is not nearly as 'easy' to find someone to preserve a given part of the commons, even when that part would be considered valuable by the extrapolated volition of the population.

Comment author: teageegeepea 14 August 2010 12:46:53AM 3 points [-]

As George Carlin once said, "The earth will manage just fine. We're the ones who are fucked!" The fact that nature will endure is not that reassuring to any particular apex predator.

I agree though that "biodiversity" needs some backup arguments for us to care about it.

Comment author: KrisC 14 August 2010 07:55:29AM 8 points [-]

Why is biodiversity important?

Protection from disease: When there are a variety of species, a single pathogen is less likely to be able to ravage an ecosystem.

Protecting minority humans: A species of negligible value to a dominant society may be of critical value to a marginal society.

Protection of sentient species: Some endangered species are capable of learning language. Some humans are not. I typically value worth on a combination of mental traits. Some animals are capable of holding jobs. Some humans are not. Many people value worth by productivity. Some animals are more valuable than some humans.

Natural history: DNA is subject to statistical analysis. This analysis can provide insight into previous environments and the adaptations needed to survive them. Humans may have a future use for a solution already encoded by another species.

Undiscovered potential: Most models would place a non-negligible value upon an unknown self-replicating organism that has been adapted to the modeler's environment after several million generations. At the very least, identification, classification, and understanding would be attempted before placing a value.

Value from scarcity: Economics. As supply decreases, cost increases.

Ethics: Treat others as you would have yourself be treated. Don't afflict others with the negative consequences of your actions. Protect the oppressed. Be a good neighbor. Share. Improve your environment for the next visitor. Because you may be judged by the rules you apply to others.

It is good to reconsider our memes, but for me biodiversity passes. I've tried to keep this brief in order to maintain clarity.

Comment author: kodos96 15 August 2010 04:19:35AM *  4 points [-]

Natural history: DNA is subject to statistical analysis. This analysis can provide insight into previous environments and the adaptations needed to survive them. Humans may have a future use for a solution already encoded by another species.

Biologists have DNA samples of every known species.

Undiscovered potential: Most models would place a non-negligible value upon an unknown self-replicating organism that has been adapted to the modeler's environment after several million generations. At the very least, identification, classification, and understanding would be attempted before placing a value.

Ok, but how much value would it place on an organism which wasn't adapted to the modeler's environment, as demonstrated by the fact that it was selected against and went extinct?

Protection from disease: When there are a variety of species, a single pathogen is less likely to be able to ravage an ecosystem.

OK, but what reason, other than status quo bias, is there to prefer one result over the other?

Protecting minority humans: A species of negligible value to a dominant society may be of critical value to a marginal society.

If so, then protecting that species is in the interests of the human population in question, and it then becomes a question of how best to serve their human interests. But that doesn't get you anywhere as far as biodiversity, in and of itself, having instrumental value.

Value from scarcity: Economics. As supply decreases, cost increases.

You probably mean price, not cost... but what does that have to do with anything? We're trying to establish that biodiversity has a utilitarian purpose... how does this address that? If something is useless, who cares how much supply of it there is, or how it's priced?

Ethics: Treat others as you would have yourself be treated. Don't afflict others with the negative consequences of your actions. Protect the oppressed. Be a good neighbor. Share. Improve your environment for the next visitor. Because you may be judged by the rules you apply to others.

This is just begging the question.

Protection of sentient species: Some endangered species are capable of learning language. Some humans are not. I typically value worth on a combination of mental traits. Some animals are capable of holding jobs. Some humans are not. Many people value worth by productivity. Some animals are more valuable than some humans.

I agree that non human sentient species deserve protection, both because their existence has utility (in understanding the phenomenon of intelligence), and because I consider the protection of sentient life to be a terminal value. But what does that have to do with "biodiversity"?

Comment author: wedrifid 15 August 2010 04:37:52AM 2 points [-]

We're trying to establish that biodiversity has a utilitarian purpose.

We were? Pardon me, my mistake. Please consider anything I wrote on the subject retracted. I'm a conscientious objector to utilitarianism.

Comment author: kodos96 15 August 2010 04:44:39AM 1 point [-]

If biodiversity is a terminal value of yours, then I can absolutely respect that, to exactly the same degree as anybody else's terminal values. But the commenter I was replying to clearly seemed to be arguing that biodiversity has instrumental value.

Comment author: wedrifid 15 August 2010 05:24:05AM *  2 points [-]

But the commenter I was replying to clearly seemed to be arguing that biodiversity has instrumental value.

I reference here only the difference between Utilitarianism and Consequentialism (with the former being often referenced but largely naive).

Come to think of it, if 'providing happiness or pleasure as summed among all sentient beings' is actually the measure of instrumental value, then you really only need a dozen species of plant and you've got all the 'happiness and pleasure' humans are likely to need.

Comment author: KrisC 15 August 2010 06:18:38AM *  2 points [-]

Thanks for the reply.

Biologists have DNA samples of every known species.

I do not believe that to be true. Even if it is, a single sample is insufficient for a meaningful statistical analysis.

Ok, but how much value would it place on an organism which wasn't adapted to the modeler's environment, as demonstrated by the fact that it was selected against and went extinct?

Non-negligible, depending on the criteria. It was my belief that human-caused environmental destruction was the issue at hand. The organism was adapted for humans' natural environment (most of Earth); then the environment changed.

OK, but what reason, other than status quo bias, is there to prefer one result over the other?

The current environment supports human life. The recent bee scare was a multi-continent threat to a species very important to our way of life.

You probably mean price, not cost...

Pardon me, I thought I had changed that.

If something is useless, who cares how much supply of it there is, or how it's priced?

People who value money. Vladimir_M wrote: "If a particular species is useful for some concrete purpose, then someone with deep enough pockets can easily be found who will invest in breeding it for profit. If not, who cares?"

[Ethical arguments are] just begging the question.

I took the creation of Friendly AI to be an ethical consideration which was accepted by all commenters. I think the relationships are parallel.

I agree that non human sentient species deserve protection.... But what does that have to do with "biodiversity"?

I had in mind elephants, primates, and cetaceans. Each of these groups faces existential risks. Maintaining biodiversity is protecting species from extinction. Sentient species are a specific subset.

I was trying to argue for the propagation of the biodiversity meme. I felt that Vladimir_M was contradicting that meme. I thought I was being clear that my argument was not meant to be purely utilitarian (in which case I would have used values, or at least comparisons), but instead to argue that biodiversity has value within a variety of systems.

Comment author: NancyLebovitz 26 August 2010 04:55:37AM 5 points [-]

Biologists have DNA samples of every known species.

I do not believe that to be true. Even if it is, a single sample is insufficient for a meaningful statistical analysis.

I believe it's false. Good-sized animals are still being discovered. The ecology of micro-organisms is still being explored.

Ok, but how much value would it place on an organism which wasn't adapted to the modeler's environment, as demonstrated by the fact that it was selected against and went extinct?

Non-negligible, depending on the criteria. It was my belief that human-caused environmental destruction was the issue at hand. The organism was adapted for humans' natural environment (most of Earth); then the environment changed.

A lot of what's worth finding out at our present level of knowledge isn't about whole organisms, it's about specific aspects-- consider the work being done with spider silk. Spider silk would probably still be valuable even if there weren't any living spiders.

Comment author: NancyLebovitz 27 August 2010 01:50:25PM 3 points [-]

On the small side, but a pea-sized frog was recently discovered.

Comment author: XFrequentist 26 August 2010 12:52:16AM *  1 point [-]

When there are a variety of species, a single pathogen is less likely to be able to ravage an ecosystem.

Interesting, I just had a chat about this hypothesis with a Lyme disease expert. Lyme is apparently held up as the best example for this argument, but field data and mathematical modeling indicate that it isn't true (I could probably dig up the relevant paper if you're interested, but I haven't read it).

I don't know for sure about other zoonotic diseases in wildlife, but I don't think this is certain enough to just be stated as fact.

Your other points seem worthy of consideration, but on the whole it seems the marginal benefit of a member of this crowd worrying about biodiversity, while not utterly negligible, is small.

Comment author: JanetK 14 August 2010 05:50:27AM 1 point [-]

I am not a person who believes in providence, or the market's invisible hand, or the balances that protect democracy, or Gaia. There are systems that are stable for long periods because of massive negative feedback. But that very feedback can turn positive in unusual situations, equilibria can disappear, and systems can collapse.

I do not know whether we are going to 'fall over a cliff' and I don't think others do either. We just don't know enough. It is certainly clear to me that we are in danger, just not how much. WE DO NOT KNOW.

The planet has had periods of mass extinction before and has recovered, but the recovered biosphere was very different from the lost one. Technically we are losing species at the sort of rate that appeared during previous mass extinctions. Humans may be the 'dominant life form' that loses out this time.

Comment author: kodos96 15 August 2010 03:43:31AM *  1 point [-]

Technically we are losing species at the sort of rate that appeared during previous mass extinctions.

I don't have a good cite handy, but I've read enough on the subject over the years to say confidently that, technically, no, this is just not the case.

Comment author: JanetK 15 August 2010 10:15:52AM 3 points [-]

Here are some links to numbers and graphs:

http://www.pbs.org/wgbh/evolution/library/03/2/l_032_04.html

http://www.whole-systems.org/extinctions.htmls

http://www.sourcewatch.org/index.php?title=The_Sixth_Great_Extinction

The rate is extremely high, and it will continue (and probably increase) if nothing is done.

Comment author: DilGreen 04 October 2010 10:26:19PM *  0 points [-]

I share the puzzlement of others here that, after a post in which bioterrorism, cryonics and molecular nanotechnology are listed as serious ideas needing serious consideration - by implication, to the degree that they might significantly impact the shape of one's 'web of beliefs' - the topics of climate change and mass extinction are given such short shrift, and in terms that, from my point of view, only barely pass muster in a community ostensibly dedicated to increasing rationality and overcoming bias.

I find little rationality and enormous bias in phrases like; "... why the extinction of various species that nobody cares about is such a bad thing".

The ecosystem of the planet is the most complex sub-system of the universe of which we are aware - containing, as it does, among many only partially explored sub-systems, a little over 6 billion human brains.

Given that one defining characteristic of complex systems is that they cannot be adequately modelled without the use of computational resources which exceed the resources of the system itself [colloquially understood as the 'law of unintended consequences'], it seems manifestly irrational to be dismissive of the possible consequences of massive intervention in a system upon which all humans rely utterly for existence.

Whether or not one chooses to give credence to the Gaia hypothesis, it is indisputable that the chemical composition of the atmosphere and oceans are conditioned by the totality of the ecosystem; and that the climate is in turn conditioned largely by these.

Applying probabilistic thinking to the likely impact of bio-terrorism on the one hand, and climate change on the other, we might consider that, um, five people have died as a result of bioterrorism (the work, as it appears, of a single maverick and thus not even firmly to be categorised as terrorism) since the second world war, while climate change has arguably killed tens of thousands already in floods, droughts, and the like, and certainly threatens human habitat as low-lying islands are inundated as sea-levels rise.

Upon these considerations it would appear bizarre to consider expending any energy whatsoever upon bioterrorism before climate change.

Comment author: orthonormal 13 August 2010 06:55:34PM *  3 points [-]

I doubt that I would be happy as an immortal.

If someone figured out today how to reverse and stave off aging, wouldn't you want to give it a try (and wait a while before deciding on mortality)? If so, this isn't a very good objection to cryonics.

Comment author: JanetK 13 August 2010 09:26:50PM 4 points [-]

If I had my own body and it were healthy with some time left, that would be fine. I suppose if we are imagining a surviving brain, then what is the problem with getting a rebuild to reverse age and whatever caused the death?

Comment author: Larks 24 August 2010 04:43:53AM 1 point [-]

Today, I wish to live one more day.

On any given day, I wish to live one more day.

Therefore, I wish to live forever, by induction on the positive integers.

-Eliezer, and Harry Potter.

Comment author: NancyLebovitz 13 August 2010 06:02:02PM 2 points [-]

Climate change could make a large difference to FAI issues, in addition to matters of biodiversity.

In particular, climate change could make the world a good bit poorer simply because the infrastructure is built around specific expectations about weather, climate, and atmospheric composition. This isn't just the buildings (though that's important and not easily changed), but the seed stocks and specific details of large scale agriculture.

Amateur FAI research is possible because there's a lot of money floating around that doesn't have more urgent uses and isn't under the control of people who mostly use money for conventional status signaling.

Comment author: kodos96 15 August 2010 03:55:27AM 1 point [-]

Climate change could make a large difference to FAI issues

Sudden, drastic climate change, sure. But I'm not aware of any reason to believe we should be expecting that.... certainly not on the kind of time scales that the LW consensus seems to predict for the singularity.

Comment author: kodos96 14 August 2010 08:54:26AM 0 points [-]

climate change is rarely or never mentioned on LW. The loss of biodiversity and the rate of extinction - ditto.

My hypothesis would be that this is due to these issues falling within the Correct Contrarian Cluster

Comment author: JanetK 14 August 2010 09:56:58AM 2 points [-]

I don't understand your comment. Do you mean that climate change and biodiversity are not discussed because everyone in LW thinks the same about them? because there is nothing to say? because there is nothing that can be done? because it is settled science? Please explain how issues falling within the correct contrarian cluster are not discussed at all and why you think that these issues fall within the cluster.

Comment author: kodos96 14 August 2010 07:16:44PM 3 points [-]

Well, I was just speculating - I don't actually have any idea what the LW community in general thinks of the issue. What I was attempting to speculate is that the reason these topics aren't discussed much is because the contrarian/skeptical position on them is clustered with the set of contrarian positions commonly held by LWers, and therefore aren't discussed much since the contrarian position on them is basically that they aren't deserving of much attention, especially relative to the kinds of existential risks LW is concerned with.

I'm not sure how much more detail I can go into on my thinking without violating the "no current politics" rule.

Comment author: JanetK 15 August 2010 12:35:24PM 5 points [-]

Something I should have said in my previous reply: I agree with the "no current politics" rule. My problem is with what counts as politics - to some, everything does, and to some, almost nothing does. When a subject is a purely scientific one and the disagreement is about whether there is evidence and how to interpret it, then this is an area for rationality. We should be looking at evidence and evaluating it. That does not involve what I would call politics.

Comment author: [deleted] 15 August 2010 02:12:11PM 3 points [-]

When I first got here I thought "existential risk" referred to a generalization of the ideas related to catastrophic climate change. That is, if we should plan for the low-probability but deadly event that climate change will be very severe, then we should also plan for other low-probability (or far-future) catastrophes: asteroid impacts, biological and nuclear weapons, and unfriendly AI, among others. I was surprised that, of the existential risks discussed, catastrophic climate change never seems to come up at all.

It's possible that this is an innocent result of specialization: people here spend most of their time thinking about AI, and not about other things that they aren't trained for.

If there were an organization committed to clarifying how we think about planning for low-probability risks, that organization really ought to consider climate change among other risks. It would be an interesting thing to study: how far in the future is it reasonable for present-day institutions to plan? How can scientists with predictions of possible catastrophe effectively communicate to governments, businesses, etc. that they need to plan, without starting a panic? The art of planning for existential risks in general is something that could really benefit from more study.

And it ought to include well-studied and well-publicized risks (like climate change) in addition to less-studied and less-publicized risks (like risks from technology not yet developed.) People have been planning for floods for a long time; surely people concerned about other risks can learn something from people who plan for the risk of floods.

But I don't think SIAI or LessWrong is equipped for that mission.

Comment author: Larks 24 August 2010 04:39:47AM 1 point [-]

I think you're looking for the Future of Humanity Institute and their work on Global Catastrophic Risks

Comment author: JanetK 15 August 2010 10:23:34AM 0 points [-]

It would be nice if people could use some rationality in deciding which ideas to be contrarian on. Maybe I live in an ivory tower but I don't see any connection between biological/environmental dangers and politics.

Comment author: Armok_GoB 17 August 2010 05:04:30PM 3 points [-]

"What are ideas you think Less Wrong hasn't taken seriously?"

The moral status of the models (of others to predict their behaviour, of fictional characters, etc.) made by human brains, especially if there's negative utility in their eventual deletion.

Comment author: [deleted] 23 August 2010 05:57:00PM 0 points [-]

Ideas that should be taken more seriously by Less Wrong:

  1. Human beings are universal knowledge creators: they can create any knowledge that any other knowledge creator can create.
  2. The only known tenable way of creating knowledge is by conjectures and refutations.
  3. Induction is a myth.
  4. Theories are either true or false: there is no such thing as the probability that a theory is true.
  5. Confirmation does not make a theory more likely or better supported - the only role of confirmation is to provide a ready stock of criticisms of rival theories.
  6. The most important knowledge is explanations.
  7. There is no route to certain knowledge: we are all fallible.
  8. We don't need certain knowledge to progress: tentative, fallible, knowledge is just fine.
Comment author: Perplexed 23 August 2010 10:24:16PM 6 points [-]

Gee, I wonder what philosopher of science you have been reading. :)

I would suggest that you read through the sequences with an open mind - particularly on your point #4. If you find it impossible to open your mind on that point, then open it to the possibility that the word "probability" can have two different meanings and that your point #4 only applies to one of them. If you find it impossible to open your mind to the possibility that a word might have an alternative meaning which you have not yet learned, then please go elsewhere.

Regarding Popper, it is not so much that he is wrong, as that he is obsolete. We think we have learned that set of lessons and moved on to the next set of problems.

If you have already begun reading the sequences, and were motivated to give us this dose of Popper because Eliezer's naive realism got on your nerves, well ... All I can say is that it got on my nerves too, but if you keep reading you will find that EY is not nearly as epistemologically naive as it might seem in the early sequence postings.

Comment author: [deleted] 23 August 2010 10:48:42PM 0 points [-]

No, Popper is not obsolete, and clearly the lessons of Popper have not been learnt by many: consider the people who have not yet understood that induction is a myth. Consider, also, the people who constantly misrepresent what Popper said, for example by claiming that his philosophy is falsificationism, that he was a positivist, or that he snuck induction in via the back door (you can find examples of these kinds of mistakes discussed here). Popper's ideas are in fact difficult for most people - they blow away the whole justificationist meta-context, a meta-context that permeates most people's thinking. Understanding Popper requires that you take him seriously. David Deutsch did that and expanded on Popper's ideas in a number of ways (you may be interested in a new book he has coming out called "The Beginning of Infinity"). He is another philosopher I follow closely. As is Elliot Temple (www.curi.us).

Comment author: Perplexed 23 August 2010 11:47:48PM 5 points [-]

Thanks for the links and references. I will look into them. I urge you once more to work your way through the sequences. It appears you have something to teach us, but I doubt that you will be very successful until you have learned the local jargon, and become sufficiently familiar with our favorite examples to use them against us.

However, I have to say that I was a bit disconcerted by this:

consider the people who have not yet understood that induction is a myth.

Now if you told me that the standard definition of induction misrepresents the evidence-collection process, or that you know how to dissolve the problem of induction, well then I would be all ears. But when you say that "induction is a myth" I hear that as saying that everyone who has thought seriously on the topic, from Hume to the present, ..., well, you seem to be saying that all those smart people were as deluded as the medieval philosophers who worried about angels dancing on the heads of pins.

See, the thing is, I would have to keep upvoting such arrogance and stupidity just so the comment to which I am responding doesn't disappear. And I don't want to do that.

Comment author: [deleted] 24 August 2010 05:43:26PM 3 points [-]

You do realize that Hume held that induction cannot be logically justified? He noticed there is a "problem of induction". That problem was exploded by Karl Popper. Have you read what he has to say and taken seriously his ideas? Have you read and taken seriously the ideas of philosophers like David Deutsch, David Miller, and Bill Bartley? They all agree with Popper that:

Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure - Karl Popper (Conjectures & Refutations, p 70).

Comment author: Perplexed 24 August 2010 07:15:22PM 9 points [-]

You do realize that Hume held that induction cannot be logically justified? He noticed there is a "problem of induction".

Of course. That is why I mentioned him.

That problem was exploded by Karl Popper. Have you read what he has to say and taken seriously his ideas?

"Exploded". My! What violent imagery. I usually prefer to see problems "dissolved". Less metaphorical debris. And yes, I've read quite a bit of Popper, and admire much of it.

Have you read and taken seriously the ideas of philosophers like David Deutsch, David Miller, and Bill Bartley?

Nope, I haven't.

They all agree with Popper that:

Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure - Karl Popper (Conjectures & Refutations, p 70).

You know, when giving page citations in printed texts, you should specify the edition. My 1965 Harper Torchbook paperback edition does not show Popper saying that on p 70. But, no matter.

One of the few things I dislike about Popper is that he doesn't seem to understand statistical inference. I mean, he is totally clueless on the subject. It is not just that he isn't a Bayesian - it seems he doesn't "get" Pearson and Fisher either. Well, no philosopher gets everything right. But if he really thinks that "inference based on many observations" cannot happen - not just that it is frequently done wrong, but rather that it is impossible - then all I can say is that this is not one of Sir Karl's better moments.

And if what he means is simply that we cannot infer absolute general truths from repeated observations, then I have to call him a liar for suggesting that anyone else ever suggested that we could make such inferences.

But, since you have been recommending philosophers to me, let me recommend some to you. I. J. Good is fun. Richard Jeffrey is not bad either. E.T. Jaynes explains quite clearly how one makes inferences based on observations - one observation or many observations. You really ought to look at Jaynes before coming to this forum to lecture on epistemology.

Comment author: [deleted] 24 August 2010 08:47:28PM *  2 points [-]

Perhaps you should know I have published papers where I have used Bayes extensively. I am well familiar with the topic (edit: though this doesn't make me any kind of infallible authority). I was once enthusiastic about Bayesian epistemology myself. I now see it as sterile. Popperian epistemology - especially as extended by David Deutsch - is where I see fertile ground.

Comment author: Perplexed 24 August 2010 09:15:37PM 9 points [-]

Cool. But more to the point, have you published, or simply written, any papers in which you explain why you now see it as sterile? Or would you care to recommend something by Deutsch which reveals the problems with Bayesianism. Something that actually takes notice of our ideology and tries to refute it will be received here much more favorably than mere diffuse enthusiasm for Popper.

Comment author: [deleted] 24 August 2010 08:09:18PM -2 points [-]

if he really thinks that "inference based on many observations" cannot happen - not just that it is frequently done wrong, but rather that it is impossible - then all I can say is that this is not one of Sir Karl's better moments.

The quote is from the 3rd edition, 1968. You say you have read Popper, so you should not be surprised by the quote. Your response above is just the argument from incredulity. Do you have a better criticism?

Comment author: Perplexed 24 August 2010 08:42:04PM 4 points [-]

I'm not surprised by the quote. I just couldn't find it. It apparently wasn't in 2nd edition. But my 2nd edition index had many entries for "induction, myth of _" so I don't doubt you at all that Popper actually said it.

I am incredulous because I know how to do inference based on a single observation, as well as inference based on many. And so does just about everyone who posts regularly at this site. It is called Bayesian inference, and is not really all that difficult. Even you could do it, if you were to simply set aside your prejudice that

Theories are either true or false: there is no such thing as the probability that a theory is true.

I have already provided references. You can find thousands more by Googling.
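For what it's worth, the single-observation inference described here is just an application of Bayes' theorem. A minimal sketch follows; the hypotheses and all the numbers are invented for illustration and are not drawn from the discussion above:

```python
# Bayesian inference from a single observation: update a prior over two
# hypotheses after seeing one piece of evidence. All numbers here are
# illustrative, not taken from the thread.

def bayes_update(prior, likelihood):
    """Return the posterior P(H|E) for each hypothesis H, given P(E|H)."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(joint.values())  # P(E), by the law of total probability
    return {h: joint[h] / evidence for h in joint}

# Hypotheses: a coin is fair vs. two-headed; we observe a single "heads".
prior = {"fair": 0.5, "two-headed": 0.5}
likelihood_heads = {"fair": 0.5, "two-headed": 1.0}

posterior = bayes_update(prior, likelihood_heads)
print(posterior)  # fair: 1/3, two-headed: 2/3 after one observation
```

One observation shifts the probabilities without requiring "many observations" or certainty; repeated observations are handled by feeding each posterior back in as the next prior.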

Comment author: [deleted] 24 August 2010 09:05:53PM 2 points [-]

OK, tell me how you know in advance of having any theory what to observe?

BTW, please don't assume things about me, like asserting that I hold prejudices. The philosophical position I come from is a full-blown one; it is no mere prejudice. Also, I'm quite willing to change my ideas if they are shown to be wrong.

Comment author: Perplexed 24 August 2010 09:32:38PM 4 points [-]

Ok, I won't assume that you believe, with Popper whom you quote, that inference based on many observations is impossible. I will instead assume that Popper is using the word "inference" very differently than it is used around here. And since you claim to be an ex-Bayesian, I will assume you know how the word is used here. Which makes your behavior up until now pretty inexplicable, but I will make no assumptions about the reasons for that.

Likewise, please do not assume that I believe that observation is neither theory-laden nor theory-directed. As it happens, I do not know in advance of a theory what to observe.

Of course, the natural thing for me to do now would be to challenge you to explain where theories come from in advance of observation. But why don't we both just grow up?

If you have a cite for a careful piece of reasoning which will cause us to drop our Bayesian complacency and re-embrace Popper, please provide it and let us read the text in peace.

Comment author: khafra 24 August 2010 06:53:21PM 1 point [-]

A better phrasing for that might have been "certain knowledge is a myth." What cannot be logically justified is reasoning from particular observations to certainty in universal truths. You're commenting as if you are unaware of the positions and arguments linked from my previous reply, and perhaps of Where Recursive Justification Hits Bottom. You have intelligent things to say, but you're not going to be taken seriously here if you're not aware of the pre-existing intelligent responses to them, which are popular enough to amount to public knowledge.

Comment author: [deleted] 24 August 2010 07:40:22PM 0 points [-]

A better phrasing for that might have been "certain knowledge is a myth." What cannot be logically justified is reasoning from particular observations to certainty in universal truths.

No, that is not equivalent. Popper wrote that "inference based on many observations is a myth". He is saying that we never reason from observations, never mind reasoning to certainty. In order to observe, you need theories. Without those, you cannot know what things you should observe or even make sense of any observation. Observation enables us to test theories; it never enables us to construct theories. Furthermore, Popper throws out the whole idea of justifying theories. We don't need justification at all to progress. Judging from Where Recursive Justification Hits Bottom, this is something Eliezer has not fully taken on board (though I may be wrong). He sees the problem of the tu quoque, but he still says "[e]verything, without exception, needs justification". No, nothing can be justified. Knowledge advances not positively by justifying things but negatively by refuting things. Eliezer does see the importance of criticism, but my impression is that he doesn't know Popper well enough.

Comment author: timtyler 24 August 2010 08:48:32PM *  2 points [-]

For Yudkowsky on Popper, start here:

"Previously, the most popular philosophy of science was probably Karl Popper's falsificationism - this is the old philosophy that the Bayesian revolution is currently dethroning."

...and keep reading - at least as far as:

"On the other hand, Popper's idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes' Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued."

Comment author: [deleted] 24 August 2010 09:54:20PM 3 points [-]

Yudkowsky gets a lot wrong even in a few sentences:

Previously, the most popular philosophy of science was probably Karl Popper's falsificationism - this is the old philosophy that the Bayesian revolution is currently dethroning.

First, Popper's philosophy cannot be accurately described as falsificationism - that is just a component of it, and not the most important component. Popperian philosophy consists of many interrelated ideas and arguments. Yudkowsky makes an error that Popperian newbies make. One suspects from this that Yudkowsky is making himself out to be more familiar with Popper than he actually is. His claim to be dethroning Popper would then be dishonest as he does not have detailed knowledge of the rival position. Also he is wrong that Popper is popular: he isn't. Furthermore, Popper was familiar with Bayesian epistemology and actually discussed it in his books. So calling Popper's philosophy old and making out that Bayesian epistemology is new is wrong also.

Karl Popper's idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules;

Popper never said theories can be definitely falsified. He was a thoroughgoing fallibilist and viewed falsifications as fallible conjectures. Also he said that theories can never be confirmed at all, not that they can be partially or probabilistically confirmed, which the above sentence suggests he said. Saying falsification is a special case of the Bayesian rules also doesn't make sense: falsification is anti-induction whereas Bayesian epistemology is pro-induction.

Comment author: [deleted] 25 August 2010 04:41:27PM -1 points [-]

Further comments on Yudkowsky's explanation of Bayes:

science itself is a special case of Bayes' Theorem; experimental evidence is Bayesian evidence.

Science revolves around explanation and criticism. Most scientific ideas never get to the point of testing (which is a form of criticism), they are rejected via criticism alone. And they are rejected because they are bad explanations. Why is the emphasis in the quote solely on evidence? If science is a special case of Bayes, shouldn't Bayes have something to say about explanation and criticism? Do you assign probabilities to criticism? That seems silly. Explanations and criticism enable us to understand things and to see why they might be true or false. Trying to reduce things to probabilities is to completely ignore the substance of explanations and criticisms. Instead of trying to get a probability that something is true, you should look for criticisms. You accept as tentatively true anything that is currently unproblematic and reject as tentatively false anything that is currently problematic. It's a boolean decision: problematic or unproblematic.

Comment author: whpearson 25 August 2010 05:00:04PM 5 points [-]

Both bayesian induction (as we currently know it) and Popper fail my test for a complete epistemology.

The test is simple. Can I use the description of the formalism to program a real computer to do science? And it should, in theory, be able to bootstrap itself from no knowledge of science to our level.

Comment author: Perplexed 25 August 2010 05:33:57PM 4 points [-]

I think that the contribution that Bayesian methodology makes toward good criticism of a scientific hypothesis is that to "do the math", you need to be able to compute P(E|H). If H is a bad explanation, you will notice this when you try to determine (before you see E) how you would go about computing P(E|H). Alternately, you discover it when you try to imagine some E such that P(E|H) is different from P(E|not H).

No, you don't assign probabilities to criticisms, as such. But I do think that every atomic criticism of a hypothesis H contains at its heart a conditional proposition of the form (E|H) or else a likelihood odds ratio P(E|H)/P(E|not H) together with a challenge, "So how would you go about calculating that?"

Incidentally, you also ought to look at some of the earlier postings where EY was, in effect, using naive Bayes classifiers to classify (i.e. create ontologies), rather than using Bayes's theorem to evaluate hypotheses that predict. Also take a look at Pearl's book to get a modern Bayesian view of what explanation is all about.
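The odds-ratio machinery Perplexed describes can be sketched in a few lines of Python. This is an editor's illustration, not code from the thread; the prior and the likelihood ratio are made-up numbers chosen only to show the update.

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio,
# where the likelihood ratio is P(E|H) / P(E|not H).
# All numbers below are illustrative assumptions.

def posterior_odds(prior_odds, likelihood_ratio):
    """Update odds on H after seeing evidence E."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds O to a probability O / (1 + O)."""
    return odds / (1 + odds)

prior_odds = 1.0        # P(H) = 0.5, i.e. even odds (assumed)
likelihood_ratio = 9.0  # E is 9x more likely under H than under not-H (assumed)

post = posterior_odds(prior_odds, likelihood_ratio)
print(odds_to_probability(post))  # 0.9
```

The "So how would you go about calculating that?" challenge amounts to asking the hypothesis's defender to supply the likelihood ratio fed into this update.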

Comment author: timtyler 25 August 2010 05:13:37PM *  3 points [-]

Instead of trying to get a probability that something is true, you should look for criticisms.

If you were asked to bet on whether it was true or not, then you should assign a probability.

Scientists often do something like that when deciding how to allocate their research funds.

Comment author: [deleted] 25 August 2010 05:01:48PM 1 point [-]

I like this point a lot. But it seems very convenient and sensible to say that some things are more problematic than others. And at least for certain kinds of claims it's possible to quantify how problematic they are with numbers. This leads one (me at least) to want a formalism -- for handling beliefs -- that involves numbers, and Bayesianism is a good one.

What's the conjectures-and-refutations way of handling claims like "it's going to snow in February"? Do you think it's meaningless or useless to attach a probability to that claim?

Comment author: timtyler 25 August 2010 05:08:50PM 0 points [-]

More from Yudkowsky on the philosophy of science:

http://lesswrong.com/lw/ig/i_defy_the_data/

Comment author: timtyler 25 August 2010 05:00:27PM *  0 points [-]

The chance of a criticism being correct can unproblematically be assigned a probability.

Comment author: timtyler 24 August 2010 09:01:55PM 0 points [-]

Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure - Karl Popper (Conjectures & Refutations, p 70).

Popper obviously hadn't read Wikipedia:

http://en.wikipedia.org/wiki/Inductive_reasoning

Comment author: Larks 23 August 2010 11:15:45PM *  2 points [-]

Human beings are universal knowledge creators: they can create any knowledge that any other knowledge creator can create.

In what sense do you mean this exactly, and what evidence for it do you have? I've spoken to people like Elliot, but all they said was things like 'humans can function as a Turing Machine by laboriously manipulating symbols'. Which is nice, but not really relevant to anything in real-time.

On a more general note, you should probably try to be a little clearer: 'conjectures and refutations' doesn't really pick out any particular strategy from strategy-space, and neither does the phrase 'explanation' pick out anything in particular. Additionally, 'induction' is sufficiently different from what people normally think of as myths that it could do with some elaboration.

Similarly, some of these issues we do take seriously; we know we're fallible, and it sounds like you don't know what we mean by probability.

Finally, welcome to Less Wrong!

Edit: People, don't downvote the parent; there's no reason to scare the newbies.

Comment author: wedrifid 24 August 2010 12:51:26AM *  2 points [-]

Which is nice, but not really relevant to anything in real-time.

Where 'real-time' can be taken literally to refer to time that is expected to exist in physics models of the universe.

Comment author: [deleted] 24 August 2010 05:18:22PM -2 points [-]

Human beings are universal knowledge creators: they can create any knowledge that any other knowledge creator can create.

In what sense do you mean this exactly,

Another way of saying it is that human beings can solve any problem that can be solved. Does that help?

and what evidence for it do you have?

Careful here - as I mentioned above, evidence never supports a theory, it just provides a ready stock of criticisms of rival theories. Let me give you an argument: If you hold that human beings are not universal knowledge creators, then you are saying that human knowledge creation processes are limited in some way, that there is some knowledge we cannot create. You are saying that humans can create a whole bunch of knowledge but whole realms of other knowledge are off limits to us. How does that work? Knowledge enables us to expand our abilities and that in turn enables us to create new knowledge and so on. Whatever this knowledge we can't create is, it would have to be walled off from all this other expanding knowledge in a rather special way. How do you build a knowledge creation machine that only has the capability to create some knowledge? That would seem much much more difficult than creating a fully universal machine.

I've spoken to people like Elliot, but all they said was things like 'humans can function as a Turing Machine by laboriously manipulating symbols'. Which is nice, but not really relevant to anything in real-time.

I don't know what point Elliot was answering here, but I guess he is saying that humans are universal Turing Machines and illustrating that. He is saying that humans are universal in the sense that they can compute anything that can be computed. That is a different notion of universality to the one under discussion here (though there is a connection between the two types of universality). Elliot agrees that humans are universal knowledge creators and has written a lot about it (see, for example, his posts on The Fabric of Reality list).

On a more general note, you should probably try to be a little clearer: 'conjectures and refutations' doesn't really pick out any particular strategy from strategy-space,

'Conjectures and refutations' is an evolutionary process. The general methodology (or strategy, if you prefer) is: When faced with a problem try to come up with conjectural explanations to solve the problem and then criticise them until you find one (and only one) that cannot be knocked down by any known criticism. Take that as your tentative solution. I guess what you are looking for is an explanation of how human conjecture engines work? That is an unsolved problem. We do know some things, eg: no induction is involved.

and neither does the phrase 'explanation' pick out anything in particular.

Explanations are valuable: they help you understand something. Are you looking for an explanation of how we generate "explanations"? Again, unsolved problem.

Additionally, 'induction' is sufficiently different from what people normally think of as myths that it could do with some elaboration.

It's not really different. It's something that people believe is true that in fact isn't. Hume was the first to realize that there was a "problem of induction" and philosophers have for years and years been trying to justify induction. It took Karl Popper to realize that induction isn't actually how we create knowledge at all: induction is a myth.

Similarly, some of these issues we do take seriously; we know we're fallible,

Yes, you are called "Less Wrong" after all! I was off-beam with that.

and it sounds like you don't know what we mean by probability.

Actually, I am quite familiar with the Bayesian conception of probability. I just don't think probability has a role in the realm of epistemology. Evidence does not make a theory more probable, not even from a subjective point of view. What evidence does, as I have said, is provide a stock of criticisms against rival theories. Also, evidence only goes so far: what really matters is how theories stand up to criticism as explanations. Evidence plays a role in that. I am quite happy to talk about the probability of events in the world, but events are different from explanatory theories. Apples and oranges.

Comment author: timtyler 28 August 2010 09:26:06PM *  1 point [-]

Actually, I am quite familiar with the Bayesian conception of probability. I just don't think probability has a role in the realm of epistemology. Evidence does not make a theory more probable, not even from a subjective point of view.

Of course evidence makes theories more probable:

Imagine you have two large opaque bags full of beans, one 50% black beans and 50% white beans and the other full of white beans. The bags are well shaken, you are given one bag at random. You take out 20 beans - and they are all white.

That is clearly evidence that confirms the hypothesis that you have the bag full of white beans. If you had the "mixed" bag, that would only happen one time in a million.
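timtyler's bean-bag example can be worked through explicitly with Bayes' rule. This is an editor's sketch, not part of the original comment; it models the 20 draws as independent (the "large, well-shaken bag" assumption) and uses the stated 50/50 prior over which bag you were handed.

```python
# Two equally likely bags: one all white, one 50% white / 50% black.
# After drawing 20 white beans, update P(all-white bag) via Bayes' rule.

p_white_bag = 0.5                   # prior: bag chosen at random
p_evidence_given_white = 1.0        # all-white bag always yields white beans
p_evidence_given_mixed = 0.5 ** 20  # the "one time in a million" (1 / 1,048,576)

posterior = (p_white_bag * p_evidence_given_white) / (
    p_white_bag * p_evidence_given_white
    + (1 - p_white_bag) * p_evidence_given_mixed
)
print(posterior)  # ~0.999999: the all-white-bag hypothesis is strongly confirmed
```

Note that the mixed-bag hypothesis is not refuted outright by the draw; it is merely driven to a tiny posterior, which is the Bayesian sense of "confirmation" at issue in this exchange.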

Comment author: [deleted] 28 August 2010 10:16:22PM 1 point [-]

Notice that the counterfactual event is possible (that you have the mixed bag). And even if you hold the bag full of white beans, the counterfactual event that you hold the mixed beans does occur elsewhere in the multiverse. This is what distinguishes events from theories. A false theory never obtains anywhere: it is simply false. So a theory being true or false is not at all like the situation with counterfactual events. You cannot assign anything objective to a false theory.

The actual theory you hold in your example is approximately the following: I have made a random selection from a bag and I know that I have been given one of two bags: one 50% black beans and 50% white beans and the other full of white beans and: I have been honestly informed about the setup, am not being tricked, no mistakes have been made etc. This theory predicts that if I take 20 white beans out of the bag, then the chance of that would be one in a million if I had the mixed bag. Do you understand? The real situation is that you have a theory that is making probabilistic predictions about events and, as I have said several times, I have no problem with probabilistic predictions of theories about events.

Comment author: timtyler 29 August 2010 06:49:30AM *  3 points [-]

As briefly as possible:

Firstly, this seems like a step forwards to me. You seem to agree that induction and confirmation are fine 90% of the time. You seem to agree that these ideas work in practice - and are useful - including in some realms of knowledge - such as knowledge relating to which bag is in front of you in the above example. This puts your anti-induction and anti-confirmation statements into a rather different light, IMO.

Confirmation theory has nothing to do with multiverses. It applies equally well for agents in single deterministic universes - such as can be modelled by cellular automata. So, reasoning that depends on the details of multiverse theories is broken from the outset. Imagine evidence for wavefunction collapse was found. Not terribly likely - but it could happen - and you don't want your whole theory of epistemology to break if it does!

Treating uncertainty about theories and uncertainty about events differently is a philosophical mistake. There is absolutely no reason to do it - and it gets people into all kinds of muddles.

We have a beautiful theory of subjective uncertainty that applies equally well to uncertainty about any belief - whether it refers to events, or scientific theories. You can't really tease these categories apart anyway - since many events are contingent upon the truth of scientific theories - e.g. Higgs boson observations. Events are how physical law is known to us.

Instead of using one theory for hypotheses about events and another for hypotheses about universal laws you should - according to Occam's razor - be treating them in the same way - and be using the same underlying general theory that covers all uncertain knowledge - namely the laws of subjective probability.

"Bayesian Confirmation Theory"

http://plato.stanford.edu/entries/epistemology-bayesian/#BayTheBayConThe

Comment author: [deleted] 29 August 2010 10:34:03AM *  1 point [-]

Tim - In the example we have been discussing, no confirmation of the actual theory (the one I gave in approximate outline) happens. The actual theory makes probabilistic predictions about events (it also makes non-probabilistic predictions) and tells you how to bet. Getting 20 white beans doesn't make the actual theory any more probable - the probability was a prediction of the theory. Note also that a theory that you are being tricked might recommend that you choose the mixed bag when you get 20 white beans. Lots of theories are consistent with the evidence. What you need to look for is things to refute the possible theories. If you are concerned with confirmation, then the con man wins.

So I am not agreeing that induction and confirmation are fine any percentage of the time (how did you get that 90% figure?). When you consider the actual possible theories of the example, all that is happening is that you have explanatory theories that make predictions, some probabilistic, and that tell you how to bet. The theories are not being induced from evidence and no confirmation takes place.

You haven't explained how we assign objective probabilities to theories that are false in all worlds.

Comment author: khafra 29 August 2010 05:39:14PM 3 points [-]

What you need to look for is things to refute the possible theories. If you are concerned with confirmation, then the con man wins.

What you're talking about here is a strategy for avoiding bias which Bayesians also use. It is not a fundamental feature of any particular epistemology.

Comment author: Pavitra 29 August 2010 05:13:10PM 5 points [-]

You haven't explained how we assign objective probabilities to theories that are false in all worlds.

We don't assign objective probabilities, full stop.

Comment author: timtyler 29 August 2010 12:08:56PM *  1 point [-]

I think you are too lost for me :-(

You don't seem to address the idea that multiverse theories are an irrelevance - and that in a single deterministic automaton, things work just the same way.

Indeed, scientists don't even know which (if any) laws of physics are true everywhere, and which depend on the world you are in.

You don't seem to address the idea that we have a nice general theory that covers all kinds of uncertainty, and that no extra theory to deal with uncertainty about scientific hypotheses is needed.

If you don't class hypotheses about events as being "theories", then I think you need to look at:

http://en.wikipedia.org/wiki/Scientific_theory

Also, your challenge doesn't seem to make much sense. The things people assign probabilities to are things they are uncertain about. If you tell me a theory is wrong, it gets assigned a low probability. The interesting cases are ones where we don't yet know the answer - like the clay theory of the origin of life, the orbital inclination theory of glacial cycles - and so on.

Distinguishing between scientific theories and events in the way that you do apparently makes little sense. Events depend on scientific theories. Scientific theories predict events. Every test of a scientific theory is an event. Observing the perihelion precession of Mercury was an event. The observation of the deflection of light by the Sun during an eclipse was an event. If you have probabilities about events which are tests of scientific theories, then you automatically wind up with probabilities about the theories that depend on their outcome.

Basically agents have probabilities about all their beliefs. That is Bayes 101. If an agent claims not to have a probability about some belief, you can usually set up a bet which reveals what they actually think about the subject. Matters of fundamental physics are not different from "what type of beans are in a bag" - in that respect.

Comment author: [deleted] 29 August 2010 01:45:28PM *  2 points [-]

Scientific theories predict events. Every test of a scientific theory is an event. Observing the perihelion precession of Mercury was an event. The observation of the deflection of light by the Sun during an eclipse was an event.

Yes, scientific theories predict events. So there is a distinction between events and theories right? If the event is observed to occur, all that happens is that rival theories that do not predict the event are refuted. The theory that predicted the event is not made truer (it already is either true or false). And there are always an infinite number of other theories that predict the same event. So observing the event doesn't allow you to distinguish among those theories.

In the bean bag example you seem to think that the rival theories are "the bag I am holding is mixed" and "the bag I am holding is all white". But what you actually have is a single theory that makes predictions about these two possible events. That theory says you have a one-in-a-million chance of holding the mixed bag.

Matters of fundamental physics are not different from "what type of beans are in a bag"

No, General Relativity being true or false is not like holding a bag of white beans or holding a bag of mixed beans. The latter are events that can and do obtain: They happen. But GR is not true in some universes and false in others. It is either true or false. Everywhere. Furthermore, we accept GR not because it is judged most likely but because it is the best explanation we have.

Popperians claim that we don't need any theory of uncertainty to explain how knowledge grows: uncertainty is irrelevant. That is an interesting claim don't you think? And if you care about the future of humanity, it is a claim that you should take seriously and try to understand.

If you are still confused about my position, why don't you try posting some questions on one of the following lists:

http://groups.yahoo.com/group/Fabric-of-Reality/

http://groups.yahoo.com/group/criticalrationalism/

It might be useful for other Popperians to explain the position - perhaps I am being unclear in some way.

Edit: Just because people might be willing to place bets is no argument that the epistemological point I am making is wrong. What makes those people infallible authorities on epistemology? Also, if I accept a bet from someone that a universal theory is true, would I ever have to pay out?

Comment author: Larks 25 August 2010 12:57:04AM 1 point [-]

Another way of saying it is that human beings can solve any problem that can be solved. Does that help?

What about the problem of building pyramids on Alpha Centauri by 2012? We can't, but aliens living there could.

More pressingly though, I don't see why this is important. Have we been basing our arguments on an assumption that there are problems we can't solve? Is there any evidence we can solve all problems without access to arbitrarily large amounts of computational power? Something like AIXI can solve pretty much anything, but not relevantly.

That would seem much much more difficult than creating a fully universal machine.

How about a neural network that can't learn XOR?
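Larks's XOR example can be made concrete. The sketch below (an editor's illustration, not from the thread) brute-forces a grid of weights for a single linear threshold unit and finds none that reproduces XOR — a simple learner that provably cannot represent some knowledge, which is the point against "universality is easy".

```python
# A single-layer perceptron computes sign(w1*x1 + w2*x2 + b).
# XOR is not linearly separable, so no choice of weights can fit it;
# we demonstrate this over a coarse grid of candidate weights.
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def fits_xor(w1, w2, b):
    """True if this threshold unit classifies all four XOR cases correctly."""
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in XOR.items())

weights = [w / 4 for w in range(-8, 9)]  # grid from -2.0 to 2.0 in steps of 0.25
found = any(fits_xor(w1, w2, b)
            for w1, w2, b in itertools.product(weights, repeat=3))
print(found)  # False: no single linear threshold unit represents XOR
```

(The grid search only illustrates the result; the general proof is that the four XOR constraints on w1, w2, b are jointly contradictory.)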

When faced with a problem try to come up with conjectural explanations to solve the problem and then criticise them until you find one (and only one) that cannot be knocked down by any known criticism.

The manner in which explanations are knocked down seems under-specified, if you're not doing Bayesian updating.

Are you looking for an explanation of how we generate "explanations"? Again, unsolved problem.

Nope, I just don't know what in particular you mean by 'explanation'. I know what the word means in general, but not your specific conception.

I just don't think probability has a role in the realm of epistemology.

Well, that's different from there being no such thing as a probability that a theory is true: your initial assertion implied that the concept wasn't well defined, whereas now you just mean it's irrelevant. Either way, you should probably produce some actual arguments against Jaynes's conception of probability.

Meta: You want to reply directly to a post, not its descendants, or the other person won't get a notification. I only saw your post via the Recent Posts list.

Also, it's no good telling people that they can't use evidence to support their position because it contradicts your theory when the other people haven't been convinced of your theory.

Comment author: [deleted] 28 August 2010 07:05:15PM *  1 point [-]

The manner in which explanations are knocked down seems under-specified, if you're not doing Bayesian updating.

Criticism enables us to see flaws in explanations. What is under-specified about finding a flaw?

In your way, you need to come up with criticisms and also with probabilities associated with those criticisms. Criticisms of real world theories can be involved and complex. Isn't it enough to expose a flaw in an explanatory theory? Must one also go to the trouble of calculating probabilities - a task that is surely fraught with difficulty for any realistic idea of criticism? You're adding a huge amount of auxiliary theory and your evaluation is then also dependent on the truth of all this auxiliary theory.

I just don't know what in particular you mean by 'explanation'. I know what the word means in general, but not your specific conception.

My conception is the same as the general one.

Comment author: Larks 11 September 2010 07:27:39PM 1 point [-]

You don't seem to be actually saying very much then; is LW really short on explanations, in the conventional sense? Explanation seems well evidenced by the last couple of top-level posts. Similarly, do we really fail to criticise one another? A large number of the comments seem to be criticisms. If you're essentially criticising us for not having learnt rationality 101 - the sort of rationality you learn as a child of 12, arguing against god - then obviously it would be a problem if we didn't bear that stuff in mind. But without providing evidence that we succumb to these faults, it's hard to see what the problem is.

Your other points, however, are substantive. If humans could solve any problem, or it was impossible to design an agent which could learn some but not all things, or confirmation didn't increase subjective plausibility, these would be important claims.

Comment author: [deleted] 17 November 2010 12:40:38PM *  0 points [-]

Elliot has informed me that he doesn't think he said: "humans can function as a Turing Machine by laboriously manipulating symbols", except possibly in reply to a very specific question like "Give a short proof that humans have computational universality".

Why do you say "people like Elliot"? Elliot has his own views on things and shouldn't be conflated with people who you think are like him. It seems to me you don't understand his ideas so wouldn't know what the people who are like him are like.

Comment author: thomblake 23 August 2010 06:08:21PM *  2 points [-]

Human beings are universal knowledge creators: they can create any knowledge that any other knowledge creator can create.

For interesting definitions of 'can', perhaps. I know some humans who can't create much of anything.

The only known tenable way of creating knowledge is by conjectures and refutations.

I'm not sure that counts as a 'way of creating knowledge'. 'Conjectures' sounds to me like a black box which would itself contain the relevant bit.

Induction is a myth.

I'd want to know what you mean by 'myth'. It's worked so far, though that only counts as evidence for those of us blinded by the veil of Maya.

Theories are either true or false: there is no such thing as the probability that a theory is true.

Confirmation does not make a theory more likely or better supported - the only role of confirmation is to provide a ready stock of criticisms of rival theories.

Probability is in the mind. Theories are either true or false, and there is such a thing as the probability that a theory is true.

The most important knowledge is explanations.

I'm not sure what you mean by that.

There is no route to certain knowledge: we are all fallible.

This shows the remarks about 'probability' above to be merely a definitional dispute. Probability describes uncertainty, and you admit that we have uncertain knowledge.

We don't need certain knowledge to progress: tentative, fallible, knowledge is just fine.

True that!

Welcome to Less Wrong

ETA: Reminder that we have a rough community norm against downvoting first posts when they seem to be in good faith.

Comment author: [deleted] 23 August 2010 09:39:24PM 0 points [-]

Human beings are universal knowledge creators: they can create any knowledge that any other knowledge creator can create.

For interesting definitions of 'can', perhaps. I know some humans who can't create much of anything.

All human beings create knowledge - masses of it. Certain ideas can and do impair a person's creativity, but it is always possible to learn and to change one's ideas.

The only known tenable way of creating knowledge is by conjectures and refutations.

I'm not sure that counts as a 'way of creating knowledge'. 'Conjectures' sounds to me like a black box which would itself contain the relevant bit.

It's not just conjectures, it's "conjectures and refutations". Knowledge is created by advancing conjectural explanations to solve a problem and then criticizing those conjectures in an attempt to refute them. The goal is to find a conjecture that can withstand all criticisms we can think of and to refute all rival conjectures.

Induction is a myth.

I'd want to know what you mean by 'myth'. It's worked so far, though that only counts as evidence for those of us blinded by the veil of Maya.

No, it never worked. Not a bit. That's what I mean by myth.

Theories are either true or false: there is no such thing as the probability that a theory is true.

Confirmation does not make a theory more likely or better supported - the only role of confirmation is to provide a ready stock of criticisms of rival theories.

Probability is in the mind. Theories are either true or false, and there is such a thing as the probability that a theory is true.

Theories are objective. Whether you think a theory is true or false has no bearing on whether it is in fact true or false. Moreover, how do you assign a probability to a complex real-world theory like, say, multiversal quantum mechanics? What counts is whether the theory has stood up to criticism as an explanation to a problem or set of problems. If it has, who cares about how probable you think it is? It's not the probability that you should care about, it's the explanation.

The most important knowledge is explanations.

I'm not sure what you mean by that.

Above all else, we should try to find explanations for things; explanations are the most important kind of knowledge.

There is no route to certain knowledge: we are all fallible.

This shows the remarks about 'probability' above to be merely a definitional dispute. Probability describes uncertainty, and you admit that we have uncertain knowledge.

Knowledge is always uncertain, yes, but it is impossible to objectively quantify the uncertainty. Put another way, you cannot know what you do not yet know. Theories can be wrong in all sorts of ways but you have no way of knowing in advance how or if a theory will go wrong. It's not a definitional dispute.

We don't need certain knowledge to progress: tentative, fallible, knowledge is just fine.

True that!

OK, we agree on that!

Comment author: khafra 23 August 2010 10:04:55PM 2 points [-]

Probability is subjectively objective. All conjectures/models are wrong, but some are useful to the extent that they successfully constrain expected experience.

Comment author: [deleted] 23 September 2010 01:27:38AM *  1 point [-]
  • Tegmark's multiverses and related cosmology and the manyfold implications thereof (and the related simulation argument).

In what areas are these implications? In particular, what are the implications for existential risk reduction?

I recently read "The Mathematical Universe" and this post but so far I haven't had any earth-shattering insights. Should I re-read the posts on UDT?

Comment author: Will_Newsome 28 September 2010 07:20:44AM 2 points [-]

In particular, what are the implications for existential risk reduction?

We could be getting most of our measure from all sorts of places, which means that maybe a very small proportion of our measure is actually at stake. If all computations exist, some of those computations have preferences over other computations that include us. It might be good to understand such preferences. That in itself has many implications. But I guess I'd say that it's easy to go funny in the head when thinking about things like that, so be careful.

Comment author: ChristianKl 15 August 2010 01:01:21PM 0 points [-]

Reincarnation. It's a central feature of randomness that events repeat if you simply have enough time.

If we live in a purely random multiverse which big bangs due to quantum fluctuations every 10^10^X years, given enough time we will be reborn after we die. Sure, most of the time you won't remember, but if you wait long enough you will get reincarnated atom-by-atom.

Comment author: Nick_Tarleton 15 August 2010 08:42:02PM 4 points [-]

What does taking this seriously imply?

Comment author: CronoDAS 15 August 2010 09:33:20PM 5 points [-]

Probably nothing.