I, the author, no longer endorse this post.


 

Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the simple metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.

 

Eliezer Yudkowsky and Michael Vassar are two rationalists who have something of an aura of formidability about them. This is especially true of Michael Vassar in live conversation, where he's allowed to jump around from concept to concept without being penalized for not having a strong thesis. Eliezer did something similar in his writing by creating a foundation of reason upon which he could build new concepts without having to start explaining everything anew every time. Michael and Eliezer know a lot of stuff, and are able to make connections between the things that they know: seeing which nodes of knowledge are relevant to their beliefs or decisions, or, if that fails, knowing which algorithm they should use to figure out which nodes of knowledge are likely to be relevant. They have all the standard Less Wrong rationality tools too, of course, and a fair number of heuristics and dispositions that haven't been covered on Less Wrong. But I believe it is this aspect of their rationality, the coherent and cohesive and carefully balanced web of knowledge and belief nodes, that causes people to perceive them as formidable rationalists, of a kind not to be disagreed with lightly.

The common trait of Michael and Eliezer and all top-tier rationalists is their drive to really consider the implications and relationships of their beliefs. It's something like a failure to compartmentalize; it's what has led them to develop their unified webs of knowledge, instead of one web of beliefs about politics that is completely separate from their webs of belief about religion, or science, or geography. Compartmentalization is the natural and automatic process by which belief nodes or groups of belief nodes become isolated from their overarching web of beliefs, or many independent webs are created, or the threads between nodes are not carefully and precisely maintained. It is the ground state of your average scientist. When Eliezer first read about the idea of a Singularity, he didn't do exactly what I, and probably almost anybody else in the world, would have done at that moment: he didn't think "Wow, that's pretty neat!" and then go on to study string theory. He immediately saw that this was an idea that needed to be taken seriously, a belief node of great importance that necessarily affects every other belief in the web. It's something that I don't have naturally (not that it's either binary or genetic), but it's a skill that I'm reasonably sure can be picked up and used immediately, as long as you have a decent grasp of the fundamentals of rationality (as can be found in the Sequences).

Taking an idea seriously means:

  • Looking at how a new idea fits in with your model of reality and checking for contradictions or tensions that may indicate the need to update a belief, and then propagating that belief update through the entire web of beliefs in which it is embedded. When a belief or a set of beliefs changes, that can in turn have huge effects on your overarching web of interconnected beliefs; a toy sketch of such propagation appears just after this list. (The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience. I can only imagine that it must feel both terrifying and exhilarating.) Failing to propagate that change leads to trouble. Compartmentalization is dangerous.
  • Noticing when an idea seems to be describing a part of the territory where you have no map. Drawing a rough sketch of the newfound territory and then seeing in what ways that changes how you understand the parts of the territory you've already mapped.
  • Not just examining an idea's surface features and then accepting or dismissing it. Instead looking for deep causes. Not internally playing a game of reference class tennis.
  • Explicitly reasoning through why you think the idea might be correct or incorrect, what implications it might have both ways, and leaving a line of retreat in both directions. Having something to protect should fuel your curiosity and prevent motivated stopping.
  • Noticing confusion.
  • Recognizing when a true or false belief about an idea might lead to drastic changes in expected utility.
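
To make the "web of belief nodes" metaphor concrete, here is a minimal sketch (the three nodes, their names, and all the numbers are invented for illustration; nothing here is specified by the post): a tiny Bayesian network in which pinning one belief as evidence and recomputing every posterior plays the role of propagating an update through the entire web.

```python
# Toy "web of beliefs": a three-node Bayesian network over binary variables.
# Updating one node and recomputing every posterior is the "propagate the
# update through the whole web" step described above. All numbers are made up.
from itertools import product

# Each node lists its parents and a conditional probability table mapping
# parent truth-values -> P(node is True | parents).
NETWORK = {
    "singularity_plausible": {"parents": [],
                              "cpt": {(): 0.3}},
    "ai_risk_matters":       {"parents": ["singularity_plausible"],
                              "cpt": {(True,): 0.8, (False,): 0.1}},
    "career_should_change":  {"parents": ["ai_risk_matters"],
                              "cpt": {(True,): 0.6, (False,): 0.05}},
}
NODES = list(NETWORK)

def joint(assignment):
    """Probability of one complete truth-assignment under the network."""
    p = 1.0
    for node, spec in NETWORK.items():
        parent_vals = tuple(assignment[parent] for parent in spec["parents"])
        p_true = spec["cpt"][parent_vals]
        p *= p_true if assignment[node] else 1.0 - p_true
    return p

def posteriors(evidence):
    """P(node=True | evidence) for every node, by brute-force enumeration."""
    totals = {node: 0.0 for node in NODES}
    normaliser = 0.0
    for values in product([True, False], repeat=len(NODES)):
        assignment = dict(zip(NODES, values))
        if any(assignment[k] != v for k, v in evidence.items()):
            continue  # inconsistent with the evidence, so skip it
        p = joint(assignment)
        normaliser += p
        for node in NODES:
            if assignment[node]:
                totals[node] += p
    return {node: round(totals[node] / normaliser, 3) for node in NODES}

if __name__ == "__main__":
    print("before the update:", posteriors({}))
    # "Taking the idea seriously": pin one node and let the whole web move.
    print("after the update: ", posteriors({"singularity_plausible": True}))
```

A real web of beliefs is vastly larger and the arithmetic is rarely this explicit, but the qualitative point carries over: changing one node should shift every node that depends on it, and declining to recompute is exactly the compartmentalization described above.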

There are many ideas that should be taken a lot more seriously, both by society and by Less Wrong specifically. Here are a few:

  • Existential risks and the possibilities for methods of prevention thereof.
  • Molecular nanotechnology.
  • The technological singularity (especially timelines and planning).
  • Cryonics.
  • World economic collapse.

Some potentially important ideas that I readily admit to not yet having taken seriously enough:

  • Molecular nanotechnology timelines.
  • Ways to protect against bioterrorism.
  • The effects of drugs of various kinds and methodologies for researching them.
  • Intelligence amplification.

And some ideas that I did not immediately take seriously when I should have:

  • Tegmark's multiverses and related cosmology and the manyfold implications thereof (and the related simulation argument).
  • The subjective for-Will-Newsome-personally irrationality of cryonics.1
  • EMP attacks.
  • Updateless-like decision theory and the implications thereof.
  • That philosophical and especially metaphysical intuitions are not strong evidence.
  • The idea of taking ideas seriously.
  • And various things that I probably should have taken seriously, and would have if I had known how to, but that I now forget because I failed to grasp their gravity at the time.

I also suspect that there are ideas that I should be taking seriously but do not yet know enough about; for example, maybe something to do with my diet. I could very well be poisoning myself and my cognition without knowing it because I haven't looked into the possible dangers of the various things I eat. Maybe corn syrup is bad for me? I dunno; but nobody's ever sat me down and told me I should look into it, so I haven't. That's the problem with ideas that really deserve to be taken seriously: it's very rare that someone will take the time to make you do the research and really think about it in a rational and precise manner. They won't call you out when you fail to do so. They won't hold you to a high standard. You must hold yourself to that standard, or you'll fail.

Why should you take ideas seriously? Well, if you have Something To Protect, then the answer is obvious. That's always been my inspiration for taking ideas seriously: I force myself to investigate any way to help what I value flourish. This manifests on both the small and the large scale: if a friend is going to get a medical operation, I research the relevant literature and make sure that the operation works and that it's safe. And if I find out that the development of an unFriendly artificial intelligence might lead to the pointless destruction of everyone I love and everything I care about and any value that could be extracted from this vast universe, then I research the relevant literature there, too. And then I keep on researching. What if you don't have Something To Protect? If you simply have a desire to figure out the world -- maybe not an explicit desire for instrumental rationality, but at least epistemic rationality -- then taking ideas seriously is the only way to figure out what's actually going on. For someone passionate about answering life's fundamental questions to miss out on Tegmark's cosmology is truly tragic. That person is losing a vista of amazing perspectives that may or may not end up allowing them to find what they seek, but that at the very least will change for the better the way they think about the world.

Failure to take ideas seriously can lead to all kinds of bad outcomes. On the societal level, it leads to a world where almost no attention is paid to catastrophic risks like nuclear EMP attacks. It leads to scientists talking about spirituality with a tone of reverence. It leads to statisticians playing the lottery. It leads to an academia where an AGI researcher who completely understands that the universe is naturalistic and beyond the reach of God fails to realize that this means an AGI could be really, really dangerous. Even people who make entire careers out of an idea somehow fail to take it seriously, to see its implications and how it should move in perfect alignment with every single one of their actions and beliefs. If we could move in such perfect alignment, we would be gods. To be a god is to see the interconnectedness of all things and shape reality accordingly. We're not even close. (I hear some folks are working on it.) But if we are to become stronger, that is the ideal we must approximate.

Now, I must disclaim: taking certain ideas seriously is not always best for your mental health. There are some cases where it is best to recognize this and move on to other ideas. Brains are fragile and some ideas are viruses that cause chaotic mutations in your web of beliefs. Curiosity and diligence are not always your friends, and even those with exceptionally high SAN points can't read too much Eldritch lore before having to retreat. Not only can ignorance be bliss, it can also be the instrumentally rational state of mind.2

What are ideas you think Less Wrong hasn't taken seriously? Which haven't you taken seriously, but would like to once you find the time or gain the prerequisite knowledge? Is it best to have many loosely connected webs of belief, or one tightly integrated one? Do you have examples of a fully executed belief update leading to massive or chaotic changes in a web of belief? Alzheimer's disease may be considered an 'update' where parts of the web of belief are simply erased, and I've already listed deconversion as another. What kinds of advantages could compartmentalization give a rationalist?

 


1 I should write a post about reasons for people under 30 not to sign up for cryonics. However, doing so would require writing a post about Singularity timelines, and I really really don't want to write that one. It seems that a lot of LWers have AGI timelines that I would consider... erm, ridiculous. I've asked Peter de Blanc to bear the burden of proof and I'm going to bug him about it every day until he writes up the article.

2 If you snarl at this idea, try playing with this Litany, and then playing with how you play with this Litany: 

If believing something that is false gets me utility,
I desire to believe in that falsity;
If believing something that is true gets me utility,
I desire to believe in that truth;
Let me not become attached to states of belief that do not get me utility.

260 comments

I, the author, no longer endorse this post.

Why? Did Will ever explain this?

Human thought is by default compartmentalized for the same good reason warships are compartmentalized: it limits the spread of damage.

A decade or thereabouts ago, I read a book called Darwin's Black Box, whose thesis was that while gradual evolution could work for macroscopic features of organisms, it could not explain biochemistry, because the intricate molecular machinery of life did not have viable intermediate stages. The author is a professional biochemist, and it shows; he's really done his homework, and he describes many specific cases in great detail and carefully sets out his reasons for claiming gradual evolution could not have worked.

Oh, and I was able to demolish every one of his arguments in five minutes of armchair thought.

How did that happen? How does a professional put so much into such carefully constructed arguments that end up being so flimsy a layman can trivially demolish them? Well I didn't know anything else about the guy until I ran a Google search just now, but it confirms what I found, and most Less Wrong readers will find, to be the obvious explanation.

If he had only done what most scientists in his position do, and said "I have faith in God," a...

Compartmentalized ships would be a bad idea if small holes in the hull were very common and no one bothered with fixing them as long as they affected only one compartment.

It seems like he had one-way decompartmentalisation, so that his belief in God was weighing on "science" but not the other way round.

Grognor (+9, 12y):
I'm going to ask you to recall your 2010 self now, and ask if you were actually trying to argue for a causal relationship that draws an arrow from the safety of compartmentalization to its existence. This seems wrong. It occurs to me that if you're evolution, and you're cobbling together a semblance of a mind, compartmentalization is just the default state, and it doesn't even occur to you (because you're evolution and literally mindless) to build bridges between parts of the mind.
TheOtherDave (+0, 12y):
Well, even if we agree that compartmentalized minds were the first good-enough solution, there's a meaningful difference between "there was positive selection pressure towards tightly integrated minds, though it was insufficient to bring that about in the available time" and "there was no selection pressure towards tightly integrated minds" and "there was selection pressure towards compartmentalized minds". Rwallace seems to be suggesting the last of those.
Grognor (+2, 12y):
Point, but I find the middle of your three options most plausible. Compartmentalization is mostly a problem in today's complex world; I doubt it was even noticeable most of the time in the ancestral environment. False beliefs e.g. religion look like merely social, instrumental, tribal-bonding mental gestures rather than aliefs.
TheOtherDave (+0, 12y):
Yeah, I dunno. From a systems engineering/information theory perspective, my default position is "Of course it's adaptive for the system to use all the data it has to reason with; the alternative is to discard data, and why would that be a good idea?" But of course that depends on how reliable my system's ability to reason is; if it has failure modes that are more easily corrected by denying it certain information than by improving its ability to reason efficiently with that data (somewhat akin to programmers putting input-tests on subroutines rather than write the subroutine so as to handle that kind of input), evolution may very well operate in that fashion, creating selection pressure towards compartmentalization. Or, not.
Dmytry (-2, 12y):
What about facts from the environment - is it good to gloss over the applicability of something that you observed in one context to another context? Compartmentalization may look like a good idea when you are spending over a decade putting an effective belief system into children. It doesn't look so great when you have to process data from the environment. We even see correlations where there aren't any. Information compartmentalization may look great if the crew of the ship is going to engage in pointless idle debates over the intercom. Not so much when they need to coordinate actions.
TheOtherDave (+0, 12y):
I'm not sure I'm understanding you here. I agree that if "the crew" (that is, the various parts of my brain) are sufficiently competent, and the communications channels between them sufficiently efficient, then making all available information available to everyone is a valuable thing to do. OTOH, if parts of my brain aren't competent enough to handle all the available information in a useful way, having those parts discard information rather than process it becomes more reasonable. And if the channels between those parts are sufficiently inefficient, the costs of making information available to everyone (especially if sizable chunks of it are ultimately discarded on receipt) might overcome the benefits. In other words, glossing over the applicability of something I observed in one context to another context is bad if I could have done something useful by non-glossing over it, and not otherwise. Which was reliably the case for our evolutionary predecessors in their environment, I don't know.
Dmytry (+0, 12y):
Well, one can conjecture counterproductive effects of intelligence in general and of any aspect of it in particular, and sure there were a few, but it stands that we did evolve intelligence. Keep in mind that without a highly developed notion of verbal 'reasoning' you may not be able to have the ship flooded with abstract nonsense in the first place. The stuff you feel - it tracks the probabilities.
TheOtherDave (+0, 12y):
Can you clarify the relationship between my comment and counterproductive effects of intelligence in general? I'm either not quite following your reasoning, or wasn't quite clear about mine. A general-purpose intelligence will, all things being equal, get better results with more data. But we evolved our cognitive architecture not in the context of a general-purpose intelligence, but rather in the context of a set of cognitive modules that operated adaptively on particular sets of data to perform particular functions. Providing those modules with a superset of that data might well have gotten counterproductive results, not because intelligence is counterproductive, but because they didn't evolve to handle that superset. In that kind of environment, sharing all data among all cognitive modules might well have counterproductive effects... again, not because intelligence is counterproductive, but because more data can be counterproductive to an insufficiently general intelligence.
Dmytry (+0, 12y):
The existence of evolved 'modules' within the frontal cortex is not settled science and is in fact controversial. It's indeed hard to tell how much data we share, though. Maybe without a habit of abstract thought, not so much. On the other hand, the data about human behaviours seems important.
Dmytry (-2, 12y):
The default state is that anything which is not linked to limb movement or other outputs, ever, might as well not exist in the first place. I think the issue with compartmentalization is that integration of beliefs is a background process that ensures a coherent response, whereby one part of the mind would not come up with one action and another part with another - which would make you e.g. drive a car into a tree if one part of the brain wants to turn left and the other wants to turn right.

The compartmentalization of information is anything but safe. When you compartmentalize e.g. your political orientation from your logical thinking, I can make you do either A or B by presenting the exact same situation in either a political or a logical way, so that one of the parts activates and arrives at either action A or action B. That is not safe. That is "it gets you eaten one day" unsafe.

And if you compartmentalize the decision making on a warship, it will fail to coordinate the firing of the guns, and will be sunk, even if it will take more holes. Consider a warship that is being attacked by several enemies. If you don't coordinate the firing of torpedoes, you'll have overkill fire at some of the ships, wasting firepower. You'll be sunk. It is a known issue in RTS games: you can beat a human with a pretty dumb AI if it simply coordinates the fire between units better.

The biologist in the example above is a single cherry-picked example from among the majority of scientists, for whom the process has worked correctly and who stopped believing that God created animals, or who have failed to integrate beliefs and are ticking time bombs wrt producing bad hypotheses. An edge case between atheists and believers, he is.
Grognor (+0, 12y):
I agree in most cases; however, there are some cases where ideas are very Big and Scary and Important where a full propagation through your explicit reasoning causes you to go nuts. This has happened to multiple people on Less Wrong, whom I will not name for obvious reasons. I would like to emphasize that I agree in most cases. Compartmentalization is bad.
Dmytry (+0, 12y):
I think it happens due to ideas being wrong and/or being propagated incorrectly. Basically, you would need extremely high confidence in a very big and scary idea before it can overwrite anything. The MWI is very big and scary. Provisionally, before I develop a moral system based on MWI, it is perfectly consistent to assume that it has some probability q of being wrong, and that the relative morality of actions, unknown under MWI and known under SI, does not change; consequently no moral decision (involving comparison of moral values) changes before there is a high-quality moral system based on MWI. As a quick hack, a moral system based on MWI is likely to be considerably incorrect and to lead to rash actions (e.g. quantum suicide that actually turns out to be as bad as normal suicide after you figure stuff out). The ship is compartmentalized against a hole in the hull, not against something great happening to it. An incorrect idea held with high confidence can be a hole in the hull, the water being the resulting nonsense overriding the system.
jimmy (+9, 14y):
That's the idea behind Reason as memetic immune disorder. Sure, compartmentalization can protect you from your failures, but it also protects you from your successes. If you can understand Reason as memetic immune disorder, you should also be able to get to the level of taking this into account. That is, think about how there is a long history of failure to compartmentalize causing failures - a history of people making mistakes - and ask yourself if you're still confident enough to act on it.
AnnaSalamon (+1, 14y):
I replied to your comment here.
timtyler (+0, 14y):
The author was an idiot. I too found the fatal flaw in about five minutes - in a bookshop. IMO, the mystery here is not the author's fail, but how long the "evolution" fans banged on about it for - explaining the mistake over and over and over again.

IMO, the mystery here is not the author's fail, but how long the "evolution" fans banged on about it for - explaining the mistake over and over and over again.

Because lots of people (either not as educated or not as intelligent) didn't realize how highly flawed the book was. And when someone is being taken seriously enough that they are an expert witness in a federal trial, there's a real need to respond. Also, there were people like me who looked into Behe's arguments in detail simply because it didn't seem likely that someone with his intelligence and education would say something that was so totally lacking in a point, so the worry was that one was missing something. Of course, there's also the irrational but highly fun aspect of tearing arguments into little tiny pieces. Finally, there's the other irrational aspect that Behe managed to trigger lots of people to react by his being condescending and obnoxious (see for example his exchange with Abbie Smith where he essentially said that no one should listen to her because he was a prof and she was just a lowly grad student).

timtyler (+0, 14y):
Re: "there's also the irrational but highly fun aspect of tearing arguments into little tiny pieces" I think that was most of it - plus the creationsts were on the other side, and the they got publicly bashed for a long time. I was left wondering why so many intelligent people wasted so much energy and time on such nonsense for so long. Dawkins and Dennet have subsequently got into the god bashing. What a waste of talent that is. I call it their "gutter outreach" program.
JoshuaZ (+7, 14y):
Standard beliefs in deities are often connected with a memetic structure that directly encourages irrationalism. Look at the emphasis on "faith" and on mysterious answers. If one is interested in improving rationality, removing the beliefs that directly encourage irrationality is an obvious tactic. Religious beliefs are also responsible for a lot of deaths and resources taken up by war and similar problems. Removing those beliefs directly increases utility. Religion is also in some locations (such as much of the US) functioning as a direct barrier to scientific research and education (creationism and opposition to stem cell research are good examples). Overall, part of why Dawkins has spent so much time dealing with religion seems to be that he sees religion as a major barrier to people actually learning about the interesting stuff. Finally, note that Dawkins has not just spent time dealing with religious beliefs. He's criticized homeopathy, dowsing, various New Age healing ideas, and many other beliefs.
timtyler (-1, 14y):
I figure those folk should be leading from the front, not dredging the guttering. Anyone can dispense with the ridiculous nonsense put forth by the religious folk - and they do so regularly. If anything, Dennett and Dawkins add to the credibility of the idiots by bothering to engage with them. If the religious nutcases' aim was to waste the time of these capable science writers - and effectively take them out of productive service - then it is probably "mission accomplished" for them.
JoshuaZ (+4, 14y):
So what would constitute leading from the front in your view? But there are a lot of science writers now. Carl Zimmer and Rebecca Skloot would be two examples. And the set of people who read about science is not large. If getting people to stop having religious hangups with science will make a larger set of people reading such material how is that not a good thing?
timtyler (+0, 14y):
I was much happier with what they were doing before they got sucked into the whirlpool of furious madness and nonsense. Well, "Freedom Evolves" excepted, maybe. Your question apparently presumes falsehoods about my views :-(
JoshuaZ (+0, 14y):
Clarify please? What presumptions am I making that are not accurate?
Perplexed (+4, 14y):
If I may attempt an interpretation, Tim is saying that the Great Minds should be busy thinking Great Thoughts, and that they should leave the swatting of religious flies to us lesser folk.
timtyler (+2, 14y):
"Why Richard Dawkins Doesn't Debate Creationists": * http://www.youtube.com/watch?v=BhmsDGanyes Yudkowsky proposes that we let them debate college students: * http://lesswrong.com/lw/17f/let_them_debate_college_students/
timtyler (+3, 14y):
Uh, I never claimed that getting people to stop having religious hangups was not a good thing in the first place.
JoshuaZ (+0, 14y):
Ah, sorry, bad phrasing on my part. I withdraw the last question, and replace the end with the following argument: "And the set of people who read about science is not large. Getting people to stop having religious hangups with science will make for a larger set of people reading such material, which is a good thing, and people like Dawkins will do that aspect more effectively than if they were simply one of many science popularizers talking to largely the same audience."
timtyler (+0, 14y):
As I understand it, there is precious little evidence of much marginal benefit - no matter who is making the argument. The religious folk realise it is the devil talking, put their fingers in their ears, and sing the la-la song - which works pretty well. Education will get there in the end. We have people working on that - but it takes a while. The internet should help too. Dennett once explained: "Yes, of course I'd much rather have been spending my time working on consciousness and the brain, or on the evolution of cooperation, for instance, or free will, but I felt a moral and political obligation to drop everything for a few years and put my shoulder to the wheel doing a dirty job that I thought somebody had to do." Someone has to clean the toilets too - but IMO it doesn't have to be Daniel Dennett.
[anonymous] (14y):

If you don't read creationists, it looks like there aren't any, and it looks like "evolution fans" are banging on about nothing. But, in reality, there are creationists, and they were also banging on in praise of the book. David Klinghoffer, for instance (a prominent creationist with a blog).

Don't take ideas seriously unless you can take uncertainty seriously.

Taking uncertainty seriously is hard. Pick a belief. How confident are you? How confident are you that you're that confident?

The natural inclination is to guess way too high on both of those. Not taking ideas seriously acts as a countermeasure to this. It's an over-broad countermeasure, but better than nothing if you need it.
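
One way to make "how confident are you that you're that confident?" concrete is to score stated confidences against outcomes after the fact. Here is a minimal, hypothetical sketch (the prediction record is invented; nothing in it comes from this comment): it computes a Brier score and checks how often claims asserted at 90%+ confidence actually came true.

```python
# Hypothetical record of (stated probability that a claim is true, whether it was true).
predictions = [(0.9, True), (0.9, False), (0.8, True), (0.95, True),
               (0.7, False), (0.99, True), (0.6, True), (0.9, True)]

# Brier score: mean squared gap between stated confidence and outcome (lower is better).
brier = sum((p - (1.0 if outcome else 0.0)) ** 2
            for p, outcome in predictions) / len(predictions)

# Crude second-order check: among claims stated at >= 0.9 confidence,
# how often were you actually right?
high = [(p, o) for p, o in predictions if p >= 0.9]
hit_rate = sum(1 for _, o in high if o) / len(high)

print(f"Brier score: {brier:.3f}")
print(f"Claims stated at >= 0.9 confidence: {len(high)}, fraction correct: {hit_rate:.0%}")
```

If the hit rate among your "90%+" claims comes out well below 90%, the second-order confidence was misplaced even if each individual guess felt reasonable.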

Warning: This comment consists mostly of unreliably-remembered anecdotal evidence.

When I read the line "The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience. I can only imagine that it must feel both terrifying and exhilarating.", my immediate emotional reaction was "No!!! You don't want this experience!!! It's terrifying!!! Really terrifying!!!" And I didn't notice any exhilaration when it happened to me. Ok, there were some things that were a really big relief, but nothing I would consider exhilarating. I guess I'll talk about it some more...

The big, main push of my deconversion happened during exam time, in... what was it? my second year of university? Anyway, I had read Eliezer's writings a few days (weeks? months?) ago, and had finally gotten around to realizing that yes, seriously, there is no god. At the time, I had long since gotten into the habit of treating my mind's internal dialogue as a conversation with god. And I had grown dependent on m...

What are ideas you think Less Wrong hasn't taken seriously?

I think LW as a whole (though not some individuals) has ignored practical issues of cognitive enhancement.

From outside-in:

  • Efficient learning paths. The Sequences are great, but there is a lot of stuff to learn from books, and it would be great to have dependencies mapped out, with the best materials for things like physics, decision theory, logic, and CS stuff.

  • Efficient learning techniques: there are many interesting ideas out there, such as Supermemo and speed reading, but I do not have time to experiment with them all.

  • Hardware tools. I feel like I am closer integrated with information with iphone/ipad, if reasonable eyewear comes to market this will be much enhanced.

  • N-back and similar.

  • Direct input via brainwaves/subvocalisation.

  • Pharmacological enhancement.

  • Real BCIs, which are starting to come to market servicing disabled people.

Even if these tools do not lead to a Singularity (my guess), they might give an edge to FAI researchers.

Jonathan_Graehl (+9, 14y):
dual n-back: for the past month, I've spent 2-5 minutes most days on it. I can do dual 4-back with 95%+ accuracy and 5-back with 60%, and I've likely plateaued (naturally, my skill rapidly improved at first). I enjoy it as "practice focusing on something", but haven't noticed any evidence of any general improvement in memory or other mental abilities. I plan on continuing the habit indefinitely.
Will_Newsome (+6, 14y):
After doing 100 trials of dual N back stretched over a week (mostly 4 back) I noticed that I felt slightly more conscious: my emotions were more salient, I enjoyed simple things more, and I just felt generally more alive. There were tons of free variables for me, though, so I doubt causation. Did you notice anything similar?
Kutta (+6, 14y):
A collection of anecdotal evidence from players is available in Gwern's great n-back FAQ. I played for some two months earlier this year and my max level was 8. I haven't really noticed anything, but since I took no tests prior to or after the training I can't really say a firm thing about it. The experience of getting better at n-back is exhilarating and bewildering enough that I plan to resume playing it soon. I mean, at the earlier levels I often felt intensely that a certain next level I just got to is physically impossible to beat, and behold, after a few days it seemed manageable, and after a week or so, trivial. All of this without any conscious learning process taking place, or any strategy coalescing. It's an especially unadulterated example of how a brain that gets rewired feels from the inside.
gwern (+6, 14y):
Yes, I know the same feeling (and have remarked on it once or twice on the DNB ML) - it's very strange how over a day or two one can suddenly jump 10 or 20% on a level and have a feeling that eg. suddenly D4B is clear and comprehensible, while before only D3B was and D4B was a murky mystery one had difficulty keeping in one's head. On the other hand - D8B? Dammit! I've been at n-backing for something like 2 years now, and have been stuck on D4B for months. You, Jonathan, and Will just go straight to D4B or D8B within a few months with ease. I must be doing something wrong. (On a sidenote, as in the FAQ, I ask people for their negative or null reports as well as their positive ones. This thread is unusual in 2 null reports to 1 positive, but I'm sure there are more LWers who've tried!)
steven0461 (+1, 14y):
I did maybe 10-15 half-hour sessions of mostly D5B-D6B last year over the course of a few weeks and didn't notice any effects.
gwern (+0, 14y):
Thanks; I've added it.
Normal_Anomaly (+0, 12y):
All the links to your FAQ in this thread are broken. Does the FAQ still exist?
gwern (+0, 12y):
Oh sure, it's just that I finally built a real website (as opposed to continuing to abuse Haskell.org's free hosting): http://www.gwern.net/DNB%20FAQ Needless to say, it's been expanded a lot since then.
Jonathan_Graehl (+5, 14y):
Sure. I tried a bunch of things at once, with the purpose of feeling and thinking better. Collectively, they worked. However, this means that I have probably just acquired a bunch of ungrounded superstitions. I've recorded what I did but haven't learned anything from that data other than: I am unlikely to ever continue a daily practice of either napping or meditating. I would speculate that dual n-back is a repetitive and simple enough* stimulus that it's likely to offer whatever "self-awareness" benefits I felt in meditating.

* it's simple in coarse physical terms; obviously the actual sequences are randomly varied
Will_Newsome (+2, 14y):
Do you know of a good online resource that I could use to get a fuller picture of the different approaches of cognitive enhancement? I've used Anders Sandberg's page before but I imagine it's rather out of date.
xamdam (+0, 14y):
Unfortunately no. I suspect Michael Vassar might have some ideas, if you can get him to write something up. Otherwise hoping someone else chips in.
Will_Newsome (+2, 14y):
I believe he did send me a hugeeeee folder of IA-related papers and such at some point (or Justin Shovelain forwarded it to me). I'll try to find it.
xamdam (+2, 14y):
Yes, please share what you can. Or forward to znkxurfva@tznvy.pbz if possible.
Will_Newsome (+2, 14y):
Alright, I can't find what I was looking for, but after the Singularity Summit I'll see what kinds of resources I can get from Vassar and Justin.
Paul Crowley (+7, 14y):
This would be worth a top level post by itself, wouldn't it?
simplicio (+1, 14y):
I'd appreciate being cut in too! My e-mail is ispollock [at] gmail.com
Randaly (+0, 14y):
Could you email it to nojustnoperson [at] gmail.com, too?
xamdam (+0, 14y):
BTW, some of the Engelbart-related posts seem relevant here.

Now, I must disclaim: taking certain ideas seriously is not always best for your mental health.

I'm highly skeptical of these claims of things that are true but predictably make you insane. Are you sure you aren't just coddling yourself, protecting yourself from having to change your mind? More to the point, that sounds like a pretty good memetic evolution to protect current beliefs. "I've always held that X is false. Surely if I came to believe that X was true I would go insane or become evil! Therefore X is false!"

Once upon a time I would have thought that accepting the fact that there is no ultimate justice in the universe would drive me insane or lead to depression. Yet I have accepted that fact, and I'm as happy as ever. (Happiness set points are totally unfair, but they're good for some things.)

jimrandomh (+4, 14y):
I doubt that there are any true things that can predictably make anyone go insane, but something tailored to a specific person could. And there are some statement+person-type pairs that seem to reliably damage mental health, so there's at least some real danger there. Of course, these are very rare, so the prior probability for an idea being harmful should be very low, and it shouldn't be considered as a possibility without strong external evidence (such as examples of it happening to other people; but a mere intuitive judgment would not be sufficient evidence).
Will_Newsome (+4, 14y):
Right, and importantly it's not just clinical insanity that is damaging. When you're working on a hard and important problem, any type of mental irregularity or paralysis is potentially very harmful. I suppose most people don't do the kinds of research where this is a real concern, but some do, and I figured it was important to address the small population of LW that might take one idea too many seriously and end up needlessly paranoid/depressed/etc because of a foolhardy desire to be completely 'rational'. The Litanies of Tarski and Gendlin have exceptions, but those exceptions should not justify excuses. If you do not heed the exceptions you won't be in a state to excuse yourself: the inferential distance would be too dangerous and too large.

Does anyone not have any problems with taking ideas seriously? I think I'm in this category because ideas like cryonics, the Singularity, UFAI, and Tegmark's mathematical universe were all immediately obvious to me as ideas to take seriously, and I did so without much conscious effort or deliberation.

You mention Eliezer and Michael Vassar as people good at taking ideas seriously. Do you know if this came to them naturally, or was it a skill they gained through practice?

Will_Newsome (+5, 14y):
I recall you asked a similar question near the end of the decision theory workshop. I think that every long term SIAI member has no problem with this skill (though of course there's some variance, and it's hard to know what everyone is thinking; also some are more consistent than others). Outside of SIAI there seem to be a lot fewer examples, but a few names come to mind. (Wei Dai is one of them.) I have no idea about Michael Vassar. I do know that he seems to have had this skill for many years at least, from various papers and comments I've seen of his that were way ahead of everyone else at the time when it came to identifying the most relevant and critical arguments. But it does seem like Eliezer was born with a natural predisposition towards this kind of rationality, if the examples from his childhood and teenage years found in the sequences are considered reasonably accurate.
Wei Dai (+3, 14y):
It seems like this post could use some more empirical data, and you're probably in a good position to gather it. You said that every long term SIAI member has no problem with this skill (which makes sense because if they did have a serious problem with this skill they probably wouldn't have become a long term SIAI member in the first place) but how did they become that way? What kind of things did they find useful for getting better at it?
cousin_it (+2, 14y):
For what it's worth, I have a strong injunction against taking ideas seriously. I always seem to want better proofs than are available. This doesn't look like a double standard from inside: I disbelieve in the Singularity only slightly more than I disbelieve in space elevators and fusion power in the near future. I wonder why you take Tegmark's multiverse seriously. It seems to be the odd one out on your list, an obviously wrong idea. Have they found a workaround for the problem of teacups turning into pheasants?
Wei Dai (+8, 14y):
I'm surprised that you weren't aware that I took Tegmark's multiverse seriously, since I mentioned it in the UDT post. It was one of the main inspirations for me coming up with UDT. You can see here a 2006 proto-UDT that's perhaps more clearly based on Tegmark's idea. Well, UDT is sort of my answer to that. In UDT you can no longer say "I assign a small probability for observing this teacup turning into a pheasant" but you can still say "I'm willing to bet a large amount of money that this teacup won't turn into a pheasant." See also What are probabilities, anyway? I'm not sure if that answers your question, so let me know. (You might also be interested in UDASSA, which was an earlier attempt to solve the same problem.)
cousin_it (+1, 14y):
This sounds circular to me. Why are you willing to bet a large amount of money that this teacup won't turn into a pheasant? Why do we happen to have a "preference" for a highly ordered world?
Wei Dai (+5, 14y):
One approach to answering that question is the one I gave here. Another possibility is that there is something like "objective morality" going on. Another one is that our preferences are simply arbitrary and there is no further explanation. So I think this is still an open question, but there's probably an answer one way or another, and the fact that we don't know what the right answer is yet shouldn't count against Tegmark's idea. Furthermore, I think denying Tegmark's idea only leads to more serious problems, like why does one universe "exist" and not another, and how do we know that one universe exists and not two or three?
cousin_it (+0, 14y):
There may be a grain of truth in this kind of theory, but I cannot see it clearly yet. How exactly do you separate statements about the mind ("probability as preference") from statements about the world? What about bunnies, for example? Bunnies aren't very smart, but their bodies seem evolved to make some outcomes more probable than others, in perfect accord with our idea of probability. The same applies to plants, that have no brains at all. Did evolution decide very early on that all life should use our particular "random" concept of preference? (How is it encoded in living organisms, then?) Or do you have some other mechanism in mind?
Vladimir_Nesov (+1, 14y):
The shared traits come from shared evolution, that operates in the context of our physics and measure of expected outcomes. The concept of expectation implies evolution (given some other conditions), and evolution in its turn makes organisms that respect the concept of expectation (that is, persist within evolution, get selected).
cousin_it (+1, 14y):
If you believe in "measure of expected outcomes", there's no problem. Wei was trying to dissolve that belief and replace it with preference encoded in programs, or something. What do you think about this now? To make it more pithy: are there, somewhere in the configuration space of our universe, evolved pointy-eared humanoids that can solve NP-complete problems quickly because they don't respect the Born probabilities? Are they immune to "spontaneous existence failure", from their own point of view?
Vladimir_Nesov (+2, 14y):
What do you mean by "believe"? To refer to the concept of evolution (as explanation for plants and bunnies), you have to refer to the world, and not just the world, but the world equipped with measure (quantum mechanical measure, say). Without that measure, evolution doesn't work, and the world won't behave as we expect it to behave. After that is understood, it's not surprising that evolution selected organisms that respect that measure and not something else. So, I'm not assuming measure additionally, the argument is that measure is implicit in your very question. The NP-solving creatures won't be in our universe in the sense that they don't exist in the context of our universe with its measure. When you refer to our universe, you necessarily reference measure as part. It's like a fundamental law, a necessary part of specification of what you are talking about.
cousin_it (+1, 14y):
Um, no. I don't know of any fundamental dynamical laws in QM that use measure. You can calculate the evolution of the wavefunction without mentioning measure at all. It only appears when we try to make probabilistic predictions about our subjective experience. You could equip the same big evolving wavefunction with a different measure, and get superintelligent elves. Or no?
Vladimir_Nesov (+1, 14y):
Yes, but then you won't be talking about our world in the usual sense, because, say, classical world won't work as expected anymore given those laws (measure). If you don't include measure, you don't get any predictions about what you expect to see in reality, while that's what physics is normally all about.
cousin_it (+2, 14y):
Uh... So, our subjective experience matches the Born probabilities because our minds are implemented with macroscopic gears, which require classical physics (and thus Born probabilities) to function in a stable manner? This sounds like it might be an explanation, but we'd need to show that other probability rules lead to unstable physics (no planets, or no proteins, or something like that). And even if we had proof of that, I think some leftover mystery would still remain.
Vladimir_Nesov (+3, 14y):
I begin to feel that the mystery has been dissolved. Even if other measures (or indeed physical laws) lead to lawful enough processes that can also support evolution, it doesn't impact the notion of anticipation, because our anticipation matches our evolution, and our evolution exists in the process under our measure. Also, it's not specifically minds that are macroscopic and depend on measure, it's evolution itself that is thus macroscopic and selects replicators that replicate under that measure. For minds, anticipation matching measure is just another psychological adaptation, not necessarily a perfect match, but close enough. As another crazy hypothesis, building on the previous one, it's possible that we don't particularly care about our reality or our measure, like we don't care whether a person is in a biological body or uploaded, so that we will build our goodness out of different mathematics, having no effect on our reality. Thus, when we run the FAI, "nothing happens" in our world. Let's hope this applies to most UFAIs, that will therefore have no ill effect, because they don't care about our world or our measure.
cousin_it (+1, 14y):
I disagree with your first two paragraphs. Without a demonstration that the Born rule is somehow special (yields the most stable world for working complex machines, or something), the argument is still disappointingly circular. For example, if some other rule turns out to be even more conducive to evolution, the anthropic question arises: why aren't we in that world instead of this one? (Kinda like the Boltzmann brain problem, but in reverse.) Fortunately, checking the macroscopic behavior that arises from quantum physics under different assumed measures is a completely empirical question. Now I just need to understand enough math to build a toy model and see for myself how it pans out. For the record, I'm about 70% confident that this line of inquiry will fail, because other worlds will look just as stable and macroscopically lawful as ours. An FAI that doesn't help our world is a big fat piece of fail. Can I please have a machine that's based on less lofty abstractions, but actually does stuff?
Vladimir_Nesov (+2, 14y):
Could you frame the debate to avoid ambiguity? What argument do you refer to (in your own words)? In what way is it circular? (I feel that the structure of the argument is roughly that the answer to the question "what is 2+2?" is "4", because the algebraic laws assumed in the question imply 4 as the answer, even though other algebraic laws can lead to other answers.)

We just aren't; this question has no meaning. Why are you you, and not someone else? When you refer to yourself, you identify a particular concept (of "yourself"). That concept is distinct from other concepts, and that's the end of it. Two given concepts are not identical, as defined.

It's entirely possible that other rules (measures) are also conducive to evolution, but look at them as something happening "far away", like in universes with different fundamental constants. And over there, other creatures could've also biologically evolved. I'm not arguing with that, so finding other rules that produce good-enough physical processes doesn't answer any questions. Why am I a human, and not a dolphin?

We can't outright assume anything about preference. We need to actually understand it. Powerful optimization is bound to be weird, so the absurdity heuristic goes out the window. And correspondingly, the necessary standard of understanding goes up a dozen notches. We are so far away from the adequate level that if a random AGI is built 30 years from now, we still almost certainly fail to beat it. Maybe 50 or 100 years (at which point uploads start influencing progress) sounds more reasonable, judging by the rate of progress in mathematics. We need to work faster.
cousin_it (+1, 14y):
You are committing the general error of prematurely declaring a question "dissolved". It's always better to err in the other direction. That's how I come up with all my weird models, anyway. I just took a little walk outside and this clarification occurred to me: imagine an algorithm (Turing machine) running on a classical physical computer, sitting on a table in our quantum universe. The computer has the interesting property that it is "stable" under the Born rule: a weighted-majority of near futures ranked by the 2-norm have the computer correctly executing the next few steps of the computation, but for the 1-norm this isn't necessarily the case - the computer will likely glitch or self-destruct. (All computers built by humans probably have this property. Also note that it can be defined in terms of the wavefunction alone, without assuming weights a priori.) Then the algorithm will have "subjective anticipation" of a weird kind: conditioned on the algorithm itself running faithfully in the future, it can conclude that future histories with higher Born-weight are more likely. This idea has the drawback that it doesn't look at histories of the outside world, only the computer's internals. But maybe it can be extended to include observations somehow?
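
For concreteness, the two weightings being contrasted here can be written out; this is one formalization of the comparison, not notation taken from the thread. With c_i the amplitude of decohered branch i:

```latex
w_i^{(2)} = \frac{|c_i|^2}{\sum_j |c_j|^2} \quad \text{(Born rule, ``ranked by the 2-norm'')}
\qquad
w_i^{(1)} = \frac{|c_i|}{\sum_j |c_j|} \quad \text{(the hypothetical 1-norm alternative)}
```

"Stable under the Born rule" then means the computer executes correctly in a set of near futures whose total w^(2) weight is close to 1, while the w^(1) weight of that same set need not be.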
Vladimir_Nesov (+1, 14y):
"Beginning to feel" that the question is dissolved is far from the level of certainty required to "declare it dissolved", merely a hunch that it's the right direction to look for the answer (not that it's a question I'm especially interested in, but it might be useful to understand it better). I agree with your description in the second paragraph, but don't clearly see what you wanted to communicate through it. (Closest salient idea is Hanson's "mingled worlds".)
Vladimir_Nesov (+3, 14y):
Evolution happened in that ordered world, and it built systems that are expected (and hence, expect) to work in the ordered world, because working in an ordered world was the criterion for selecting them in that ordered world in the past. In order to survive/replicate in an ordered world (a narrow subset of what's possible), it's adaptive to expect an ordered world.
Vladimir_Nesov (+0, 14y):
...which seems to be roughly the same "reality is a Darwinian concept" nonsense as what I came up with (do you agree?). You can still assign probabilities though, but they are no longer decision-theoretic probabilities.

It seems like you're vulnerable to time-wasting Doom memes. But perhaps you're aesthetically/heuristically selective about which you take seriously. And perhaps it's this obsessing you do that gives you not just time served in a frenzy of caring, but actually true (and possibly instrumental) ideas as a byproduct.

Will_Newsome (+4, 14y):
I'm also vulnerable to ideas that seem like they could lead to gaining infinite computing power in finite time. Being a bounded agent means I care only finitely much about infinite utility, but I still look into lots of ways that one could get infinite computing power that I'm sure most people would ignore outright. I'm not sure what it means to be vulnerable to time-wasting Doom memes. I spend at the very most six hours a day really researching the possibility, probability, and survivability of Doom. Most days I spend 2 hours. I guess I could spend that time learning to play the piano or summat, but that'd feel kinda weak by comparison. And I have all those other hours to learn how to play piano and paint and cook and be awesome at everything. And on top of it seemingly being an extremely good use of my time, it's fun for me as a nerd to be on the forefront of certain kinds of metaphysics and decision theory research. The kind of Doom memes I take seriously are the ones that seem the most probable, of course. uFAI for instance seems really damn probable. The heuristics I use are the ones I outline in my post above about how to take ideas seriously. If I run an idea through those heuristics, and throw the kitchen sink of Less Wrong rationality techniques at it, then I start to take it rather seriously.
Jonathan_Graehl (+0, 14y):
I didn't mean to imply that all such thoughts are a waste, or that any of the usual worries around here are silly. I meant that if you really feel obligated to take seriously claims of alarming differences in utility, you'd end up wasting time digging through ridiculous religious claims. Clearly it's not the case that you do this.
Will_Newsome (+2, 14y):
Hm, I wonder how many atheists have taken Pascal's wager seriously. If I'm not confident of the flaws of majoritarianism then failing to Aumann update on the testimony of a billion Christians would seem to be a bad idea. And if I think that the belief of a billion Christians is even small evidence that a Christian god is more likely to control most of the measure of computations that include myself than any other god then the atheist-god wager argument doesn't save me from having to disregard a possibility for infinite utility. But perhaps I forget the stronger arguments against Pascal's wager. At any rate, you're right that I don't go around looking for ridiculous religious claims to worry about, but I'm at least willing to take Pascal's wager a little bit seriously. (Failing to do so can also lead to falling into the Pascal's wager fallacy fallacy.)
FAWS (+5, 14y):
You don't just have to worry about one specific atheist-god, but also any jealous gods, any singular god that would consider beliefs about a singular god beliefs about themself, and feel insulted by being thought to be like what JHWH is supposed to be like, any god that punishes giving in to imagined blackmail (hell) just to make blackmail less likely, and so on. These aren't symmetric because e. g. anti-jealous gods that reward worship of very different gods, including one particular very jealous god, seem less likely than jealous gods.
Will_Newsome (+3, 14y):
Hmuh, I'd never exactly thought of thinking about YHWH as a blackmailing simulator AI, but in an ensemble universe that description seems to fit. That's pretty funny. :)
Jonathan_Graehl (+1, 14y):
Agreed - this is the usual response, and the one that works for me if I can't quite muster up the confidence to say "0% probability for infinite-torture JHWH (or variation)". I guess you can justify something like p=0 with a combination of: "you haven't defined what you mean by JHWH sufficiently for me to agree or disagree", "ok, you've told me enough that I see JHWH as a logical impossibility". Once a hypothetical god passes those bars, then you need recourse to all the possible god hypotheses. Privileging the Hypothesis is a finite-scale version of the same objection.

I would not question what you are taking seriously and it seems fairly typical of the LW group.

On the other hand, I am surprised that climate change is rarely or never mentioned on LW. The loss of biodiversity and the rate of extinction - ditto. We are going through a biological crisis. It is bad enough that a 'world economic collapse' might even be a blessing in the long term.

You do not mention the neuroscience revolution but I am sure I have noticed some of the LW group taking it seriously.

This may be the place to mention cryonics without starting anot...

JanetK:

The loss of biodiversity and the rate of extinction - ditto. We are going through a biological crisis. It is bad enough that a 'world economic collapse' might even be a blessing in the long term.

Setting aside the more complex issue of climate change for the moment, I'd like to comment specifically on this part. Frankly, it has always seemed to me that alarmism of this sort is based on widespread popular false beliefs and ideological delusions, and that people here are simply too knowledgeable and rational to fall for it.

When it comes to the "loss of biodiversity," I have never seen any coherent argument why the extinction of various species that nobody cares about is such a bad thing. What exact disaster is supposed to befall us if various exotic and obscure animals and plants that nobody cares about are exterminated? If a particular species is useful for some concrete purpose, then someone with deep enough pockets can easily be found who will invest into breeding it for profit. If not, who cares?

Regarding the preservation of wild nature in general, it seems to me that the modern fashionable views are based on some awfully biased and ignorant assumptions. Peo... (read more)

7KrisC14y
Why is biodiversity important?

  • Protection from disease: When there are a variety of species, a single pathogen is less likely to be able to ravage an ecosystem.
  • Protecting minority humans: A species of negligible value to a dominant society may be of critical value to a marginal society.
  • Protection of sentient species: Some endangered species are capable of learning language. Some humans are not. I typically value worth on a combination of mental traits. Some animals are capable of holding jobs. Some humans are not. Many people often value worth by productivity. Some animals are more valuable than some humans.
  • Natural history: DNA is subject to statistical analysis. This analysis can provide insight into previous environments and the adaptations needed to survive them. Humans may have a future use for a solution already encoded by another species.
  • Undiscovered potential: Most models would place a non-negligible value upon an unknown self-replicating organism that has been adapted to the modeler's environment over several million generations. At the very least, identification, classification, and understanding would be attempted before placing a value.
  • Value from scarcity: Economics. As supply decreases, cost increases.
  • Ethics: Treat others as you would have yourself be treated. Don't afflict others with the negative consequences of your actions. Protect the oppressed. Be a good neighbor. Share. Improve your environment for the next visitor. Because you may be judged by the rules you apply to others.

It is good to reconsider our memes, but for me biodiversity passes. I've tried to keep this brief in order to maintain clarity.
7kodos9614y
Biologists have DNA samples of every known species.

Ok, but how much value would it place on an organism which wasn't adapted to the modeler's environment, as demonstrated by the fact that it was selected against and went extinct?

OK, but what reason, other than status quo bias, is there to prefer one result over the other?

If so, then protecting that species is in the interests of the human population in question, and it then becomes a question of how best to serve their human interests. But that doesn't get you anywhere as far as biodiversity, in and of itself, having instrumental value.

You probably mean price, not cost... but what does that have to do with anything? We're trying to establish that biodiversity has a utilitarian purpose... how does this address that? If something is useless, who cares how much supply of it there is, or how it's priced?

This is just begging the question.

I agree that non-human sentient species deserve protection, both because their existence has utility (in understanding the phenomenon of intelligence), and because I consider the protection of sentient life to be a terminal value. But what does that have to do with "biodiversity"?
5KrisC14y
Thanks for the reply.

I do not believe that to be true. Even if it is, a single sample is insufficient for a meaningful statistical analysis.

Non-negligible, depending on the criteria. It was my belief that human-caused environmental destruction was the issue at hand. The organism was adapted for humans' natural environment (most of Earth); the environment changed. The current environment supports human life. The recent bee scare was a multi-continent threat to a species very important to our way of life.

Pardon me, I thought I had changed that.

People who value money. Vladimir_M wrote: "If a particular species is useful for some concrete purpose, then someone with deep enough pockets can easily be found who will invest into breeding it for profit. If not, who cares?"

I took the creation of Friendly AI to be an ethical consideration which was accepted by all commenters. I think the relationships are parallel.

I had in mind elephants, primates, and cetaceans. Each of these groups faces existential risks. Maintaining biodiversity is protecting species from extinction. Sentient species are a specific subset.

I was trying to argue for the propagation of the biodiversity meme. I felt that Vladimir_M was contradicting that meme. I thought I was being clear that my argument was not meant to be purely utilitarian (in which case I would have used values, or at least comparisons), but instead to argue that biodiversity has value within a variety of systems.
9NancyLebovitz14y
I believe it's false. Good-sized animals are still being discovered. The ecology of micro-organisms is still being explored. A lot of what's worth finding out at our present level of knowledge isn't about whole organisms, it's about specific aspects-- consider the work being done with spider silk. Spider silk would probably still be valuable even if there weren't any living spiders.
5NancyLebovitz14y
On the small side, but pea-sized frog recently discovered.
2wedrifid14y
We were? Pardon me, my mistake. Please consider anything I wrote on the subject retracted. I'm a conscientious objector to utilitarianism.
2kodos9614y
If biodiversity is a terminal value of yours, then I can absolutely respect that, to exactly the same degree as anybody else's terminal values. But the commenter I was replying to clearly seemed to be arguing that biodiversity has instrumental value.
4wedrifid14y
I reference here only the difference between Utilitarianism and Consequentialism (with the former being often referenced but largely naive). Come to think of it, if 'providing happiness or pleasure as summed among all sentient beings' is actually the measure of instrumental value, then you really only need a dozen species of plant and you've got all the 'happiness and pleasure' humans are likely to need.
2XFrequentist14y
Interesting, I just had a chat about this hypothesis with a Lyme disease expert. Lyme is apparently held up as the best example for this argument, but field data and mathematical modeling indicate that it isn't true (I could probably dig up the relevant paper if you're interested, but I haven't read it). I don't know for sure about other zoonotic diseases in wildlife, but I don't think this is certain enough to just be stated as fact. Your other points seem worthy of consideration, but on the whole it seems the marginal benefit of a member of this crowd worrying about biodiversity, while not utterly negligible, is small.
4wedrifid14y
Me. And I'm not alone. Many humans do value the preservation of significant elements of biodiversity that don't have any 'concrete', objective value. This is arbitrary only in the sense that any terminal value is arbitrary. I suggest that it is not nearly as 'easy' to find someone to preserve a given part of the commons, even when that part would be considered valuable by the extrapolated volition of the population.
3teageegeepea14y
As George Carlin once said, "The earth will manage just fine. We're the ones who are fucked!". The fact that nature will endure is not that reassuring to any particular apex predator. I agree though that "biodiversity" needs some backup arguments for us to care about it.
2JanetK14y
I am not a person who believes in providence, or the market's invisible hand, or the balances that protect democracy, or Gaia. There are systems that are stable for long periods because of massive negative feedback. But that very feedback can turn positive under unusual situations, equilibria can disappear, and systems collapse. I do not know whether we are going to 'fall over a cliff' and I don't think others do either. We just don't know enough. It is certainly clear to me that we are in danger, just not how much. WE DO NOT KNOW. The planet has had periods of mass extinction before and has recovered, but the recovered biosphere was very different from the lost one. Technically we are losing species at the sort of rate that appeared during previous mass extinctions. Humans may be the 'dominant life form' that loses out this time.
1kodos9614y
I don't have a good cite handy, but I've read enough on the subject over the years to say confidently that, technically, no, this is just not the case.
3JanetK14y
Here are some links to numbers and graphs:

http://www.pbs.org/wgbh/evolution/library/03/2/l_032_04.html
http://www.whole-systems.org/extinctions.htmls
http://www.sourcewatch.org/index.php?title=The_Sixth_Great_Extinction

The rate is extremely high, and it will continue (and probably increase) if nothing is done.
0DilGreen13y
I share the puzzlement of others here that, after a post where bioterrorism, cryonics and molecular nanotechnology are listed as being serious ideas that need serious consideration - by implication, to the degree that they might significantly impact upon the shape of one's 'web of beliefs' - the topics of climate change and mass extinction are given such short shrift, and in terms that, from my point of view, only barely pass muster in the context of a community ostensibly dedicated to increasing rationality and overcoming bias. I find little rationality and enormous bias in phrases like: "... why the extinction of various species that nobody cares about is such a bad thing". The ecosystem of the planet is the most complex sub-system of the universe of which we are aware - containing, as it does, among many only partially explored sub-systems, a little over 6 billion human brains. Given that one defining characteristic of complex systems is that they cannot be adequately modelled without the use of computational resources which exceed the resources of the system itself [colloquially understood as the 'law of unintended consequences'], it seems manifestly irrational to be dismissive of the possible consequences of massive intervention in a system upon which all humans rely utterly for existence. Whether or not one chooses to give credence to the Gaia hypothesis, it is indisputable that the chemical composition of the atmosphere and oceans is conditioned by the totality of the ecosystem; and that the climate is in turn conditioned largely by these. Applying probabilistic thinking to the likely impact of bio-terrorism on the one hand, and climate change on the other, we might consider that, um, five people have died as a result of bioterrorism (the work, as it appears, of a single maverick and thus not even firmly to be categorised as terrorism) since the second world war, while climate change has arguably killed tens of thousands already in floods, droughts,
9thomblake14y
N.B. There is no place on Less Wrong where you can post that you are not signed up for cryonics and not have people immediately pester you about it. Especially if you are old or dying.
5Paul Crowley14y
I would not pester someone who was old or dying about cryonics unless I had reason to believe they also had a spare $30,000 in the bank.
3orthonormal14y
If someone figured out today how to reverse and stave off aging, wouldn't you want to give it a try (and wait a while before deciding on mortality)? If so, this isn't a very good objection to cryonics.
4JanetK14y
If I had my own body and it was healthy with some time left, that would be fine. I suppose if we are imagining a surviving brain, then what is the problem with getting a rebuild to reverse aging and whatever caused the death?
1Larks14y
Today, I wish to live one more day. On any given day, I wish to live one more day. Therefore, I wish to live forever, by induction on the positive integers. -Eliezer, and Harry Potter.
3NancyLebovitz14y
Climate change could make a large difference to FAI issues, in addition to matters of biodiversity. In particular, climate change could make the world a good bit poorer simply because the infrastructure is built around specific expectations about weather, climate, and atmospheric composition. This isn't just the buildings (though that's important and not easily changed), but the seed stocks and specific details of large scale agriculture. Amateur FAI research is possible because there's a lot of money floating around that doesn't have more urgent uses and isn't under the control of people who mostly use money for conventional status signaling.
1kodos9614y
Sudden, drastic climate change, sure. But I'm not aware of any reason to believe we should be expecting that.... certainly not on the kind of time scales that the LW consensus seems to predict for the singularity.
0kodos9614y
My hypothesis would be that this is due to these issues falling within the Correct Contrarian Cluster
2JanetK14y
I don't understand your comment. Do you mean that climate change and biodiversity are not discussed because everyone in LW thinks the same about them? because there is nothing to say? because there is nothing that can be done? because it is settled science? Please explain how issues falling within the correct contrarian cluster are not discussed at all and why you think that these issues fall within the cluster.
3kodos9614y
Well, I was just speculating - I don't actually have any idea what the LW community in general thinks of the issue. What I was attempting to speculate is that these topics aren't discussed much because the contrarian/skeptical position on them is clustered with the set of contrarian positions commonly held by LWers, and that position is basically that they aren't deserving of much attention, especially relative to the kinds of existential risks LW is concerned with. I'm not sure how much more detail I can go into on my thinking without violating the "no current politics" rule.
5JanetK14y
Something I should have said in my previous reply. I agree with the "no current politics" rule. My problem is with what counts as politics - to some everything is and to some almost nothing is. When a subject is a purely scientific one and the disagreement is about whether there is evidence and how to interpret it, then this is an area for rationality. We should be looking at evidence and evaluating it. That does not involve what I would call politics.
3[anonymous]14y
When I first got here I thought "existential risk" referred to a generalization of the ideas related to catastrophic climate change. That is, if we should plan for the low-probability but deadly event that climate change will be very severe, then we should also plan for other low-probability (or far-future) catastrophes: asteroid impacts, biological and nuclear weapons, and unfriendly AI, among others. I was surprised that, of the existential risks discussed, catastrophic climate change never seems to come up at all. It's possible that this is an innocent result of specialization: people here spend most of their time thinking about AI, and not about other things that they aren't trained for. If there were an organization committed to clarifying how we think about planning for low-probability risks, that organization really ought to consider climate change among other risks. It would be an interesting thing to study: how far in the future is it reasonable for present-day institutions to plan? How can scientists with predictions of possible catastrophe effectively communicate to governments, businesses, etc. that they need to plan, without starting a panic? The art of planning for existential risks in general is something that could really benefit from more study. And it ought to include well-studied and well-publicized risks (like climate change) in addition to less-studied and less-publicized risks (like risks from technology not yet developed.) People have been planning for floods for a long time; surely people concerned about other risks can learn something from people who plan for the risk of floods. But I don't think SIAI or LessWrong is equipped for that mission.
1Larks14y
I think you're looking for the Future of Humanity Institute and their work on Global Catastrophic Risks
0JanetK14y
It would be nice if people could use some rationality in deciding which ideas to be contrarian on. Maybe I live in an ivory tower but I don't see any connection between biological/environmental dangers and politics.

For existential risks, we would probably benefit from having a wiki where we list all the risks and everyone can add information. At the moment there doesn't seem to be a space that really centers our knowledge on them.

4Will_Newsome14y
Strongly seconded, but is it reasonable to expect this to happen spontaneously? I'm pretty sure SIAI lacks the human resources to do this. Less Wrongers could do this but only if the site was reasonably well-seeded first and had a moderately memorable url, good hosting, et cetera. We have a hard enough time with the LW wiki. Even then it would require at least a few dedicated contributors to avoid falling into decay. The catastrophic risks movement seriously lacks in human capital.
1amcknight12y
I think it would be easier and even more valuable to simply do this on wikipedia. The only downside is that we might not be able to reference as many LessWrong articles and concepts as we might like to.
[anonymous]14y50

Compartmentalization is, in part, an architectural necessity - making sure beliefs are all consistent with each other is an intractable computational problem (I recall reading somewhere that the entire computational capacity of the universe is only sufficient to determine the consistency of, at most, 138 propositions).
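To make the intractability point concrete, here is a minimal sketch (mine, not from the comment) of what exhaustive consistency checking looks like: with n propositions there are 2^n candidate truth assignments to examine, so the cost doubles with every belief added to the web.

```python
from itertools import product

def consistent(constraints, n):
    """Return True if some assignment of truth values to n propositions
    satisfies every constraint (each constraint maps an assignment to bool)."""
    return any(all(c(a) for c in constraints)
               for a in product([False, True], repeat=n))  # 2**n assignments

# Three toy beliefs over propositions p0, p1, p2 (purely illustrative):
constraints = [
    lambda a: a[0] or a[1],         # p0 or p1
    lambda a: (not a[0]) or a[2],   # p0 implies p2
    lambda a: not (a[1] and a[2]),  # not both p1 and p2
]
print(consistent(constraints, 3))   # True, but the search space doubles per proposition
```

Real agents obviously don't do this exhaustively; the point is only that doing it exactly is exponentially expensive, which is one reason some compartmentalization is unavoidable.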

0amcknight12y
Working with OWL ontologies and other semantic web technologies eventually makes this very clear. Deductive reasoning is not scalable. But there are probably different levels/types of consistency that could be handled by brains like ours. A simple example: heuristics that tend to bring to one's attention the most difficult-to-reconcile beliefs.
0JohannesDahlstrom14y
In the worst-case scenario, with very pathological propositions. Even though the various important satisfiability problems are known to be NP-complete, there are known algorithms for those problems that run in polynomial time on almost all "interesting" inputs.

At the societal level, it leads to a world where almost no attention is paid to existential risks like EMP attacks.

How is an EMP attack an existential risk? EMPs, even large ones, are largely limited by line-of-sight. You can't EMP more than a continent in the most extreme circumstance. Large scale methods of making EMPs are either nukes or flux compression generators. The first provides more direct risk from targeting population centers. The second has a really cool name but isn't very practical and can't produce EMPs as large as a nuke. What am I missing?

3Will_Newsome14y
I was thinking nuclear EMPs which are very dangerous, but you're right to say they aren't of themselves existential risks; merely catastrophic ones. Edited post to reflect your criticism.

"What are ideas you think Less Wrong hasn't taken seriously?"

The moral status of the models (of others to predict their behaviour, of fictional characters, etc.) made by human brains, especially if there's negative utility in their eventual deletion.

[anonymous]13y20
  • Tegmark's multiverses and related cosmology and the manyfold implications thereof (and the related simulation argument).

In what areas are these implications? In particular, what are the implications for existential risk reduction?

I recently read "The Mathematical Universe" and this post but so far I haven't had any earth-shattering insights. Should I re-read the posts on UDT?

3Will_Newsome13y
We could be getting most of our measure from all sorts of places, which means that maybe a very small proportion of our measure is actually at stake. If all computations exist, some of those computations have preferences over other computations that include us. It might be good to understand such preferences. That in itself has many implications. But I guess I'd say that it's easy to go funny in the head when thinking about things like that, so be careful.

Despite some context, I'm still not sure precisely why the author no longer 'endorses' this post.

I don't fully endorse this post either, but I VERY much still endorse 'taking ideas seriously', and this post is still an important 'signpost' for that idea.

[anonymous]14y10

Ideas that should be taken more seriously by Less Wrong:

  1. Human beings are universal knowledge creators: they can create any knowledge that any other knowledge creator can create.
  2. The only known tenable way of creating knowledge is by conjectures and refutations.
  3. Induction is a myth.
  4. Theories are either true or false: there is no such thing as the probability that a theory is true.
  5. Confirmation does not make a theory more likely or better supported - the only role of confirmation is to provide a ready stock of criticisms of rival theories.
  6. The most importa
... (read more)
7Perplexed14y
Gee, I wonder what philosopher of science you have been reading. :) I would suggest that you read through the sequences with an open mind - particularly on your point #4. If you find it impossible to open your mind on that point, then open it to the possibility that the word "probability" can have two different meanings and that your point #4 only applies to one of them. If you find it impossible to open your mind to the possibility that a word might have an alternative meaning which you have not yet learned, then please go elsewhere. Regarding Popper, it is not so much that he is wrong, as that he is obsolete. We think we have learned that set of lessons and moved on to the next set of problems. If you have already begun reading the sequences, and were motivated to give us this dose of Popper because Eliezer's naive realism got on your nerves, well ... All I can say is that it got on my nerves too, but if you keep reading you will find that EY is not nearly as epistemologically naive as it might seem in the early sequence postings.
0[anonymous]14y
No, Popper is not obsolete, and clearly the lessons of Popper have not been learnt by many: consider the people who have not yet understood that induction is a myth. Consider, also, the people who constantly misrepresent what Popper said, like saying his philosophy is falsificationism, or that he was a positivist, or that he snuck induction in via the back door (you can find examples of these kinds of mistakes discussed here). Popper's ideas are in fact difficult for most people - they blow away the whole justificationist meta-context, a meta-context that permeates most people's thinking. Understanding Popper requires that you take him seriously. David Deutsch did that and expanded on Popper's ideas in a number of ways (you may be interested in a new book he has coming out called "The Beginning of Infinity"). He is another philosopher I follow closely. As is Elliot Temple (www.curi.us).
7Perplexed14y
Thanks for the links and references. I will look into them. I urge you once more to work your way through the sequences. It appears you have something to teach us, but I doubt that you will be very successful until you have learned the local jargon, and become sufficiently familiar with our favorite examples to use them against us. However, I have to say that I was a bit disconcerted by this: Now if you told me that the standard definition of induction misrepresents the evidence-collection process, or that you know how to dissolve the problem of induction, well then I would be all ears. But when you say that "induction is a myth" I hear that as saying that everyone who has thought seriously on the topic, from Hume to the present ... well, you seem to be saying that all those smart people were as deluded as the medieval philosophers who worried about angels dancing on the heads of pins. See, the thing is, I would have to keep upvoting such arrogance and stupidity, just so the comment to which I am responding doesn't disappear. And I don't want to do that.
2[anonymous]14y
You do realize that Hume held that induction cannot be logically justified? He noticed there is a "problem of induction". That problem was exploded by Karl Popper. Have you read what he has to say and taken seriously his ideas? Have you read and taken seriously the ideas of philosophers like David Deutsch, David Miller, and Bill Bartley? They all agree with Popper that: Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure - Karl Popper (Conjectures & Refutations, p 70).

You do realize that Hume held that induction cannot be logically justified? He noticed there is a "problem of induction".

Of course. That is why I mentioned him.

That problem was exploded by Karl Popper. Have you read what he has to say and taken seriously his ideas?

"Exploded". My! What violent imagery. I usually prefer to see problems "dissolved". Less metaphorical debris. And yes, I've read quite a bit of Popper, and admire much of it.

Have you read and taken seriously the ideas of philosophers like David Deutsch, David Miller, and Bill Bartley?

Nope, I haven't.

They all agree with Popper that:

Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure - Karl Popper (Conjectures & Refutations, p 70).

You know, when giving page citations in printed texts, you should specify the edition. My 1965 Harper Torchbook paperback edition does not show Popper saying that on p 70. But, no matter.

One of the few things I dislike about Popper is that he doesn't seem to understand statistical inference. I mean, he is totally clueless on t... (read more)

2[anonymous]14y
Perhaps you should know I have published papers where I have used Bayes extensively. I am well familiar with the topic (edit: though this doesn't make me any kind of infallible authority). I was once enthusiastic about Bayesian epistemology myself. I now see it as sterile. Popperian epistemology - especially as extended by David Deutsch - is where I see fertile ground.

Cool. But more to the point, have you published, or simply written, any papers in which you explain why you now see it as sterile? Or would you care to recommend something by Deutsch which reveals the problems with Bayesianism. Something that actually takes notice of our ideology and tries to refute it will be received here much more favorably than mere diffuse enthusiasm for Popper.

-4[anonymous]14y
The quote is from the 3rd ed., 1968. You say you have read Popper, so you should not be surprised by the quote. Your response above is just the argument from incredulity. Do you have a better criticism?
5Perplexed14y
I'm not surprised by the quote. I just couldn't find it. It apparently wasn't in the 2nd edition. But my 2nd edition index had many entries for "induction, myth of _" so I don't doubt you at all that Popper actually said it. I am incredulous because I know how to do inference based on a single observation, as well as inference based on many. And so does just about everyone who posts regularly at this site. It is called Bayesian inference, and is not really all that difficult. Even you could do it, if you were to simply set aside your prejudice. I have already provided references. You can find thousands more by Googling.
1[anonymous]14y
OK, tell me how you know, in advance of having any theory, what to observe? BTW, please don't assume things about me like asserting I hold prejudices. The philosophical position I come from is a full-blown one - it is no mere prejudice. Also, I'm quite willing to change my ideas if they are shown to be wrong.
5Perplexed14y
Ok, I won't assume that you believe, with Popper whom you quote, that inference based on many observations is impossible. I will instead assume that Popper is using the word "inference" very differently than it is used around here. And since you claim to be an ex-Bayesian, I will assume you know how the word is used here. Which makes your behavior up until now pretty inexplicable, but I will make no assumptions about the reasons for that. Likewise, please do not assume that I believe that observation is neither theory-laden nor theory-directed. As it happens, I do not know in advance of a theory what to observe. Of course, the natural thing for me to do now would be to challenge you to explain where theories come from in advance of observation. But why don't we both just grow up? If you have a cite for a careful piece of reasoning which will cause us to drop our Bayesian complacency and re-embrace Popper, please provide it and let us read the text in peace.
0[anonymous]14y
It sounds like Scurfield's "cite for a careful piece of reasoning" is the works of Karl Popper, which you are also familiar with. I don't have time to read the works of Karl Popper, but I have plenty of time to read blog comments about them. I've found every single comment in this thread interesting. Why discourage it?
2khafra14y
I think the problem is a communication gap--"Bayesian" can mean different things to different people; and my best guess is that Scurfield converted from Laplace's degree-of-belief approach to probability. Around here, though, the dominant Bayesian paradigm is Jaynes', which takes the critiques of Bayes from the 1920s through the 1970s into account and digs through them to the epistemological bedrock below pretty well. Unless Scurfield has something new to say about Jaynes' interpretation, his critiques aren't that interesting to people who already know both Popper and Jaynes.
2[anonymous]14y
That can't actually be everyone here. And I hope no one is offended if I say that Scurfield seems to "know Popper" to a greater degree than any of the other participants in this thread. Why the scorn for the guy and the conversation?
6Perplexed14y
He certainly knows Popper better than me. I scorn the conversation because it is not stimulating me - not causing me to consider ideas I have never considered before. I scorn the guy (scorn may be a bit too strong here, but just go with it) because so far he has mostly presented slogans, rather than arguments. (Admittedly, I haven't presented arguments either, but that is because his slogans strike me as either truisms or word games.) The only thing I gained from this encounter was the link to the Critical Rationalism web site, where can be found links to writings by Popper and others. The CR site itself is, ..., well, not great. For example, check out the "What is CR?" page where CR is contrasted with two other possible approaches to philosophy. Please actually check it out before continuing. Now weren't those subtle strawmen? :)
8Perplexed14y
It occurs to me that one thing he could do which would be both interesting and useful would be to go through the sequences, adding comments critiquing Eliezer's epistemology lessons from the viewpoint of Popper and/or CR. Who knows? I might frequently find myself agreeing with him.
4thomblake14y
Indeed, that's why I am in favor of voting on old comments. Ideally, people can continue to leave criticisms on the sequences, and good ones will rise to the top over time.
3thomblake14y
Yes, I asked for clarification of the slogans and got more slogans, and asked for arguments supporting the claims and was given the claims again. I decided at that point to disengage. Indeed - I hadn't bothered to check out the site, but it seems to me that most of the discipline of Philosophy falls outside "CR"'s "three major schools", and they're pretending Popper invented philosophy. It's really quite terrible.
-1[anonymous]14y
If I may use another "slogan": communication is difficult. And another: misunderstandings are common. When you asked for clarification I wasn't sure what you wanted. I guessed, and it looks like I got it wrong. So you just withdraw? That's very un-Popperian. Really? Care to give a quote?
2Perplexed14y
It is a reasonable interpretation of the "three major schools" analysis down near the bottom of the "What is CR" page at the "Critical Rationalism" website. See if you can talk someone into cleaning up that bit of enthusiasm. As they say "It's not helping".
1[anonymous]14y
That's a really high standard.
1Perplexed14y
Hmmm. I never thought of that.
0timtyler14y
If you go as far as: http://groups.yahoo.com/group/CriticalRationalism/ ...you may see some names you recognise.
1Perplexed14y
LOL. That made my day. Be sure to let me know if you run across TH anywhere. Incidentally, have you looked in at sbe recently? Pretty sad.
-1[anonymous]14y
I don't see any people here that know both. Eliezer doesn't appear to either. See here and here.
-1[anonymous]14y
From the problem-situation. Theories arise out of problems.
3Perplexed14y
And where do problems come from in advance of theories and obs... Never mind. Someone else can carry on. I have other things to attend to.
2khafra14y
A better phrasing for that might have been "certain knowledge is a myth." What cannot be logically justified is reasoning from particular observations to certainty in universal truths. You're commenting as if you are unaware of the positions and arguments linked from my previous reply, and perhaps Where Recursive Justification Hits Bottom . You have intelligent things to say, but you're not going to be taken seriously here if you're not aware of the pre-existing intelligent responses to them popular enough to amount to public knowledge.
-2[anonymous]14y
No, that is not equivalent. Popper wrote that "inference based on many observations is a myth". He is saying that we never reason from observations, never mind reasoning to certainty. In order to observe, you need theories. Without those, you cannot know what things you should observe or even make sense of any observation. Observation enables us to test theories, it never enables us to construct theories. Furthermore, Popper throws out the whole idea of justifying theories. We don't need justification at all to progress. Judging from Where Recursive Justification Hits Bottom, this is something Eliezer has not fully taken on board (though I may be wrong). He sees the problem of the tu-quoque, but he still says [e]verything, without exception, needs justification. No, nothing can be justified. Knowledge advances not positively by justifying things but negatively by refuting things. Eliezer does see the importance of criticism, but my impression is that he doesn't know Popper well enough.
4timtyler14y
For Yudkowsky on Popper, start here: "Previously, the most popular philosophy of science was probably Karl Popper's falsificationism - this is the old philosophy that the Bayesian revolution is currently dethroning." * http://yudkowsky.net/rational/bayes ...and keep reading - at least as far as: "On the other hand, Popper's idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes' Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued."
3[anonymous]14y
Yudkowsky gets a lot wrong even in a few sentences: First, Popper's philosophy cannot be accurately described as falsificationism - that is just a component of it, and not the most important component. Popperian philosophy consists of many inter-related ideas and arguments. Yudkowsky makes an error that Popperian newbies make.

One suspects from this that Yudkowsky is making himself out to be more familiar with Popper than he actually is. His claim to be dethroning Popper would then be dishonest, as he does not have detailed knowledge of the rival position. Also he is wrong that Popper is popular: he isn't. Furthermore, Popper is familiar with Bayesian epistemology and actually discusses it in his books. So calling Popper's philosophy old and making out that Bayesian epistemology is new is wrong also.

Popper never said theories can be definitely falsified. He was a thoroughgoing fallibilist and viewed falsifications as fallible conjectures. Also he said that theories can never be confirmed at all, not that they can be partially or probabilistically confirmed, which the above sentence suggests he said. Saying falsification is a special case of the Bayesian rules also doesn't make sense: falsification is anti-induction whereas Bayesian epistemology is pro-induction.
-2[anonymous]14y
Further comments on Yudkowsky's explanation of Bayes: Science revolves around explanation and criticism. Most scientific ideas never get to the point of testing (which is a form of criticism); they are rejected via criticism alone. And they are rejected because they are bad explanations. Why is the emphasis in the quote solely on evidence? If science is a special case of Bayes, shouldn't Bayes have something to say about explanation and criticism? Do you assign probabilities to criticism? That seems silly. Explanations and criticism enable us to understand things and to see why they might be true or false. Trying to reduce things to probabilities is to completely ignore the substance of explanations and criticisms. Instead of trying to get a probability that something is true, you should look for criticisms. You accept as tentatively true anything that is currently unproblematic and reject as tentatively false anything that is currently problematic. It's a boolean decision: problematic or unproblematic.
8whpearson14y
Both bayesian induction (as we currently know it) and Popper fail my test for a complete epistemology. The test is simple. Can I use the description of the formalism to program a real computer to do science? And it should, in theory, be able to bootstrap itself from no knowledge of science to our level.
6timtyler14y
If you were asked to bet on whether it was true or not, then you should assign a probability. Scientists often do something like that when deciding how to allocate their research funds.
0[anonymous]14y
But then we have to develop a quantitative formalism for both beliefs and utilities. Is it really necessary to attack both problems at once?
3[anonymous]14y
Human beings don't actually seem to have utility functions; all they really have are "preferences", i.e. a method for choosing between alternatives. But von Neumann and Morgenstern showed that under some conditions this is the same as having a utility function. Now Scurfield is saying that human beings, even smart ones like scientists, don't have prior probability distributions; all they really have is a database of claims and criticisms of those claims. Is there any result analogous to von Neumann-Morgenstern that says this is the same thing as having a prior, under some conditions?
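For reference, the von Neumann-Morgenstern result being invoked can be stated roughly as follows (standard textbook form, not from the thread):

```latex
% Von Neumann--Morgenstern representation theorem (rough statement):
% if a preference relation \succsim over lotteries satisfies completeness,
% transitivity, continuity, and independence, then there is a function u with
\[
  L \succsim M
  \quad\Longleftrightarrow\quad
  \sum_i p_i\,u(x_i) \;\ge\; \sum_j q_j\,u(y_j),
\]
% where lottery L gives outcome x_i with probability p_i, M gives y_j with
% probability q_j, and u is unique up to positive affine transformations.
```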
5Perplexed14y
Yes. The question has been addressed repeatedly by a variety of people. John Maynard Keynes may have been the first. Notable formulations since his include de Finetti, Savage, and Jeffrey's online book. Discovering subjective probabilities is usually done in conjunction with discovering utilities by revealed preferences because much of the machinery (choices between alternatives, lotteries) is shared between the two problems. People like Jaynes who want a pure epistemology uncontaminated by crass utility considerations have to demand that their "test subjects" adhere to some fairly hard-to-justify consistency rules. But people like de Finetti don't impose arbitrary consistency, instead they prove that inconsistent probability assignments lose money to clever gamblers who construct "Dutch books".
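A minimal sketch of the Dutch-book point (the numbers, and the assumption that the agent accepts bets priced at its stated probabilities, are mine):

```python
def bookie_profit(p_A, p_not_A, stake=1.0):
    """Guaranteed profit to a bookie who sells the agent a bet on A and a bet on
    not-A, each paying `stake` and priced at the agent's stated probability."""
    collected = (p_A + p_not_A) * stake  # the agent pays "fair" prices for both tickets
    paid_out = stake                     # exactly one ticket pays off, whatever happens
    return collected - paid_out

print(bookie_profit(0.6, 0.6))  # ~0.2: sure profit against incoherent beliefs
print(bookie_profit(0.4, 0.6))  # 0.0: coherent beliefs leave no sure profit
```

Coherent probabilities - ones that sum to 1 over exhaustive, mutually exclusive outcomes - are exactly the assignments that leave no such guaranteed profit.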
0Cyan14y
I'd be interested in reading more about your views on this (unless you're referring to Halpern's papers on Cox's theorem).
1Perplexed14y
I'm not even familiar with Halpern's work. The only serious criticism I have seen regarding the usual consistency rules for subjective probabilities dealt with the "sure thing rule". I didn't find it particularly convincing. No, I have no trouble justifying a mathematical argument in favor of this kind of consistency. But not everyone else is all that convinced by mathematics. Their attention can be grabbed, however, by the danger of being taken to the cleaners by professional bookies constructing Dutch books. One of these days, I will get around to producing a posting on probability, developing it from what I call the "surprisal" of a proposition - the amount, on a scale from zero to positive infinity, by which you would be surprised upon learning that a proposition is true.

  • Prob(X) = 2^(-Surp(X))
  • Surp(coin flip yields heads) = 1 bit
  • Surp(A) + Surp(B|A) = Surp(A&B)

That last formula strikes me as particularly easy to justify (surprisals are additive). Given that and the first formula, you can easily derive Bayes' law. The middle formula simply fixes the scale for surprisals. I suppose we also need a rule that Surp(True) = 0.
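Filling in the derivation the comment says is easy, using only the two formulas listed above:

```latex
% From Prob(X) = 2^{-Surp(X)} and Surp(A) + Surp(B|A) = Surp(A & B):
\[
  P(A \wedge B) \;=\; 2^{-\mathrm{Surp}(A \wedge B)}
                \;=\; 2^{-\mathrm{Surp}(A)} \cdot 2^{-\mathrm{Surp}(B \mid A)}
                \;=\; P(A)\,P(B \mid A).
\]
% By symmetry P(A & B) = P(B) P(A|B); equating the two gives Bayes' law:
\[
  P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B)}.
\]
```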
0Sniffnoy14y
Actually "Surprisal" is a pretty standard term, I think.
0[anonymous]14y
Yudkowsky suggests calling it "absurdity" here
1Perplexed14y
Cool! Saves me the trouble of writing that posting. :) Absurdity is probably a better name for the concept. Except that it sounds objective, whereas amount of surprise more obviously depends on who is being surprised.
0[anonymous]14y
Wild. Is there an exposition of subjective expected utility better than wikipedia's?
1Perplexed14y
Jeffrey's book, which I already linked, or any good text on Game theory. Myerson, for example, or Luce and Raiffa.
1timtyler14y
Agents can reasonably be expected to quantify both beliefs and utilities. How the ability to do that is developed - is up to the developer.
0[anonymous]14y
People are agents, and they are very bad at quantifying their beliefs and utilities.
5Perplexed14y
I think that the contribution that Bayesian methodology makes toward good criticism of a scientific hypothesis is that to "do the math", you need to be able to compute P(E|H). If H is a bad explanation, you will notice this when you try to determine (before you see E) how you would go about computing P(E|H). Alternately, you discover it when you try to imagine some E such that P(E|H) is different from P(E|not H). No, you don't assign probabilities to criticisms, as such. But I do think that every atomic criticism of a hypothesis H contains at its heart a conditional proposition of the form (E|H) or else a likelihood odds ratio P(E|H)/P(E|not H) together with a challenge, "So how would you go about calculating that?" Incidentally, you also ought to look at some of the earlier postings where EY was, in effect, using naive Bayes classifiers to classify (i.e. create ontologies), rather than using Bayes's theorem to evaluate hypotheses that predict. Also take a look at Pearl's book to get a modern Bayesian view of what explanation is all about.
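A minimal sketch of using such a likelihood ratio (odds-form Bayes; the numbers are illustrative, not taken from any example in the thread):

```python
def update_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Posterior odds for H after seeing E: prior odds times the likelihood ratio."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

def odds_to_prob(odds):
    return odds / (1.0 + odds)

prior_odds = 0.05 / 0.95                      # start from P(H) = 0.05
posterior = update_odds(prior_odds, p_e_given_h=0.8, p_e_given_not_h=0.1)
print(round(odds_to_prob(posterior), 3))      # 0.296: strong evidence, still below 0.5
```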
2[anonymous]14y
I like this point a lot. But it seems very convenient and sensible to say that some things are more problematic than others. And at least for certain kinds of claims it's possible to quantify how problematic they are with numbers. This leads one (me at least) to want a formalism -- for handling beliefs -- that involves numbers, and Bayesianism is a good one. What's the conjectures-and-refutations way of handling claims like "it's going to snow in February"? Do you think it's meaningless or useless to attach a probability to that claim?
1[anonymous]14y
There is no problem with theories that make probabilistic predictions. But getting a probabilistic prediction is not tantamount to assigning a probability to the theory that made the prediction.
2Perplexed14y
True. But you seem to be assuming that a "theory" has to be a universal law of nature. You are too attached to physics. But in other sciences, you can have a theory which is quite explanatory, but is not in any sense a "law", but rather it is an event. Examples:

  • the theory that the moon was formed by a collision between the earth and a Mars-sized planetesimal
  • the theory that modern man originated in Africa within the past 200,000 years and that the Homo erectus population outside of Africa did not contribute to our ancestry
  • the theory that Napoleon was poisoned with arsenic in St. Helena
  • the "aquatic ape theory"
  • the endosymbiotic theory of the origin of mitochondria
  • the theory that the Chinese discovered America in 1421

Probabilities can be assigned to these theories. And even for universal theories, you can talk about the relative odds of competing theories being correct - say between a supersymmetric GUT based on E6 and one based on E8. (Notice, I said "talk about the odds", not "calculate them".) And you can definitely calculate how much one particular experimental result shifts those odds.
3[anonymous]14y
As you pointed out earlier, we have two ostensibly different ways of investigating the theory that the Chinese discovered America in 1421: the Popperian way, in which this theory and alternatives to it are criticized. And the Bayesian way, in which those criticisms are broken down into atomic criticisms, and likelihood ratios are attached and multiplied. I've seen plenty of rigorous Popperian discussions but not very many very rigorous -- or even modestly rigorous -- Bayesian discussions, even on this website. One piece of evidence for the China-discovered-America theory is some business about old Chinese maps. How does a Bayesian go about estimating the likelihood ratio P(China discovered America | old maps) / P(China discovered America | no old maps)?
2Perplexed14y
I think you want to ask about P(maps|discover) / P(no maps|discover). Unless both wikipedia and my intuition are wrong. Does catching you in this error relieve me of the responsibility of answering the question? I hope so. Because I would want to instead argue using something like P(maps|discover) vs P(maps|not discover). That doesn't take you all the way to P(discover), but it does at least give you a way to assess the evidential weight of the map evidence.
0[anonymous]14y
Now P(Sewing-Machine is a phony) = ? Here's another personal example of Bayesianism in action. Do you have a sense of how much you updated by? P(Richard Dawkins praises Steven Pinker | EP is bunk)/ P(Richard Dawkins praises Steven Pinker | EP is not bunk) is .5? .999? Any idea?
1Perplexed14y
P("Sewing Machine" is a nym) = 1.0 P(Sewing Machine has been disingenuous) = 0.5 and rising P(Dawkins praises Pinker|EP is not bunk) is ill defined because P(EP is not bunk) = ~0 but I have updated P(Dawkins believes EP is not bunk) to at least 0.5
0[anonymous]14y
I don't know what "disingenuous" means.
-5[anonymous]14y
-5curi13y
1timtyler14y
More from Yudkowsky on the philosophy of science: http://lesswrong.com/lw/ig/i_defy_the_data/
1timtyler14y
The chance of a criticism being correct can unproblematically be assigned a probability.
-3[anonymous]14y
A criticism can have many components, some of which are correct and some of which are incorrect. Breaking a criticism down into its components can be difficult/problematic. Edit: The way I put that sounds stupid. Let me try again: occasionally, a pair of math papers are released, one purports to prove a conjecture, and one purports to disprove it. The authors then criticize each others papers (let's say). Would you really characterize the task of assigning probabilities in this situation as "unproblematic"?
1timtyler14y
The point is that - if you were asked to bet on the criticism being correct - you would come up with some odds ratio.
2Perplexed14y
Maybe you would do that. I would instead bog down in a discussion of whether the criticism was a nitpick or a "real" criticism. But I would be interested to see what odds ratio you come up with for this criticism being correct.
1timtyler14y
Heh - is that your criticism? - or did you get it from Douglas Hofstadter? ;-)
-1[anonymous]14y
And in the math papers example, how exactly are you going to do that? Presumably you are going to go through the papers and the criticisms in detail and evaluate the content. And when you do that you are going to think of reasons why one is right and the other wrong. And then probabilities become irrelevant. It's your understanding of the content that will enable you to choose.
-1timtyler14y
Right - but you don't "choose" - you assign probabilities. Rejecting something completely would be bad - because of: http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/
-1[anonymous]14y
I don't think anyone is falling into this trap. It sounds like the Popperian version is replacing "true" and "false" by "tentatively true" and "tentatively false."
1timtyler14y
"Tentatively true" and "tentatively false" sound a lot like probabilities which are not expressed in a format which is compatible with Bayes rule. It is hard to see how that adds anything - but rather easy to see how it subtracts the ability to quantitatively analyse problems.
0[anonymous]14y
That's what I said. Edit: That refers to the first sentence only.
-1[anonymous]14y
Theories are either true or false. The word "tentative" is there as an expression of fallibility. We cannot know if a theory is in fact true: it may contain problems that we do not yet know about. All knowledge is tentative. The word is not intended as a synonym for probability or to convey anything about probabilities.
-1timtyler14y
Observers can put probabilities on the truth of theories. They can do it - and will do it - if you ask them to set odds and prepare to receive bets. Quantifying uncertainty allows it to be measured and processed. It is true that knowledge is fallible - but some knowledge is more fallible than others - and if you can't measure degrees of uncertainty, you will never develop a quantitative treatment of the subject. Philosophers of science realised this long ago - and developed a useful framework for quantifying uncertainty.
1Perplexed14y
Scurfield missed his chance here. He should have asked when it becomes the case that those bets must be paid off, and offered the services of a Popper adept to make that kind of decision. Of course, the Popperite doesn't rule that one theory is true, he rules that the other theory is refuted.
0timtyler14y
Short time limits don't mean that agents can't meaningfully assign probabilities to the truth of scientific theories - they just decrease the chances of the theories being proven wrong within the time limit a bit.
1Perplexed14y
What is a time limit? Do actual bets on this sort of thing in Britain stipulate a time limit? As a Yank, I have no idea how betting 'markets' like this actually work.
0gwern14y
Prediction markets/betting markets like Intrade or Betfair pretty universally set time limits on their bets. (Browse through Intrade sometime.) This does sometimes require changing the bet/prediction though - from 'the Higgs boson will be found' to 'the Higgs boson will be found by 2020'. Not that this is a bad thing, mind you.
0[anonymous]14y
Do you have an answer to that point-that-should-have-been?
0Perplexed14y
Not really. To the extent that we limit attention to theories of the form of a universal generalization - over all times, all places, and all values of x - we Bayesians can never "cash in" on a bet that the theory is true - at least not using empirical evidence. All we can do is to continue trying to falsify the theory by experiments at more times, at more places, and for more values of x. As Popper prescribes. Our probabilities that the theory is true grow higher and higher, but they grow more and more slowly, and they can never reach unity. However, both Bayesians and Popper fans can become pretty certain that such a theory is false - even without checking everywhere, everywhen, and for all x. Popper does not have a monopoly on refutations. Or conjectures either, for that matter.
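One hedged way to see the "grows but never reaches unity" claim (the independence and certain-prediction assumptions here are mine, for illustration): if the universal theory H predicts each of n observations E_1, ..., E_n with certainty, while the catch-all alternative assigns observation E_i probability q_i < 1, then

```latex
\[
  P(H \mid E_1,\dots,E_n)
  \;=\;
  \frac{P(H)}{P(H) + \bigl(1 - P(H)\bigr)\prod_{i=1}^{n} q_i},
\]
```

which rises with every confirming observation but stays strictly below 1 for any finite n whenever each q_i > 0, while a single observation the theory forbids drops it to 0.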
-7[anonymous]14y
0[anonymous]14y
I still recommend Subjectively Objective, but I'm no longer confident that your inferential distance from the coverage there is small enough. Perplexed's recommendation to read all the way through the sequences, or--even better--ET Jaynes' Probability Theory: The Logic of Science--may be necessary. As he's said, Critical Rationalism was an important step in the philosophy of science--but the field has moved beyond that to a rigorous, mathematically precise model of the amount of belief any rational agent must hold given identical priors and the same evidence--Popper's Vs(a)=CT(a)-CTf(a) is not quantitative in this way. That wasn't intended to convince you; if you truly wish to subject your conjecture to criticism a contemplative reading of Jaynes is necessary. If you do happen to find Jaynes convincing, all is not lost--we still like Tarski here.
1timtyler14y
Popper obviously hadn't read Wikipedia: http://en.wikipedia.org/wiki/Inductive_reasoning
3Larks14y
In what sense do you mean this exactly, and what evidence for it do you have? I've spoken to people like Elliot, but all they said was things like 'humans can function as a Turing Machine by laboriously manipulating symbols'. Which is nice, but not really relevant to anything in real-time. On a more general note, you should probably try to be a little clearer: 'conjectures and refutations' doesn't really pick out any particular strategy from strategy-space, and neither does the phrase 'explanation' pick out anything in particular. Additionally, 'induction' is sufficiently different from what people normally think of as myths that it could do with some elaboration. Similarly, some of these issues we do take seriously; we know we're fallible, and it sounds like you don't know what we mean by probability. Finally, welcome to Less Wrong! Edit: People, don't downvote the parent; there's no reason to scare the newbies.
3wedrifid14y
Where 'real-time' can be taken literally to refer to time that is expected to exist in physics models of the universe.
-3[anonymous]14y
Another way of saying it is that human beings can solve any problem that can be solved. Does that help? Careful here - as I mentioned above, evidence never supports a theory, it just provides a ready stock of criticisms of rival theories. Let me give you an argument: If you hold that human beings are not universal knowledge creators, then you are saying that human knowledge creation processes are limited in some way, that there is some knowledge we cannot create. You are saying that humans can create a whole bunch of knowledge but whole realms of other knowledge are off limits to us. How does that work? Knowledge enables us to expand our abilities and that in turn enables us to create new knowledge and so on. Whatever this knowledge we can't create is, it would have to be walled off from all this other expanding knowledge in a rather special way. How do you build a knowledge creation machine that only has the capability to create some knowledge? That would seem much much more difficult than creating a fully universal machine. I don't know what point Elliot was answering here, but I guess he is saying that humans are universal Turing Machines and illustrating that. He is saying that humans are universal in the sense that they can compute anything that can be computed. That is a different notion of universality to the one under discussion here (though there is a connection between the two types of universality). Elliot agrees that humans are universal knowledge creators and has written a lot about it (see, for example, his posts on The Fabric of Reality list). 'Conjectures and refutations' is an evolutionary process. The general methodology (or strategy, if you prefer) is: When faced with a problem try to come up with conjectural explanations to solve the problem and then criticise them until you find one (and only one) that cannot be knocked down by any known criticism. Take that as your tentative solution. I guess what you are looking for is an explanation of how
2Larks14y
What about the problem of building pyramids on Alpha Centauri by 2012? We can't, but aliens living there could. More pressingly though, I don't see why this is important. Have we been basing our arguments on an assumption that there are problems we can't solve? Is there any evidence we can solve all problems without access to arbitrarily large amounts of computational power? Something like AIXI can solve pretty much anything, but not relevantly. How about a neural network that can't learn XOR? The manner in which explanations are knocked down seems under-specified, if you're not doing Bayesian updating. Nope, I just don't know what in particular you mean by 'explanation'. I know what the word means in general, but not your specific conception. Well, that's different from there being no such thing as a probability that a theory is true: your initial assertion implied that the concept wasn't well defined, whereas now you just mean it's irrelevant. Either way, you should probably produce some actual arguments against Jaynes's conception of probability. Meta: You want to reply directly to a post, not its descendants, or the other person won't get a notification. I only saw your post via the Recent Posts list. Also, it's no good telling people that they can't use evidence to support their position because it contradicts your theory when the other people haven't been convinced of your theory.
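On the XOR aside: a minimal sketch (mine), assuming "a neural network that can't learn XOR" means the classic single-layer perceptron. No linear decision boundary classifies all four XOR cases, so training plateaus at three out of four:

```python
# Single-layer perceptron on XOR (illustrative): the perceptron learning rule
# never reaches 4/4 accuracy because XOR is not linearly separable.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]                       # XOR targets

w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

best = 0
for _ in range(1000):                  # many passes of the perceptron learning rule
    for x, y in zip(X, Y):
        err = y - predict(x)           # -1, 0, or +1
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err
    best = max(best, sum(predict(x) == y for x, y in zip(X, Y)))

print(best)  # at most 3: no linear boundary gets all four XOR cases right
```

Adding a hidden layer removes the limitation, so the example shows that a particular learner can be limited by its architecture.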
1[anonymous]14y
Criticism enables us to see flaws in explanations. What is under-specified about finding a flaw? In your way, you need to come up with criticisms and also with probabilities associated with those criticisms. Criticisms of real-world theories can be involved and complex. Isn't it enough to expose a flaw in an explanatory theory? Must one also go to the trouble of calculating probabilities - a task that is surely fraught with difficulty for any realistic idea of criticism? You're adding a huge amount of auxiliary theory, and your evaluation is then also dependent on the truth of all this auxiliary theory.

My conception is the same as the general one.
2Larks14y
You don't seem to be actually saying very much, then; is LW really short on explanations, in the conventional sense? Explanation seems well evidenced by the last couple of top-level posts. Similarly, do we really fail to criticise one another? A large number of the comments seem to be criticisms.

If you're essentially criticising us for not having learnt rationality 101 - the sort of rationality you learn as a child of 12, arguing against God - then obviously it would be a problem if we didn't bear that stuff in mind. But without providing evidence that we succumb to these faults, it's hard to see what the problem is.

Your other points, however, are substantive. If humans could solve any problem, or it was impossible to design an agent which could learn some but not all things, or confirmation didn't increase subjective plausibility, these would be important claims.
1timtyler14y
Of course evidence makes theories more probable. Imagine you have two large opaque bags full of beans: one is 50% black beans and 50% white beans, and the other is full of white beans. The bags are well shaken, and you are given one bag at random. You take out 20 beans - and they are all white. That is clearly evidence that confirms the hypothesis that you have the bag full of white beans. If you had the "mixed" bag, that would only happen about one time in a million.
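A minimal Python sketch of the arithmetic behind this example (the function names and the draw-with-replacement simplification are mine, not timtyler's):

```python
# Two hypotheses, equally likely a priori: "mixed" (50% white) and "all white".
from fractions import Fraction

prior = {"mixed": Fraction(1, 2), "all_white": Fraction(1, 2)}
likelihood_white = {"mixed": Fraction(1, 2), "all_white": Fraction(1, 1)}

def update_on_white_draws(prior, n):
    """Posterior over the two bags after drawing n white beans (treating the
    bags as large enough that depletion is negligible)."""
    unnormalized = {h: p * likelihood_white[h] ** n for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

print(likelihood_white["mixed"] ** 20)   # 1/1048576 -- the "one time in a million"
print(update_on_white_draws(prior, 20))  # all_white gets 1048576/1048577 of the probability
```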
1[anonymous]14y
Notice that the counterfactual event is possible (that you have the mixed bag). And even if you hold the bag full of white beans, the counterfactual event that you hold the mixed bag does occur elsewhere in the multiverse. This is what distinguishes events from theories. A false theory never obtains anywhere: it is simply false. So a theory being true or false is not at all like the situation with counterfactual events. You cannot assign anything objective to a false theory.

The actual theory you hold in your example is approximately the following: I have made a random selection from a bag, and I know that I have been given one of two bags, one 50% black beans and 50% white beans and the other full of white beans; and: I have been honestly informed about the setup, am not being tricked, no mistakes have been made, etc. This theory predicts that the chance of taking 20 white beans out of the bag would be about one in a million if I had the mixed bag.

Do you understand? The real situation is that you have a theory that is making probabilistic predictions about events and, as I have said several times, I have no problem with theories making probabilistic predictions about events.
4timtyler14y
As briefly as possible:

Firstly, this seems like a step forwards to me. You seem to agree that induction and confirmation are fine 90% of the time. You seem to agree that these ideas work in practice - and are useful - including in some realms of knowledge, such as knowledge relating to which bag is in front of you in the above example. This puts your anti-induction and anti-confirmation statements into a rather different light, IMO.

Confirmation theory has nothing to do with multiverses. It applies equally well for agents in single deterministic universes - such as can be modelled by cellular automata. So reasoning that depends on the details of multiverse theories is broken from the outset. Imagine evidence for wavefunction collapse was found. Not terribly likely - but it could happen - and you don't want your whole theory of epistemology to break if it does!

Treating uncertainty about theories and uncertainty about events differently is a philosophical mistake. There is absolutely no reason to do it - and it gets people into all kinds of muddles. We have a beautiful theory of subjective uncertainty that applies equally well to uncertainty about any belief - whether it refers to events or to scientific theories. You can't really tease these categories apart anyway, since many events are contingent upon the truth of scientific theories - e.g. Higgs boson observations. Events are how physical law is known to us. Instead of using one theory for hypotheses about events and another for hypotheses about universal laws, you should - according to Occam's razor - be treating them in the same way, and be using the same underlying general theory that covers all uncertain knowledge - namely the laws of subjective probability.

"Bayesian Confirmation Theory": http://plato.stanford.edu/entries/epistemology-bayesian/#BayTheBayConThe
1[anonymous]14y
Tim - in the example we have been discussing, no confirmation of the actual theory (the one I gave in approximate outline) happens. The actual theory makes probabilistic predictions about events (it also makes non-probabilistic predictions) and tells you how to bet. Getting 20 white beans doesn't make the actual theory any more probable - the probability was a prediction of the theory.

Note also that a theory that you are being tricked might recommend that you choose the mixed bag when you get 20 white beans. Lots of theories are consistent with the evidence. What you need to look for is things to refute the possible theories. If you are concerned with confirmation, then the con man wins.

So I am not agreeing that induction and confirmation are fine any percentage of the time (how did you get that 90% figure?). When you consider the actual possible theories of the example, all that is happening is that you have explanatory theories that make predictions, some probabilistic, and that tell you how to bet. The theories are not being induced from evidence and no confirmation takes place.

You haven't explained how we assign objective probabilities to theories that are false in all worlds.
7Pavitra14y
We don't assign objective probabilities, full stop.
3khafra14y
What you're talking about here is a strategy for avoiding bias which Bayesians also use. It is not a fundamental feature of any particular epistemology.
1timtyler14y
I think you are too lost for me :-(

You don't seem to address the idea that multiverse theories are an irrelevance - and that in a single deterministic automaton, things work just the same way. Indeed, scientists don't even know which (if any) laws of physics are true everywhere, and which depend on the world you are in. You don't seem to address the idea that we have a nice general theory that covers all kinds of uncertainty, and that no extra theory to deal with uncertainty about scientific hypotheses is needed.

If you don't class hypotheses about events as being "theories", then I think you need to look at: http://en.wikipedia.org/wiki/Scientific_theory

Also, your challenge doesn't seem to make much sense. The things people assign probabilities to are things they are uncertain about. If you tell me a theory is wrong, it gets assigned a low probability. The interesting cases are ones where we don't yet know the answer - like the clay theory of the origin of life, the orbital-inclination theory of glacial cycles, and so on.

Distinguishing between scientific theories and events in the way that you do apparently makes little sense. Events depend on scientific theories. Scientific theories predict events. Every test of a scientific theory is an event. Observing the perihelion precession of Mercury was an event. The observation of the deflection of light by the Sun during an eclipse was an event. If you have probabilities about events which are tests of scientific theories, then you automatically wind up with probabilities about the theories that depend on their outcome.

Basically, agents have probabilities about all their beliefs. That is Bayes 101. If an agent claims not to have a probability about some belief, you can usually set up a bet which reveals what they actually think about the subject. Matters of fundamental physics are not different from "what type of beans are in a bag" in that respect.
2[anonymous]14y
Yes, scientific theories predict events. So there is a distinction between events and theories, right? If the event is observed to occur, all that happens is that rival theories that do not predict the event are refuted. The theory that predicted the event is not made truer (it already is either true or false). And there are always an infinite number of other theories that predict the same event. So observing the event doesn't allow you to distinguish among those theories.

In the bean-bag example you seem to think that the rival theories are "the bag I am holding is mixed" and "the bag I am holding is all white". But what you actually have is a single theory that makes predictions about these two possible events. That theory says you have a one-in-a-million chance of holding the mixed bag.

No, General Relativity being true or false is not like holding a bag of white beans or holding a bag of mixed beans. The latter are events that can and do obtain: they happen. But GR is not true in some universes and false in others. It is either true or false. Everywhere. Furthermore, we accept GR not because it is judged most likely but because it is the best explanation we have.

Popperians claim that we don't need any theory of uncertainty to explain how knowledge grows: uncertainty is irrelevant. That is an interesting claim, don't you think? And if you care about the future of humanity, it is a claim that you should take seriously and try to understand. If you are still confused about my position, why don't you try posting some questions on one of the following lists:

http://groups.yahoo.com/group/Fabric-of-Reality/
http://groups.yahoo.com/group/criticalrationalism/

It might be useful for other Popperians to explain the position - perhaps I am being unclear in some way.

Edit: Just because people might be willing to place bets is no argument that the epistemological point I am making is wrong. What makes those people infallible authorities on epistemology? Also, if I acc
1Pavitra14y
That's a really powerful general argument against Bayesianism that I hadn't considered before: any prior (edit: I should have said "prior information") necessarily constitutes a hypothesis in which you have confidence 1.
3Sniffnoy14y
I don't think that statement makes sense; you seem to be mixing levels - the prior is a distribution over how the world could actually be, not over other distributions. It shouldn't make sense to speak of your prior's confidence in itself.
1[anonymous]14y
You have an explanatory theory that makes predictions about the events, but it is not the only possible explanatory theory. If someone offers to play the bean bag game with you on the street, then things might not be as they seem and your theory would be no good as an explanation of how to bet. Science is like that - what is actually going on might not be what you think, so you look for flaws and realize that one's confidence is no guide to the truth.
1khafra14y
If your confidence in your prior were 1, you would never be able to update it. But it is true that if your prior distribution of probabilities over various hypotheses assigns probability 0 or 1 to a group of hypotheses, you will never be able to accrue enough evidence to change that. This is not a weakness of Bayesianism, because there is no other method of reasoning which will allow you to end up at a conclusion that you at no point considered as a possibility.
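A small numerical illustration of this point (the toy likelihoods are my own choice, purely for illustration): under Bayes' rule, an open prior moves with the evidence, while a prior of 0 or 1 cannot move at all.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Bayes' rule for a single hypothesis H given evidence E.
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

print(posterior(0.5, 0.9, 0.1))  # 0.9 -- an open mind moves with the evidence
print(posterior(0.0, 0.9, 0.1))  # 0.0 -- a zero prior stays at zero
print(posterior(1.0, 0.9, 0.1))  # 1.0 -- certainty stays certain
```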
1Pavitra14y
Did you read the quoted text? Inability to update is the whole point of my concern; but it in no way implies that my confidence in a particular outcome will never change. Perhaps you're confusing probabilities for priors. (edit: I was misusing my terms: I meant "prior probabilities" and "prior information" respectively.)
2Perplexed14y
I think that the problem is that EY has introduced non-standard terminology here. Worse, he blames it on Jaynes, who makes no such mistake. I just looked it up. There are two concepts here which must not be confused:

* a priori information, aka prior information, aka background information
* prior probabilities, aka priors (by everyone except EY; Jaynes dislikes this but acquiesces)

Prior information does indeed constitute a hypothesis in which you have complete confidence. I agree this is something of a weakness - a weakness which is recognized implicitly in such folklore as "Cromwell's rule". Prior information cannot be updated.

Prior probabilities (frequently known simply as priors) can be updated. In a sense, being updated is their whole purpose in life.
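A compact way to state the distinction drawn above (the notation is mine: H is a hypothesis, D the data, I the background information):

$$ P(H \mid D, I) \;=\; \frac{P(D \mid H, I)\, P(H \mid I)}{P(D \mid I)} $$

The background information I sits to the right of the conditioning bar in every term and is never itself updated; the prior probability P(H | I) is exactly the quantity that Bayes' theorem replaces with the posterior P(H | D, I).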
0Pavitra14y
This is exactly what's going on. Thank you. I apologize for my confused terminology.
2Perplexed14y
You are welcome. Unfortunately, I was wrong. Or at least incomplete. I misinterpreted what EY was saying in the posting you cited. He was not, as I mistakenly assumed, saying that prior probabilities should not be called priors. He was instead talking about a third kind of entity which should not be confused with either of the other two:

* prior distributions over hypotheses, which Eliezer wishes to call simply "priors"

But there is no confusion in referring to both prior probabilities and prior distributions as simply "priors", because a prior probability is simply a special case of a prior distribution: a probability is a distribution over a set of two competing hypotheses, only one of which can be true.

Bayes' theorem in its usual form applies only to simple prior probabilities. It tells you how to update the probability. In order to update a prior distribution, you effectively need to use Bayes' theorem multiple times - once for each hypothesis in your set of hypotheses.

So what is that 1/2 number which Eliezer says is definitely not a prior? It is none of the above three things. It is something harder to describe - a statistic over a distribution. I am not even going to try to explain what that means.

Sorry for any confusion I may have created. And thx to Sniffnoy and timtyler for calling my attention to my mistake.
0Pavitra14y
I'm not convinced that there's a meaningful difference between prior distributions and prior probabilities. Going back to the beans problem, we have this:

This can easily be "flattened" into a single, more complex, probability distribution:

If we wish to consider multiple draws, we can again flatten the total event into a single distribution:

Translating the "what is that number" question into this situation, we can ask: what do we mean when we say that we are 5/8 sure that we will draw two white beans? I would say that it is a confidence; the "event" that has 5/8 probability is a partial event, a lossy description of the total event.
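The distributions quoted in this comment appear to have been lost. Here is a guess at what they plausibly showed, reconstructed from the surrounding bean-bag setup and the 5/8 figure (a sketch, not the original content):

```python
from fractions import Fraction
half = Fraction(1, 2)

# Distribution over hypotheses: which bag am I holding?
over_bags = {"mixed": half, "all_white": half}

# "Flattened" into a single distribution over the next draw:
p_white = over_bags["mixed"] * half + over_bags["all_white"] * 1
next_draw = {"white": p_white, "black": 1 - p_white}   # {white: 3/4, black: 1/4}

# Flattened again for two draws. The draws are not independent once flattened,
# so we marginalize over the bag rather than squaring 3/4:
p_ww = over_bags["mixed"] * half**2 + over_bags["all_white"] * 1
two_draws = {"WW": p_ww, "WB": Fraction(1, 8), "BW": Fraction(1, 8), "BB": Fraction(1, 8)}
print(two_draws["WW"])   # 5/8 -- the figure referenced in the comment
```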
3Perplexed14y
There isn't when you have only two competing hypotheses. Add a third hypothesis and you really do have to work with distributions. Chapter 4 of Jaynes explains this wonderfully. It is a long chapter, but fully worth the effort.

But the issue is also nicely captured by your own analysis. As you show, any possible linear combination of the two hypotheses can be characterized by a single parameter, which is itself the probability that the next bean will be white. But when you have three hypotheses, you have two degrees of freedom. A single probability number no longer captures all there is to be said about what you know.
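A sketch of that degrees-of-freedom point (the three-bag setup is my own illustration, not Perplexed's): two states of knowledge over three bags can give the same single-number prediction for the next bean and yet behave completely differently once evidence arrives.

```python
from fractions import Fraction

# Probability that a bean drawn from each bag is white.
P_WHITE = {"all_white": Fraction(1), "mixed": Fraction(1, 2), "all_black": Fraction(0)}

def predict_white(dist):
    # Predictive probability of a white bean, marginalizing over the bags.
    return sum(p * P_WHITE[bag] for bag, p in dist.items())

def update_on_white(dist):
    # Bayes' theorem applied once per hypothesis, then renormalized.
    unnorm = {bag: p * P_WHITE[bag] for bag, p in dist.items()}
    total = sum(unnorm.values())
    return {bag: p / total for bag, p in unnorm.items()}

a = {"all_white": Fraction(1, 2), "mixed": Fraction(0), "all_black": Fraction(1, 2)}
b = {"all_white": Fraction(0), "mixed": Fraction(1), "all_black": Fraction(0)}

print(predict_white(a), predict_white(b))    # 1/2 and 1/2 -- identical predictions
print(predict_white(update_on_white(a)))     # 1   -- one white bean settles it under a
print(predict_white(update_on_white(b)))     # 1/2 -- still anyone's guess under b
```

The single number summarizes the next prediction but not how that prediction will respond to new beans; that extra structure is what the full distribution carries.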
0Pavitra14y
In retrospect, it's obvious that "probability" should refer to a real scalar on the interval [0,1].
0timtyler14y
Everyone calls prior probabilities "priors" - including: http://yudkowsky.net/rational/bayes
0timtyler14y
Uh, what? No it doesn't. If your confidence in your priors were that high, they would never shift.
1timtyler14y
Popper's views are out of date. I am somewhat curious about why anyone with access to the relevant information would fail to update their views - but that phenomenon is not that interesting. People fail to update all the time for a bunch of sociological reasons.

Check with the terms of the bet. Or consider bets on when a bridge will fail. It might never fail - and if so, good for the bridge. However, if traders think it has a 50% chance of surviving to the end of the year, that tells you something. The market value of the bet gives us useful information about the expected lifespan of the bridge. It is just the same with scientific theories.
0wnoise14y
I claim that the distinction you make between events and theories is not nearly so clear-cut as you seem to think. You have already made the point that distinguishing between two or more apparent theories can readily be replaced by a parameterized theory. You restrict yourself to the case where the parameterization is due to an "event". I think most such cases can be tortured into such a view, particularly with your multiverse model.

One of the earliest uses of probability theory was Laplace's use in estimating orbital parameters for Jupiter and Saturn. If you take these parameters as themselves the theory, you would view it as illegitimate. If they are more akin to events, this seems fine. But your conception of events as "realizable" differently in the multiverse (i.e. all probabilities should be seen as indexical uncertainty) seems to be greatly underspecified. Given your example of GR as a theory rather than an event, why don't you want to accept a multiverse model where GR really could hold in some universes, but not others?

And of course, there's a foundational issue: whatever multiverse model you take for events is itself a theory.
1[anonymous]14y
By multiverse I mean the everyday Everett/Deutsch one. I agree that the argument is a meta-theory about events and theories and that that meta-theory, like any theory, could have flaws.
0[anonymous]13y
Elliot has informed me that he doesn't think he said "humans can function as a Turing Machine by laboriously manipulating symbols", except possibly in reply to a very specific question like "Give a short proof that humans have computational universality".

Why do you say "people like Elliot"? Elliot has his own views on things and shouldn't be conflated with people who you think are like him. It seems to me you don't understand his ideas, so you wouldn't know what the people who are like him are like.
3thomblake14y
For interesting definitions of 'can', perhaps. I know some humans who can't create much of anything.

I'm not sure that counts as a 'way of creating knowledge'. 'Conjectures' sounds to me like a black box which would itself contain the relevant bit.

I'd want to know what you mean by 'myth'. It's worked so far, though that only counts as evidence for those of us blinded by the veil of Maya.

Probability is in the mind. Theories are either true or false, and there is such a thing as the probability that a theory is true. I'm not sure what you mean by that. This shows the remarks about 'probability' above to be merely a definitional dispute. Probability describes uncertainty, and you admit that we have uncertain knowledge.

True that! Welcome to Less Wrong.

ETA: Reminder that we have a rough community norm against downvoting first posts when they seem to be in good faith.
0[anonymous]14y
All human beings create knowledge - masses of it. Certain ideas can and do impair a person's creativity, but it is always possible to learn and to change one's ideas.

It's not just conjectures, it's "conjectures and refutations". Knowledge is created by advancing conjectural explanations to solve a problem and then criticizing those conjectures in an attempt to refute them. The goal is to find a conjecture that can withstand all criticisms we can think of and to refute all rival conjectures.

No, it never worked. Not a bit. That's what I mean by myth.

Theories are objective. Whether you think a theory is true or false has no bearing on whether it is in fact true or false. Moreover, how do you assign a probability to a complex real-world theory like, say, multiversal quantum mechanics? What counts is whether the theory has stood up to criticism as an explanation of a problem or set of problems. If it has, who cares about how probable you think it is? It's not the probability that you should care about, it's the explanation. Above all else, we should try to find explanations for things; explanations are the most important kind of knowledge.

Knowledge is always uncertain, yes, but it is impossible to objectively quantify the uncertainty. Put another way, you cannot know what you do not yet know. Theories can be wrong in all sorts of ways, but you have no way of knowing in advance how or if a theory will go wrong. It's not a definitional dispute.

OK, we agree on that!
3khafra14y
Probability is subjectively objective. All conjectures/models are wrong, but some are useful to the extent that they successfully constrain expected experience.

Reincarnation. It's a central feature of randomness that events repeat if you simply have enough time.

If we live in a purely random multiverse that big-bangs due to quantum fluctuations every 10^10^X years, then given enough time we will be reborn after we die. Sure, most of the time you won't remember, but if you wait long enough you will get reincarnated atom by atom.

5Nick_Tarleton14y
What does taking this seriously imply?
6CronoDAS14y
Probably nothing.