I get that this is a consistent way of asking and answering questions, but I’m not sure this is actually helpful with doing science.
If, say, universes 1 and 2 contain TREE(3) copies of me while universes 3 and 4 contain BusyBeaver(1000) copies, then I still don’t know which I’m more likely to be in, unless I can somehow work out which of these vast numbers is vaster. Regular scientific inference is just going to completely ignore questions as odd as this, because it simply has to. It’s going to tell me that if measurements of background radiation keep coming out
...Thanks Stuart.
The difficulty is that, by construction, there are infinitely many copies of me in each universe (if the universes are all infinite) or there are a colossally huge number of copies of me in each universe, so big that it saturates my utility bounds (assuming that my utilities are finite and bounded, because if they’re not, the decision theory leads to chaotic results anyway).
So SIA is not an approach to anthropics (or science in general) which allows us to conclude we are probably in universe 1 or 2 (rather than 3 or 4). All SIA really says is
...Hi Stuart. It’s a while since I’ve posted.
Here’s one way of asking the question which does lead naturally to the Doomsday answer.
Consider two universes. They’re both infinite (or if you don’t like actual infinities, are very very large, so they both have a really huge number of civilisations).
In universe 1, almost all the civilisations die off before spreading through space, so that the average population of a civilisation through time is less than a trillion.
In universe 2, a fair proportion of the civilisations survive and grow to galaxy-size or bigger, s
...I think by "logical infallibility" you really mean "rigidity of goals" i.e. the AI is built so that it always pursues a fixed set of goals, precisely as originally coded, and has no capability to revise or modify those goals. It seems pretty clear that such "rigid goals" are dangerous unless the statement of goals is exactly in accordance with the designers' intentions and values (which is unlikely to be the case).
The problem is that an AI with "flexible" goals (ones which it can revise and re-write over time) is als...
Consider the following decision problem which I call the "UDT anti-Newcomb problem". Omega is putting money into boxes by the usual algorithm, with one exception. It isn't simulating the player at all. Instead, it simulates what a UDT agent would do in the player's place.
This was one of my problematic problems for TDT. I also discussed some Sneaky Strategies which could allow TDT, UDT or similar agents to beat the problem.
Presumably anything caused to exist by the AI (including copies, sub-agents, other AIs) would have to count as part of the power(AI) term? So this stops the AI spawning monsters which simply maximise U.
One problem is that any really valuable things (under U) are also likely to require high power. This could lead to an AI which knows how to cure cancer but won't tell anyone (because that will have a very high impact, hence a big power(AI) term). That situation is not going to be stable; the creators will find it irresistible to hack the U and get it to speak up.
I had a look at this: the KCA (Kolmogorov Complexity) approach seems to match my own thoughts best.
I'm not convinced about the "George Washington" objection. It strikes me that a program which extracts George Washington as an observer from inside a wider program "u" (modelling the universe) wouldn't be significantly shorter than a program which extracts any other human observer living at about the same time. Or indeed, any other animal meeting some crude definition of an observer.
Searching for features of human interest (like "le...
Upvoted for acknowledging a counterintuitive consequence, and "biting the bullet".
One of the most striking things about anthropics is that (seemingly) whatever approach is taken, there are very weird conclusions. For example: Doomsday arguments, Simulation arguments, Boltzmann brains, or a priori certainties that the universe is infinite. Sometimes all at once.
Taken survey.
If I understand correctly, this approach to anthropics strongly favours a simulation hypothesis: the universe is most likely densely packed with computing material ("computronium") and much of the computational resource is dedicated to simulating beings like us. Further, it also supports a form of Doomsday Hypothesis: simulations mostly get switched off before they start to simulate lots of post-human people (who are not like us) and the resource is then assigned to running new simulations (back at a human level).
Have I misunderstood?
One very simple resolution: observing a white shoe (or yellow banana, or indeed anything which is not a raven) very slightly increases the probability of the hypothesis "There are no ravens left to observe: you've seen all of them". Under the assumption that all observed ravens were black, this "seen-em-all" hypothesis then clearly implies "All ravens are black". So non-ravens are very mild evidence for the universal blackness of ravens, and there is no paradox after all.
I find this resolution quite intuitive.
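To make this concrete, here is a toy Bayes calculation (a minimal sketch in Python; the prior and probabilities are made up purely for illustration):

```python
# Toy numbers, for illustration only.
prior_seen_em_all = 0.10        # P(H1): every raven has already been observed (and all were black)
prior_some_left   = 0.90        # P(H2): some ravens remain unobserved

p_nonraven_given_h1 = 1.0       # if no ravens are left, the next observation must be a non-raven
p_nonraven_given_h2 = 0.999     # if some ravens remain, a non-raven is still very likely

evidence = (prior_seen_em_all * p_nonraven_given_h1 +
            prior_some_left   * p_nonraven_given_h2)
posterior_seen_em_all = prior_seen_em_all * p_nonraven_given_h1 / evidence

print(posterior_seen_em_all)    # ~0.1001: a tiny increase, just as the resolution predicts
```

Since the "seen-em-all" hypothesis (together with all observed ravens having been black) implies "All ravens are black", that tiny increase carries over to the universal claim.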
P.S. If I draw one supportive conclusion from this discussion, it is that long-range climate forecasts are very likely to be wrong, simply because the inputs (radiative forcings) are impossible to forecast with any degree of accuracy.
Even if we'd had perfect GCMs in 1900, forecasts for the 20th century would likely have been very wrong: no one could have predicted the relative balance of CO2, other greenhouse gases and sulfates/aerosols (e.g. no-one could have guessed the pattern of sudden sulfates growth after the 1940s, followed by levelling off after t...
Actually, it's somewhat unclear whether the IPCC scenarios did better than a "no change" model -- they certainly did over the short period shown, but perhaps not over a longer period in which temperatures moved in other directions.
There are certainly periods when temperatures moved in a negative direction (1940s-1970s), but then the radiative forcings over those periods (combination of natural and anthropogenic) were also negative. So climate models would also predict declining temperatures, which indeed is what they do "retrodict"...
Thanks for a comprehensive summary - that was helpful.
It seems that A&G contacted the working scientists to identify papers which (in the scientists' view) contained the most credible climate forecasts. Not many responded, but 30 referred to the recent (at the time) IPCC WP1 report, which in turn referenced and attempted to summarize over 700 primary papers. There also appear to have been a bunch of other papers cited by the surveyed scientists, but the site has lost them. So we're somewhat at a loss to decide which primary sources climate scientists ...
On Critique #1:
Since you are using Real Climate and Skeptical Science as sources, did you read what they had to say about the Armstrong and Green paper and about Nate Silver's chapter?
Gavin Schmidt's post was short, funny but rude; however ChrisC's comment looks much more damning if true. Is it true?
Here is Skeptical Science on Nate Silver. It seems the main cause of error in Hansen's early 1988 forecast was an assumed climate sensitivity greater than that of the more recent models and calculations (4.2 degrees rather than 3 degrees). Whereas IPCC's 1990...
Actually, Kepler is able to determine both size and mass of planet candidates, using the method of transit photometry.
For further info, I found a non-paywalled copy of Buchhave et al.'s Nature paper. Figure 3 plots planet radius against star metallicity, and some of the planets are clearly of Earth-radius or smaller. I very much doubt that it is possible to form gas "giants" of Earth size, and in any case they would have a mass much lower than Earth mass, so would stand out immediately.
It might do, except that the recent astronomical evidence is against that: solar systems with sufficient metallicity to form rocky planets were appearing within a couple of billion years after the Big Bang. See here for a review.
Hmmm... I'll have a go. One response is that the "fully general counter argument" is a true counter argument. You just used a clever rhetorical trick to stop us noticing that.
If what you are calling "efficiency" is not working for you, then you are - ahem - just not being very efficient! More revealingly, you have become fixated on the "forms" of efficiency (the metrics and tick boxes) and have lost track of the substance (adopting methods which take you closer to your true goals, rather than away from them). So you have steelmanned a criticism of formal efficiency, but not of actual efficiency.
Stephen McIntyre isn't a working climate scientist, but his criticism of Mann's statistical errors (which aren't necessarily relevant to the main arguments for AGW) has been acknowledged as essentially correct. I also took a reasonably detailed look at the specifics of the argument
Did you have a look at these responses? Or at Mann's book on the subject?
There are a number of points here, but the most compelling is that the statistical criticisms were simply irrelevant. Contrary to McIntyre and McKitrick's claims, the differences in principal component m...
On funding, it can be difficult to trace: see this article in Scientific American and the original paper plus the list of at least 91 climate counter-movement organisations, page 4, which have an annual income of over $900 million. A number of these organisations are known to have received funding by companies like Exxon and Koch Industries, though the recent trend appears to be more opaque funding through foundations and trusts.
On your particular sources, Climate Audit is on that list; also, from his Wikipedia bio it appears that Steve McIntyre was the fo...
I've noticed that you've listed a lot of secondary sources (books, blogs, IPCC summaries) but not primary sources (published papers by scientists in peer-reviewed journals). Is there a reason for this e.g. that you do not have access to the primary sources, or find them indigestible?
If you do need to rely on secondary sources, I'd suggest to focus on books and blogs whose authors are also producing the primary sources. Of the blogs you mention, I believe that Real Climate and Skeptical Science are largely authored by working climate scientists, whereas the...
This sort of scenario might work if Stage 1 takes a minimum of 12 billion years, so that life has to first evolve slowly in an early solar system, then hop to another solar system by panspermia, then continue to evolve for billions of years more until it reaches multicellularity and intelligence. In that case, almost all civilisations will be emerging about now (give or take a few hundred million years), and we are either the very first to emerge, or others have emerged too far away to have reached us yet. This seems contrived, but gets round the need for a late filter.
This all looks clever, apart from the fact that the AI becomes completely indifferent to arbitrary changes in its value system. The way you describe it, the AI will happily and uncomplainingly accept a switch from a friendly v (such as promoting human survival, welfare and settlement of Galaxy) to an almost arbitrary w (such as making paperclips), just by pushing the right "update" buttons. An immediate worry is about who will be in charge of the update routine, and what happens if they are corrupt or make a mistake: if the AI is friendly, then i...
Thanks.... Upvoted for honest admission of error.
Or moving from conspiracy land, big budget cuts to climate research starting in 2009 might have something to do with it.
P.S. Since you started this sub-thread and are clearly still following it, are you going to retract your claims that CRU predicted "no more snow in Britain" or that Hansen predicted Manhattan would be underwater by now? Or are you just going to re-introduce those snippets in a future conversation, and hope no-one checks?
Seems like a bad proxy to me. Is snowfall really that hard a metric to find...?
Presumably not, though since I'm not making up Met Office evidence (and don't have time to do my own analysis) I can only comment on the graphs which they themselves chose to plot in 2009. Snowfall was not one of those graphs (whereas it was in 2006).
However, the graphs of mean winter temperature, maximum winter temperature, and minimum winter temperature all point to the same trend as the air frost and heating-degree-day graphs. It would be surprising if numbers of days of ...
I'm sorry, but you are still making inaccurate claims about what CRU predicted and over what timescales.
The 20 year prediction referred specifically to heavy snow becoming unexpected and causing chaos when it happens. I see no reason at all to believe that will be false, or that it will have only a slim chance of being true.
The vague "few year" claim referred to snow becoming "rare and exciting". But arguably, that was already true in 2000 at the time of the article (which was indeed kind of the point of the article). So it's not necess...
P.S. On the more technical points, the 2009 reports do not appear to plot the number of days of snow cover or cold spells (unlike the 2006 report) so I simply referred to the closest proxies which are plotted.
The "filtering" is indeed a form of local smoothing transform (other parts of the report refer to decadal smoothing) and this would explains why the graphs stop in 2007, rather than 2009: you really need a few years either side of the plotted year to do the smoothing. I can't see any evidence that the decline in the 80s was somehow factored into the plot in the 2000s.
I'm sorry, I didn't realize 'within a few years' was so vague in English that it could easily embrace decades and I'm being tendentious in thinking that after 14 years we can safely call that prediction failed.
Got it - so the semantics of "a few years" is what you are basing the "failed prediction" claim on. Fair enough.
I have to say though that I read the "few years" part as an imprecise period relating to an imprecise qualitative prediction (that snow would become "rare and exciting"). Which as far as my family...
Sigh... The only dated prediction in the entire article related to 20 years, not 14 years, and the claim for 20 years was that snow would "probably" cause chaos then. Which you've just agreed is very likely to be true (based on some recent winters where some unexpected snow did cause chaos), but perhaps not that surprising (the quote did not in fact claim there would be more chaos than in the 1980s and 1990s).
All other claims had no specific dates, except to suggest generational changes (alluding to a coming generation of kids who would not have ...
What's the date?
By your reaction, and the selective down votes, I have apparently fallen asleep, it is the 2020s already, and a 20-year prediction is already falsified.
But in answer to your questions:
A) Heavy snow does indeed already cause chaos in England when it happens (just google the last few years)
B) My kids do indeed find snow a rare and exciting event (in fact there were zero days of snow here last winter, and only a few days the winter before)
C) While my kids do have a bit of firsthand knowledge of snow, it is vastly less than my own experience...
I think we have agreement that:
A) The newspaper headline "Snowfalls are now just a thing of the past" was incorrect
B) The Climatic Research Unit never actually made such a prediction
C) The only quoted statement with a timeline was for a period of 20 years, and spoke of heavy snow becoming rarer (rather than vanishing)
D) This was an extrapolation of a longer term trend, which continued into the early 2000s (using Met Office data published in 2006, of course after the Independent story)
E) It is impossible to use short periods (~10 years since 2006)...
"Over the 2000s" is certainly too short a period to reach significant conclusions. However the longer term trends are pretty clear. See this Met Office Report from 2006.
Figure 8 shows a big drop in the length of cold spells since the 1960s. Figure 13 shows the drop in annual days of snow cover. The trend looks consistent across the country.
Regarding the wine point, it is doubtful if wine grapes ever grew in Newfoundland, as the Norse term "Vinland" may well refer to a larger area. From the Wikipedia article:
...the southernmost limit of the Norse exploration remains a subject of intense speculation. Samuel Eliot Morison (1971) suggested the southern part of Newfoundland; Erik Wahlgren (1986) Miramichi Bay in New Brunswick; and Icelandic climate specialist Pall Bergthorsson (1997) proposed New York City.[26] The insistence in all the main historical sources that grapes were found in V
Reading your referenced article (Independent 2000):
Heavy snow will return occasionally, says Dr Viner, but when it does we will be unprepared. "We're really going to get caught out. Snow will probably cause chaos in 20 years time," he said.
Clearly the Climatic Research Unit was not predicting no more snow in Britain by 2014.
Regarding the alleged "West Side Highway underwater" prediction, see Skeptical Science. It appears Hansen's original prediction timeframe was 40 years not 20 years, and conditional on a doubling of CO2 by then.
Note that this also messes up counterfactual accounts of knowledge as in "A is true and I believe A; but if A were not true then I would not believe A". (If I were not insane, then I would not believe I am Nero, so I would not believe I am insane.)
We likely need some notion of "reliability" or "reliable processes" in an account of knowledge, like "A is true and I believe A and my belief in A arises through a reliable process". Believing things through insanity is not a reliable process.
Gettier problems arise because processes that are usually reliable can become unreliable in some (rare) circumstances, but still (by even rarer chance) get the right answers.
Except that acting to prevent other AIs from being built would also encroach on human liberty, and probably in a very major way if it was to be effective! The AI might conclude from this that liberty is a lost cause in the long run, but it is still better to have a few extra years of liberty (until the next AI gets built), rather than ending it right now (through its own powerful actions).
Other provocative questions: how much is liberty really a goal in human values (when taking the CEV for humanity as a whole, not just liberal intellectuals)? How much is ...
This also creates some interesting problems... Suppose a very powerful AI is given human liberty as a goal (or discovers that this is a goal using coherent extrapolated volition). Then it could quickly notice that its own existence is a serious threat to that goal, and promptly destroy itself!
One issue here is that worlds with an "almost-friendly" AI (one whose friendliness was botched in some respect) may end up looking like siren or marketing worlds.
In that case, worlds as bad as sirens will be rather too common in the search space (because AIs with botched friendliness are more likely than AIs with true friendliness) and a satisficing approach won't work.
Well you can make such comparisons if you allow for empathic preferences (imagine placing yourself in someone else's position, and ask how good or bad that would be, relative to some other position). Also the fact that human behavior doesn't perfectly fit a utility function is not in itself a huge issue: just apply a best fit function (this is the "revealed preference" approach to utility).
Ken Binmore has a rather good paper on this topic, see here.
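For a rough idea of what "best fit" can mean in practice, here is a standard logit (Bradley-Terry style) fit of utilities to hypothetical pairwise choices; this is just a sketch of the revealed-preference idea, not Binmore's own method:

```python
import numpy as np

# Hypothetical choice data: each pair (a, b) records that option a was chosen over option b.
choices = [(0, 1), (0, 2), (1, 2), (2, 3), (0, 3), (1, 3), (1, 0), (3, 2)]
n_items = 4

u = np.zeros(n_items)        # fitted utilities, one per option
lr = 0.1
for _ in range(2000):        # simple gradient ascent on the log-likelihood
    grad = np.zeros(n_items)
    for a, b in choices:
        p = 1.0 / (1.0 + np.exp(u[b] - u[a]))   # modelled P(choose a over b)
        grad[a] += 1.0 - p
        grad[b] -= 1.0 - p
    u += lr * grad
    u -= u.mean()            # utilities are identified only up to an additive constant

print(np.round(u, 2))        # higher value = revealed as preferred more often
```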
OK, I also got a "non-cheat" solution: unfortunately, it is non-constructive and uses the Nkvbz bs Pubvpr, so it still feels like a bit of a cheat. Is there a solution which doesn't rely on that (or is it possible to show there is no solution in such a case?)
Oh dear, I suppose that rules out other "cheats" then: such as prisoner n guessing after n seconds. At any point in time, only finitely many have guessed, so only finitely many have guessed wrong. Hence the prisoners can never be executed. (Though they can never be released either.)
I suspect an April Fool:
Cevfbare a+1 gnxrf gur ung sebz cevfbare a naq chgf vg ba uvf bja urnq. Gura nyy cevfbaref (ncneg sebz cevfbare 1) thrff gur pbybe pbeerpgyl!
...As one example, imagine a long chain of possible people whose experiences and memories are indistinguishable from immediate neighbours in the chain (and they are counterparts of their neighbours). But there is a cumulative "drift" along the chain, so that the ends are very different from each other (and not counterparts).
UDT doesn't seem to work this way. In UDT, "you" are not a physical entity but an abstract decision algorithm. This abstract decision algorithm is correlated to different extent with different physical entities in d
It is not the case if the money can be utilized in a manner with long term impact.
OK, I was using $ here as a proxy for utils, but technically you're right: the bet should be expressed in utils (as for the general definition of a chance that I gave in my comment). Or if you don't know how to bet in utils, use another proxy which is a consumptive good and can't be invested (e.g. chocolate bars or vouchers for a cinema trip this week). A final loop-hole is the time discounting: the real versions of you mostly live earlier than the sim versions of you, so ...
So: if a bet is offered that you are a sim (in some form of computronium) and it becomes possible to test that (and so decide the bet one way or another), you would bet heavily on being a sim?
It depends on the stakes of the bet.
I thought we discussed an example earlier in the thread? The gambler pays $1000 if not in a simulation; the bookmaker pays $1 if the gambler is in a simulation. In terms of expected utility, it is better for "you" (that is, all linked instances of you) to take the gamble, even if the vast majority of light-cones don...
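To spell out the arithmetic with hypothetical copy-counts (the numbers are purely illustrative; the real ratio is exactly what is in dispute):

```python
# Purely illustrative copy-counts.
n_sim, n_real = 10**6, 1              # linked instances: simulated vs. non-simulated
win_if_sim, lose_if_real = 1, 1000    # dollars per copy, as in the bet above

aggregate = n_sim * win_if_sim - n_real * lose_if_real
print(aggregate)   # 999000 > 0: the bet pays off in aggregate iff n_sim > 1000 * n_real
```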
I don't think it does. If we are not in a sim, our actions have potentially huge impact since they can affect the probability and the properties of a hypothetical expanded post-human civilization.
So: if a bet is offered that you are a sim (in some form of computronium) and it becomes possible to test that (and so decide the bet one way or another), you would bet heavily on being a sim? But on the off-chance that you are not a sim, you're going to make decisions as if you were in the real world, because those decisions (when suitably generalized across a...
No, it can be located absolutely anywhere. However you're right that the light cones with vertex close to the Big Bang will probably have large weight due to their low K-complexity.
Ah, I see what you're getting at. If the vertex is at the Big Bang, then the shortest programs basically simulate a history of the observable universe. Just start from a description of the laws of physics and some (low entropy) initial conditions, then read in random bits whenever there is an increase in entropy. (For technical reasons the programs will also need to simulate a slightly lar...
As a result, the effective discount falls off as 2^{-Kolmogorov complexity of t} which is only slightly faster than 1/t.
It is about 1/t x 1/log t x 1/log log t etc. for most values of t (taking base 2 logarithms). There are exceptions for very regular values of t.
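For anyone wondering where that product comes from, here is an informal sketch (the standard iterated-logarithm bound on the complexity of a typical integer, stated loosely rather than proved):

```latex
% For a typical t, a self-delimiting description needs about log2(t) bits for t itself,
% plus about log2(log2(t)) bits to delimit that description, and so on.
K(t) \approx \log_2 t + \log_2\log_2 t + \log_2\log_2\log_2 t + \dotsb
\qquad\Longrightarrow\qquad
2^{-K(t)} \approx \frac{1}{t}\cdot\frac{1}{\log_2 t}\cdot\frac{1}{\log_2\log_2 t}\cdots
```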
Incidentally, I've been thinking about a similar weighting approach towards anthropic reasoning, and it seems to avoid a strong form of the Doomsday Argument (one where we bet heavily against our civilisation expanding). Imagine listing all the observers (or observer moments) in order of appea...
Thanks again for the useful response.
My initial argument was really a question “Is there any approach to anthropic reasoning that allows us to do basic scientific inference, but does not lead to Doomsday conclusions?” So far I’m skeptical.
The best response you’ve got is I think twofold.