All of drnickbone's Comments + Replies

Thanks again for the useful response.

My initial argument was really a question “Is there any approach to anthropic reasoning that allows us to do basic scientific inference, but does not lead to Doomsday conclusions?” So far I’m skeptical.

The best response you've got is, I think, twofold.

  1. Use SIA but please ignore the infinite case (even though the internal logic of SIA forces the infinite case) because we don’t know how to handle it. When applying SIA to large finite cases, truncate universes by a large volume cutoff (4d volume) rather than by a large popu
... (read more)

I get that this is a consistent way of asking and answering questions, but I’m not sure this is actually helpful with doing science.

If, say, universes 1 and 2 contain TREE(3) copies of me while universes 3 and 4 contain BusyBeaver(1000) copies, then I still don't know which I'm more likely to be in, unless I can somehow work out which of these vast numbers is vaster. Regular scientific inference is just going to completely ignore questions as odd as this, because it simply has to. It's going to tell me that if measurements of background radiation keep coming out

... (read more)
4Stuart_Armstrong
These are valid points, but we have wandered a bit away from the initial argument, and we're now talking about numbers that can't be compared (my money is on TREE(3) being smaller in this example, but that's irrelevant to your general point), or ways of truncating in the infinite case. But we seem to have solved the finite-and-comparable case. Now, back to the infinite case. First of all, there may be a correct decision even if probabilities cannot be computed. If we have a suitable utility function, we may decide simply not to care about what happens in universes that are of the type 5, which would rule them out completely. Or maybe the truncation can be improved slightly. For example, we could give each observer a bubble of radius 20 mega-light years, which is defined according to their own subjective experience: how many individuals do they expect to encounter within that radius, if they were made immortal and allowed to explore it fully. Then we truncate by this subjective bubble, or something similar. But yeah, in general, the infinite case is not solved.

Thanks Stuart.

The difficulty is that, by construction, there are infinitely many copies of me in each universe (if the universes are all infinite) or there are a colossally huge number of copies of me in each universe, so big that it saturates my utility bounds (assuming that my utilities are finite and bounded, because if they’re not, the decision theory leads to chaotic results anyway).

So SIA is not an approach to anthropics (or science in general) which allows us to conclude we are probably in universe 1 or 2 (rather than 3 or 4). All SIA really says is

... (read more)
4Stuart_Armstrong
If we set aside infinity, which I don't know how to deal with, then the SIA answer does not depend on utility bounds - unlike my anthropic decision theory post. Q1: "How many copies of people (currently) like me are there in each universe?" is well-defined in all finite settings, even huge ones. No, I mean not many, as compared with how many there are in universes 1 and 2. Other observers are not relevant to Q1. I'll reiterate my claim that different anthropic probability theories are "correct answers to different questions": https://www.lesswrong.com/posts/nxRjC93AmsFkfDYQj/anthropic-probabilities-answering-different-questions

Hi Stuart. It’s a while since I’ve posted.

Here’s one way of asking the question which does lead naturally to the Doomsday answer.

Consider two universes. They’re both infinite (or if you don’t like actual infinities, are very very large, so they both have a really huge number of civilisations).

In universe 1, almost all the civilisations die off before spreading through space, so that the average population of a civilisation through time is less than a trillion.

In universe 2, a fair proportion of the civilisations survive and grow to galaxy-size or bigger, s

... (read more)
4Stuart_Armstrong
"How many copies of people like me are there in each universe?" Then as long as your copies know that 3K has been observed, and excluding simulations and such, the answers are "(a lot, a lot, not many, not many)" in the four universes (I'm interpreting "die off before spreading through space" as "die off just before spreading through space"). This is the SIA answer, since I asked the SIA question.

I think by "logical infallibility" you really mean "rigidity of goals" i.e. the AI is built so that it always pursues a fixed set of goals, precisely as originally coded, and has no capability to revise or modify those goals. It seems pretty clear that such "rigid goals" are dangerous unless the statement of goals is exactly in accordance with the designers' intentions and values (which is unlikely to be the case).

The problem is that an AI with "flexible" goals (ones which it can revise and re-write over time) is als... (read more)

1[anonymous]
That really is not what I was saying. The argument in the paper is a couple of levels deeper than that. It is about .... well, now I have to risk rewriting the whole paper. (I have done that several times now). Rigidity per se is not the issue. It is about what happens if an AI knows that its goals are rigidly written, in such a way that when the goals are unpacked it leads the AI to execute plans whose consequences are massively inconsistent with everything the AI knows about the topic. Simple version. Suppose that a superintelligent Gardener AI has a goal to go out to the garden and pick some strawberries. Unfortunately its goal unpacking mechanism leads it to the CERTAIN conclusion that it must use a flamethrower to do this. The predicted consequence, however, is that the picked strawberries will be just smears of charcoal, when they are delivered to the kitchen. Here is the thing: the AI has background knowledge about everything in the world, including strawberries, and it also hears the protests from the people in the kitchen when he says he is going to use the flamethrower. There is massive evidence, coming from all that external information, that the plan is just wrong, regardless of how certain its planning mechanism said it was. Question is, what does the AI do about this? You are saying that it cannot change its goal mechanism, for fear that it will turn into a Terminator. Well, maybe or maybe not. There are other things it could do, though, like going into safe mode. However, suppose there is no safe mode, and suppose that the AI also knows about its own design. For that reason, it knows that this situation has come about because (a) its programming is lousy, and (b) it has been hardwired to carry out that programming REGARDLESS of all this understanding that it has, about the lousy programming and the catastrophic consequences for the strawberries. Now, my "doctrine of logical infallibility" is just a shorthand phrase to describe a superintelligent

Consider the following decision problem which I call the "UDT anti-Newcomb problem". Omega is putting money into boxes by the usual algorithm, with one exception. It isn't simulating the player at all. Instead, it simulates what would a UDT agent do in the player's place.

This was one of my problematic problems for TDT. I also discussed some Sneaky Strategies which could allow TDT, UDT or similar agents to beat the problem.

5Squark
Hi drnickbone, thx for pointing this out! I added links to your posts.

Presumably anything caused to exist by the AI (including copies, sub-agents, other AIs) would have to count as part of the power(AI) term? So this stops the AI spawning monsters which simply maximise U.

One problem is that any really valuable things (under U) are also likely to require high power. This could lead to an AI which knows how to cure cancer but won't tell anyone (because that will have a very high impact, hence a big power(AI) term). That situation is not going to be stable; the creators will find it irresistible to hack the U and get it to speak up.
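One schematic way to read this worry (my gloss, not necessarily the exact formulation in Stuart's post): if the agent chooses actions by maximising an impact-penalised objective with some trade-off weight λ,

```latex
a^{*} \;=\; \arg\max_a \Big[\, U(a) \;-\; \lambda \cdot \mathrm{power}(\mathrm{AI}\text{ after }a) \,\Big],
```

then "announce a cancer cure" scores high on U but also leaves the AI having had a very large effect on the world, so for a large enough λ the penalty wins and the agent keeps quiet, which is the unstable situation described above.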

2Stuart_Armstrong
I'm looking at ways round that kind of obstacle. I'll be posting them someday if they work.

I had a look at this: the KCA (Kolmogorov Complexity) approach seems to match my own thoughts best.

I'm not convinced about the "George Washington" objection. It strikes me that a program which extracts George Washington as an observer from inside a wider program "u" (modelling the universe) wouldn't be significantly shorter than a program which extracts any other human observer living at about the same time. Or indeed, any other animal meeting some crude definition of an observer.

Searching for features of human interest (like "le... (read more)

0Brian_Tomasik
Nice point. :) That said, your example suggests a different difficulty: People who happen to be special numbers n get higher weight for apparently no reason. Maybe one way to address this fact is to note that what number n someone has is relative to (1) how the list is enumerated and (2) what universal Turing machine is being used for KC in the first place, and maybe averaging over these arbitrary details would blur the specialness of, say, the 1-billionth observer according to any particular coding scheme. Still, I doubt the KCs of different people would be exactly equal even after such adjustments.
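A rough way to see both points at once (a sketch using prefix Kolmogorov complexity; the constants depend on the universal machine): if the extraction program just says "output the n-th observer in u under some fixed enumeration", then roughly

```latex
\text{weight of observer } n \;\approx\; 2^{-K(u)\,-\,K(n \mid u)}, \qquad K(n \mid u) \;\le\; \log_2 n + O(\log\log n).
```

The log_2 n term is nearly the same for George Washington as for any of his contemporaries, which is the point in the parent comment; but K(n) dips well below log_2 n for compressible indices such as n = 10^9, which is the "special numbers" worry above.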

Upvoted for acknowledging a counterintuitive consequence, and "biting the bullet".

One of the most striking things about anthropics is that (seemingly) whatever approach is taken, there are very weird conclusions. For example: Doomsday arguments, Simulation arguments, Boltzmann brains, or a priori certainties that the universe is infinite. Sometimes all at once.

0Brian_Tomasik
Yes. :) The first paragraph here identifies at least one problem with every anthropic theory I'm aware of.

If I understand correctly, this approach to anthropics strongly favours a simulation hypothesis: the universe is most likely densely packed with computing material ("computronium") and much of the computational resource is dedicated to simulating beings like us. Further, it also supports a form of Doomsday Hypothesis: simulations mostly get switched off before they start to simulate lots of post-human people (who are not like us) and the resource is then assigned to running new simulations (back at a human level).

Have I misunderstood?

4Brian_Tomasik
Yes, that's right. Note that SIA also favors sim hypotheses, but it does so less strongly because it doesn't care whether the sims are of Earth-like humans or of weirder creatures. Here's a note I wrote to myself yesterday: ---------------------------------------- Like SIA, my PSA anthropics favors the sim arg in a stronger way than normal anthropics. The sim arg works regardless of one's anthropic theory because it requires only a principle of indifference over indistinguishable experiences. But it's a trilemma, so it might be that humans go extinct or post-humans don't run early-seeming sims. Given the existence of aliens and other universes, the ordinary sim arg pushes more strongly for us being a sim because even if humans go extinct or don't run sims, whichever civilization out there runs lots of sims should have lots of sims of minds like ours, so we should be in their sims. PSA doesn't even need aliens. It directly penalizes hypotheses that predict fewer copies of us in a given region of spacetime. Say we're deciding between two hypotheses, H1 and H2, where H1 predicts a billion-fold fewer copies of us in that region; then H1 would have a billion-fold bigger probability penalty than H2. Even if H2 started out being millions of times less probable than H1, it would end up being hundreds of times more probable. Also note that even if we're not in a sim, then PSA, like SIA, yields Katja's doomsday argument based on the Great Filter. Either way it looks very unlikely there will be a far future, ignoring model uncertainty and unknown unknowns.
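To spell out the arithmetic behind that last comparison (illustrative numbers only, assuming the PSA update just multiplies the prior odds by the ratio of predicted copy counts N_2/N_1):

```latex
\frac{P(H_2 \mid \text{us})}{P(H_1 \mid \text{us})} \;=\; \frac{P(H_2)}{P(H_1)} \times \frac{N_2}{N_1} \;\approx\; \frac{1}{10^{6}} \times \frac{10^{9}}{1} \;=\; 10^{3},
```

so a billion-fold copy-count advantage overwhelms a million-fold prior disadvantage.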

One very simple resolution: observing a white shoe (or yellow banana, or indeed anything which is not a raven) very slightly increases the probability of the hypothesis "There are no ravens left to observe: you've seen all of them". Under the assumption that all observed ravens were black, this "seen-em-all" hypothesis then clearly implies "All ravens are black". So non-ravens are very mild evidence for the universal blackness of ravens, and there is no paradox after all.

I find this resolution quite intuitive.
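A minimal numerical sketch of this resolution (a toy model with made-up numbers, not a serious prior over worlds): put a flat prior on the total number of ravens R beyond the k black ones already seen, then update on drawing one more object at random and finding that it is not a raven. The probability of "no ravens are left to observe" (R = k) goes up, and given the observations so far that hypothesis entails "all ravens are black".

```python
from fractions import Fraction

# Toy world (all numbers are assumptions for illustration):
M = 100      # objects in the world
seen = 20    # objects observed so far
k = 10       # ravens among them, all black

# Flat prior over the total number of ravens R, given that exactly k ravens
# were found among the `seen` objects observed so far.
support = list(range(k, M - (seen - k) + 1))
prior = {R: Fraction(1, len(support)) for R in support}

# Likelihood that the next uniformly drawn unobserved object is NOT a raven:
# there are M - seen unobserved objects, of which R - k are ravens.
def p_non_raven(R):
    return Fraction((M - seen) - (R - k), M - seen)

unnorm = {R: prior[R] * p_non_raven(R) for R in support}
Z = sum(unnorm.values())
posterior = {R: w / Z for R, w in unnorm.items()}

# "Seen-em-all" hypothesis: R == k, i.e. no unobserved ravens remain.
print(float(prior[k]), "->", float(posterior[k]))   # the probability rises
```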

P.S. If I draw one supportive conclusion from this discussion, it is that long-range climate forecasts are very likely to be wrong, simply because the inputs (radiative forcings) are impossible to forecast with any degree of accuracy.

Even if we'd had perfect GCMs in 1900, forecasts for the 20th century would likely have been very wrong: no one could have predicted the relative balance of CO2, other greenhouse gases and sulfates/aerosols (e.g. no-one could have guessed the pattern of sudden sulfates growth after the 1940s, followed by levelling off after t... (read more)

Actually, it's somewhat unclear whether the IPCC scenarios did better than a "no change" model -- it is certainly true over the short time period, but perhaps not over a longer time period where temperatures had moved in other directions.

There are certainly periods when temperatures moved in a negative direction (1940s-1970s), but then the radiative forcings over those periods (combination of natural and anthropogenic) were also negative. So climate models would also predict declining temperatures, which indeed is what they do "retrodict"... (read more)

0drnickbone
P.S. If I draw one supportive conclusion from this discussion, it is that long-range climate forecasts are very likely to be wrong, simply because the inputs (radiative forcings) are impossible to forecast with any degree of accuracy. Even if we'd had perfect GCMs in 1900, forecasts for the 20th century would likely have been very wrong: no one could have predicted the relative balance of CO2, other greenhouse gases and sulfates/aerosols (e.g. no-one could have guessed the pattern of sudden sulfates growth after the 1940s, followed by levelling off after the 1970s). And natural factors like solar cycles, volcanoes and El Niño/La Nina wouldn't have been predictable either. Similarly, changes in the 21st century could be very unexpected. Perhaps some new industrial process creates brand new pollutants with negative radiative forcing in the 2030s; but then the Amazon dies off in the 2040s, followed by a massive methane belch from the Arctic in the 2050s; then emergency geo-engineering goes into fashion in the 2070s (and out again in the 2080s); then in the 2090s there is a resurgence in coal, because the latest generation of solar panels has been discovered to be causing a weird new plague. Temperatures could be up and down like a yo-yo all century.

Thanks for a comprehensive summary - that was helpful.

It seems that A&G contacted the working scientists to identify papers which (in the scientists' view) contained the most credible climate forecasts. Not many responded, but 30 referred to the recent (at the time) IPCC WP1 report, which in turn referenced and attempted to summarize over 700 primary papers. There also appear to have been a bunch of other papers cited by the surveyed scientists, but the site has lost them. So we're somewhat at a loss to decide which primary sources climate scientists ... (read more)

-1VipulNaik
Actually, it's somewhat unclear whether the IPCC scenarios did better than a "no change" model -- it is certainly true over the short time period, but perhaps not over a longer time period where temperatures had moved in other directions. Co-author Green wrote a paper later claiming that the IPCC models did not do better than the no change model when tested over a broader time period: http://www.kestencgreen.com/gas-improvements.pdf But it's just a draft paper and I don't know if the author ever plans to clean it up or have it published. I would really like to see more calibrations and scorings of the models from a pure outside view approach over longer time periods. Armstrong was (perhaps wrongly) confident enough of his views that he decided to make a public bet claiming that the No Change scenario would beat out the other scenario. The bet is described at: http://www.theclimatebet.com/ Overall, I have high confidence in the view that models of climate informed by some knowledge of climate should beat the No Change model, though a lot depends on the details of how the competition is framed (Armstrong's climate bet may have been rigged in favor of No Change). That said, it's not clear how well climate models can do relative to simple time series forecasting approaches or simple (linear trend from radiative forcing + cyclic trend from ocean currents) type approaches. The number of independent out-of-sample validations does not seem to be enough and the predictive power of complex models relative to simple curve-fitting models seems to be low (probably negative). So, I think that arguments that say "our most complex, sophisticated models show X" should be treated with suspicion and should not necessarily be given more credence than arguments that rely on simple models and historical observations.
2VipulNaik
See the last sentence in my longer quote: It's not clear how much effort they put into this step, and whether e.g. they offered the Forecasting Audit Software for free to people they asked (if they were trying to sell the software, which they themselves created, that might have seemed bad). My guess is that most of the climate scientists they contacted just labeled them mentally along with the numerous "cranks" they usually have to deal with, and didn't bother engaging. I also am skeptical of some aspects of Armstrong and Green's exercise. But a first outside-view analysis that doesn't receive much useful engagement from insiders can only go so far. What would have been interesting was if, after Armstrong and Green published their analysis and it was somewhat clear that their critique would receive attention, climate scientists had offered a clearer and more direct response to the specific criticisms, and perhaps even read up more about the forecasting principles and the evidence cited for them. I don't think all climate scientists should have done so, I just think at least a few should have been interested enough to do it. Even something similar to Nate Silver's response would have been nice. And maybe that did happen -- if so, I'd like to see links. Schmidt's response, on the other hand, seems downright careless and bad. My focus here is the critique of insularity, not so much the effect it had on the factual conclusions. Basically, did climate scientists carefully consider forecasting principles (or statistical methods, or software engineering principles) then reject them? Had they never heard of the relevant principles? Did they hear about the principles, but dismiss them as unworthy of investigation? Armstrong and Green's audit may have been sloppy (though perhaps a first pass shouldn't be expected to be better than sloppy) but even if the audit itself wasn't much use, did it raise questions or general directions of inquiry worthy of investigation (or a sim

On Critique #1:

Since you are using Real Climate and Skeptical Science as sources, did you read what they had to say about the Armstrong and Green paper and about Nate Silver's chapter?

Gavin Schmidt's post was short, funny but rude; however ChrisC's comment looks much more damning if true. Is it true?

Here is Skeptical Science on Nate Silver. It seems the main cause of error in Hansen's early 1988 forecast was an assumed climate sensitivity greater than that of the more recent models and calculations (4.2 degrees rather than 3 degrees). Whereas IPCC's 1990... (read more)

0VipulNaik
In light of the portions I quoted from Armstrong and Green's paper, I'll look at Gavin Schmidt's post: The paper does cite many other sources than just the IPCC and the "hatchet job" on the Stern Report, including sources that evaluate climate models and their quality in general. ChrisC notes that the authors fail to cite the ~788 references for the IPCC Chapter 8. The authors claim to have a bibliography on their website that includes the full list of references given to them by all academics who suggested references. Unfortunately, as I noted in my earlier comment, the link to the bibliography from http://www.forecastingprinciples.com/index.php?option=com_content&view=article&id=78&Itemid=107 is broken. This doesn't reflect well on the authors (the site on the whole is a mess, with many broken links). Assuming, however, that the authors had put up the bibliography and that it was available as promised in the paper, this critique seems off the mark (though I'd have to see the bibliography to know for sure). This seems patently false given the contents of the paper as I quoted it, and the list of experts that they sought. In fact, it seems like such a major error that I have no idea how Schmidt could have made it if he'd read the paper. (Perhaps he had a more nuanced critique to offer, e.g., that the authors' survey didn't ask enough questions, or they should have tried harder, or contacted more people. But the critique as offered here smacks of incompetence or malice). [Unless Schmidt was reading an older version of the paper that didn't mention the survey at all. But I doubt that even if he was looking at an old version of the paper, it omitted all references to the survey.] First off, retrospective "predictions" of things that people already tacitly know, even though those things aren't explicitly used in tuning the models, are not that reliable. Secondly, it's possible (and likely) that Armstrong and Green missed some out-of-model tests and validations that
0VipulNaik
Here's a full list of the scientists that Armstrong and Green contacted -- the ones who sent a "useful response" are noted parenthetically. Note that of the 51 who responded, 42 were deemed as having given a useful response.
0VipulNaik
This comment was getting a bit long, so I decided to just post relevant stuff from Armstrong and Green first and then offer my own thoughts in a follow-up comment. Unfortunately, the Forecasting Principles website seems to be a mess. Their Global Warming Audit page: http://www.forecastingprinciples.com/index.php?option=com_content&view=article&id=78&Itemid=107 does link to a bibliography, but the link is broken (as is their global warming audit link, though the file is still on their website). (This is another example where experts in one field ignore best practices -- of maintaining working links to their writing -- so the insularity critique applies to forecasting experts). Continuing:

Actually, Kepler is able to determine both size and mass of planet candidates, using the method of transit photometry.

For further info, I found a non-paywalled copy of Buchhave et al.'s Nature paper. Figure 3 plots planet radius against star metallicity, and some of the planets are clearly of Earth-radius or smaller. I very much doubt that it is possible to form gas "giants" of Earth size, and in any case they would have a mass much lower than Earth mass, so would stand out immediately.
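For reference, the radius really does come straight out of the photometry: the fractional dip in stellar brightness during a transit is roughly the squared ratio of planet to star radius (ignoring limb darkening),

```latex
\delta \;\approx\; \left(\frac{R_p}{R_\star}\right)^{2}, \qquad \text{e.g.}\quad \left(\frac{R_\oplus}{R_\odot}\right)^{2} \approx \left(\frac{6.4\times10^{3}\,\mathrm{km}}{7.0\times10^{5}\,\mathrm{km}}\right)^{2} \approx 8\times10^{-5} \approx 80\ \mathrm{ppm},
```

which is the kind of signal Kepler was designed to detect.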

2Luke_A_Somers
I forgot about photometry.

It might do, except that the recent astronomical evidence is against that: solar systems with sufficient metallicity to form rocky planets were appearing within a couple of billion years after the Big Bang. See here for a review.

0Luke_A_Somers
Hmmmm. (ETA: following claim is incorrect) They're judging that the planets are rocky by measuring their mass, not by noticing that they're actually rocky. If you don't have a Jupiter-sized core out there sucking up all the gas, why would gas planets need to end up as giants? They naturally could do that - that happened with the star, after all, but it doesn't seem inevitable to me, and it might not even be common. In that case, the earth-mass planets would be gas planets after all. If you think this is a stretch, keep in mind that these are specifically in systems noted to be low metallicity. Suggesting that they might not be high in metals after all is not much of a stretch.

Hmmm... I'll have a go. One response is that the "fully general counter argument" is a true counter argument. You just used a clever rhetorical trick to stop us noticing that.

If what you are calling "efficiency" is not working for you, then you are - ahem - just not being very efficient! More revealingly, you have become fixated on the "forms" of efficiency (the metrics and tick boxes) and have lost track of the substance (adopting methods which take you closer to your true goals, rather than away from them). So you have steelmanned a criticism of formal efficiency, but not of actual efficiency.

1Stuart_Armstrong
Now we're getting somewhere :-)

Stephen McIntyre isn't a working climate scientist, but his criticism of Mann's statistical errors (which aren't necessarily relevant to the main arguments for AGW) has been acknowledged as essentially correct. I also took a reasonably detailed look at the specifics of the argument

Did you have a look at these responses? Or at Mann's book on the subject?

There are a number of points here, but the most compelling is that the statistical criticisms were simply irrelevant. Contrary to McIntyre and McKitrick's claims, the differences in principal component m... (read more)

On funding, it can be difficult to trace: see this article in Scientific American and the original paper plus the list of at least 91 climate counter-movement organisations, page 4, which have an annual income of over $900 million. A number of these organisations are known to have received funding by companies like Exxon and Koch Industries, though the recent trend appears to be more opaque funding through foundations and trusts.

On your particular sources, Climate Audit is on that list; also, from his Wikipedia bio it appears that Steve McIntyre was the fo... (read more)

I've noticed that you've listed a lot of secondary sources (books, blogs, IPCC summaries) but not primary sources (published papers by scientists in peer-reviewed journals). Is there a reason for this e.g. that you do not have access to the primary sources, or find them indigestible?

If you do need to rely on secondary sources, I'd suggest to focus on books and blogs whose authors are also producing the primary sources. Of the blogs you mention, I believe that Real Climate and Skeptical Science are largely authored by working climate scientists, whereas the... (read more)

1VipulNaik
As I mentioned in the post: I can't include a list of papers right now because the list of papers will itself be determined in real time during my inquiries, but I will link to the ones I reference at each stage of research. This needs to be unpacked. Scientists are driven by a range of motives including research prestige and ego (those who have made statements in the past want to have those statements vindicated), the desire to impress and influence peers, etc. Getting more funding is part of the status game. These incentives can distort their findings, not necessarily forever but for long enough (just like the climate system, the system of scientific discovery is not in equilibrium; there are lags). More importantly, simple cognitive and human biases can get in the way of proper analysis. One of the points I raise is that it's possible, based on Armstrong and Green's critique (I'm still investigating this) that climate scientists appear not to have consulted people in other relevant areas of expertise (specifically, forecasting and statistics). People in these areas of expertise have come up with a lot of relevant and counter-intuitive findings about how to go about this sort of tricky data analysis. Of the sources I list, which ones do you think are written or funded by people who fit this description? Judith Curry is also a working climate scientist. Stephen McIntyre isn't a working climate scientist, but his criticism of Mann's statistical errors (which aren't necessarily relevant to the main arguments for AGW) has been acknowledged as essentially correct. I also took a reasonably detailed look at the specifics of the argument, and although I can't have very high confidence, I'm inclined to believe that McIntyre was right. He seems to be sufficiently rigorous in his work and a sufficiently strong skeptic that his critiques are worth reading. Moreover, he rarely claims more confidence than is warranted: he isn't publishing his own theories of climate chan

This sort of scenario might work if Stage 1 takes a minimum of 12 billion years, so that life has to first evolve slowly in an early solar system, then hop to another solar system by panspermia, then continue to evolve for billions of years more until it reaches multicellularity and intelligence. In that case, almost all civilisations will be emerging about now (give or take a few hundred million years), and we are either the very first to emerge, or others have emerged too far away to have reached us yet. This seems contrived, but gets round the need for a late filter.

0[anonymous]
Not if evolution of multicellular organisms or complex nervous systems is a random (Poisson) process. That is to say, if the development of the first generation of multicellular life or intelligent life is a random fluke and not a gradual hill that can be optimized toward, then one should not expect behavior analogous to a progress bar. If it takes 12 billion years on average, and 12 billion years go by without such life developing, then such a result is still 12 billion years away.
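The property being invoked here is the memorylessness of an exponential waiting time (a sketch, with λ the assumed constant rate of the fluke event):

```latex
P(T > s + t \mid T > s) \;=\; \frac{e^{-\lambda(s+t)}}{e^{-\lambda s}} \;=\; e^{-\lambda t} \;=\; P(T > t),
```

so with a mean wait of 1/λ = 12 billion years, having already waited 12 billion years leaves the expected remaining wait at 12 billion years.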
2Luke_A_Somers
I don't get the reason panspermia needs to be involved. Simply having a minimum metallicity threshold for getting started would do the job.

This all looks clever, apart from the fact that the AI becomes completely indifferent to arbitrary changes in its value system. The way you describe it, the AI will happily and uncomplainingly accept a switch from a friendly v (such as promoting human survival, welfare and settlement of Galaxy) to an almost arbitrary w (such as making paperclips), just by pushing the right "update" buttons. An immediate worry is about who will be in charge of the update routine, and what happens if they are corrupt or make a mistake: if the AI is friendly, then i... (read more)

Or moving from conspiracy land, big budget cuts to climate research starting in 2009 might have something to do with it.

P.S. Since you started this sub-thread and are clearly still following it, are you going to retract your claims that CRU predicted "no more snow in Britain" or that Hansen predicted Manhattan would be underwater by now? Or are you just going to re-introduce those snippets in a future conversation, and hope no-one checks?

9Eugine_Nier
I was going from memory; now that I've tracked down the actual links I'd modify the claims to what was actually said, i.e., snowfalls becoming exceedingly rare and the West Side Highway being underwater.

Seems like a bad proxy to me. Is snowfall really that hard a metric to find...?

Presumably not, though since I'm not making up Met Office evidence (and don't have time to do my own analysis) I can only comment on the graphs which they themselves chose to plot in 2009. Snowfall was not one of those graphs (whereas it was in 2006).

However, the graphs of mean winter temperature, maximum winter temperature, and minimum winter temperature all point to the same trend as the air frost and heating-degree-day graphs. It would be surprising if numbers of days of ... (read more)

-4Eugine_Nier
Interesting. I wonder why they're no longer plotting some trends. Maybe because it's too hard to fit them into their preferred narrative.

I'm sorry, but you are still making inaccurate claims about what CRU predicted and over what timescales.

The 20 year prediction referred specifically to heavy snow becoming unexpected and causing chaos when it happens. I see no reason at all to believe that will be false, or that it will have only a slim chance of being true.

The vague "few year" claim referred to snow becoming "rare and exciting". But arguably, that was already true in 2000 at the time of the article (which was indeed kind of the point of the article). So it's not necess... (read more)

P.S. On the more technical points, the 2009 reports do not appear to plot the number of days of snow cover or cold spells (unlike the 2006 report) so I simply referred to the closest proxies which are plotted.

The "filtering" is indeed a form of local smoothing transform (other parts of the report refer to decadal smoothing) and this would explains why the graphs stop in 2007, rather than 2009: you really need a few years either side of the plotted year to do the smoothing. I can't see any evidence that the decline in the 80s was somehow factored into the plot in the 2000s.

2gwern
Seems like a bad proxy to me. Is snowfall really that hard a metric to find...? If the window is a decade back then the '90s will still be affecting the '00s since it only goes up to 2007. I think it may depend on how exactly the smoothing was being done. If it's a smoothing like a LOESS then I'd expect the '00s raw data to be pulled up to the somewhat higher '90s data; but if the regression best-fit line is involved then I'd expect the other direction.

I'm sorry, I didn't realize 'within a few years' was so vague in English that it could easily embrace decades and I'm being tendentious in thinking that after 14 years we can safely call that prediction failed.

Got it - so the semantics of "a few years" is what you are basing the "failed prediction" claim on. Fair enough.

I have to say though that I read the "few years" part as an imprecise period relating to an imprecise qualitative prediction (that snow would become "rare and exciting"). Which as far as my family... (read more)

-3gwern
No, that's just one of the failed predictions I am pointing out, which you are weirdly carping on because it didn't come with an exact number despite it being perfectly clear in ordinary language & every context that we are well past anything that could be called 'a few years'. Maybe your family should look at those Met charts you provided about 'air frost' and note how small the decline has been in the relevant period. And '20 years' could be 200 years, because y'know, they think on such a long horizon. And maybe the 'days' in Genesis were actually billions of years and it's an accurate description of the Big Bang! So we are agreed that the 20 year prediction is going to be false just like the others and there was no point discussing how there's still a chance.

Sigh... The only dated prediction in the entire article related to 20 years, not 14 years, and the claim for 20 years was that snow would "probably" cause chaos then. Which you've just agreed is very likely to be true (based on some recent winters where some unexpected snow did cause chaos), but perhaps not that surprising (the quote did not in fact claim there would be more chaos than in the 1980s and 1990s).

All other claims had no specific dates, except to suggest generational changes (alluding to a coming generation of kids who would not have ... (read more)

1gwern
I'm sorry, I didn't realize 'within a few years' was so vague in English that it could easily embrace decades and I'm being tendentious in thinking that after 14 years we can safely call that prediction failed. So first, that's 'air frost' ("usually defined as the air temperature being below freezing point of water at a height of at least one metre above the ground"), which is not what was in question. Second, looking at 2.32, the decline 2000-2007 (when the graph ends, so fully half the period in question when warming seems to have stopped) is far from impressive. Third, what's with it being 'filtered'? some sort of linear smoothing borrowing from the steeper-looking decline 1984-2000? No, I'm fine with your chosen smoothed graphs indicating only a shallow decline at best 2000-2007. No need to look just at 2010-2014, although certainly more recent data would probably help here. That sounds like wishful thinking. In those graphs, is there any 5-year period which if repeated would abruptly vindicate the confident predictions from 2000 that snow would soon be a thing of the past in England?

What's the date?

By your reaction, and the selective down votes, I have apparently fallen asleep, it is the 2020s already, and a 20-year prediction is already falsified.

But in answer to your questions:

A) Heavy snow does indeed already cause chaos in England when it happens (just google the last few years)

B) My kids do indeed find snow a rare and exciting event (in fact there were zero days of snow here last winter, and only a few days the winter before)

C) While my kids do have a bit of firsthand knowledge of snow, it is vastly less than my own experience... (read more)

2gwern
Well, all the quotes I gave were drawn from http://www.independent.co.uk/environment/snowfalls-are-now-just-a-thing-of-the-past-724017.html which was 14 years ago. That sounds like it'd cover 'within a few years'. And as for the exact 20 year forecast of 2010, well, that's just 6 years away. Not a lot of time to catch up. Yes, looks like the usual chaos you could find in the '80s and '90s to which the predicted 'chaos' was being compared as being greater. And has your region changed much? And is your anecdote very trustworthy compared to the nation-wide changes in snowfall since 2000 (not much) when these predictions were made?

I think we have agreement that:

A) The newspaper headline "Snowfalls are now just a thing of the past" was incorrect

B) The Climatic Research Unit never actually made such a prediction

C) The only quoted statement with a timeline was for a period of 20 years, and spoke of heavy snow becoming rarer (rather than vanishing)

D) This was an extrapolation of a longer term trend, which continued into the early 2000s (using Met Office data published in 2006, of course after the Independent story)

E) It is impossible to use short periods (~10 years since 2006)... (read more)

1gwern
From the article: Does heavy snow cause chaos in England now? Is snow a 'very rare and exciting event' in England now? If we asked them, would they not know first-hand what snow is, anymore than they know first-hand what wolves are? You can't?

"Over the 2000s" is certainly too short a period to reach significant conclusions. However the longer term trends are pretty clear. See this Met Office Report from 2006.

Figure 8 shows a big drop in the length of cold spells since the 1960s. Figure 13 shows the drop in annual days of snow cover. The trend looks consistent across the country.

6gwern
I think the first question here is whether we have reached agreement on the forecasts being wrong, not what excuses should be made or conclusions drawn from said wrongness. Yes, I'm sure they were, and that those were the basis for the mistaken prediction. Your point?

Regarding the wine point, it is doubtful if wine grapes ever grew in Newfoundland, as the Norse term "Vinland" may well refer to a larger area. From the Wikipedia article:

the southernmost limit of the Norse exploration remains a subject of intense speculation. Samuel Eliot Morison (1971) suggested the southern part of Newfoundland; Erik Wahlgren (1986) Miramichi Bay in New Brunswick; and Icelandic climate specialist Pall Bergthorsson (1997) proposed New York City.[26] The insistence in all the main historical sources that grapes were found in V

... (read more)

Reading your referenced article (Independent 2000):

Heavy snow will return occasionally, says Dr Viner, but when it does we will be unprepared. "We're really going to get caught out. Snow will probably cause chaos in 20 years time," he said.

Clearly the Climatic Research Unit was not predicting no more snow in Britain by 2014.

Regarding the alleged "West Side Highway underwater" prediction, see Skeptical Science. It appears Hansen's original prediction timeframe was 40 years not 20 years, and conditional on a doubling of CO2 by then.

7gwern
Yes, but some googling suggests that average snowfall in England hasn't changed very much over the 2000s, which doesn't seem consistent with the linked article.

Note that this also messes up counterfactual accounts of knowledge as in "A is true and I believe A; but if A were not true then I would not believe A". (If I were not insane, then I would not believe I am Nero, so I would not believe I am insane.)

We likely need some notion of "reliability" or "reliable processes" in an account of knowledge, like "A is true and I believe A and my belief in A arises through a reliable process". Believing things through insanity is not a reliable process.

Gettier problems arise because processes that are usually reliable can become unreliable in some (rare) circumstances, but still (by even rarer chance) get the right answers.

3Jiro
The insanity example is not original to me (although I can't seem to Google it up right now). Using reliable processes isn't original, either, and if that actually worked, the Gettier Problem wouldn't be a problem.

Except that acting to prevent other AIs from being built would also encroach on human liberty, and probably in a very major way if it was to be effective! The AI might conclude from this that liberty is a lost cause in the long run, but it is still better to have a few extra years of liberty (until the next AI gets built), rather than ending it right now (through its own powerful actions).

Other provocative questions: how much is liberty really a goal in human values (when taking the CEV for humanity as a whole, not just liberal intellectuals)? How much is ... (read more)

0Stuart_Armstrong
A certain impression of freedom is valued by humans, but we don't seem to want total freedom as a terminal goal.

This also creates some interesting problems... Suppose a very powerful AI is given human liberty as a goal (or discovers that this is a goal using coherent extrapolated volition). Then it could quickly notice that its own existence is a serious threat to that goal, and promptly destroy itself!

0PhilosophyTutor
I think Asimov did this first with his Multivac stories, although rather than promptly destroy itself Multivac executed a long-term plan to phase itself out.
1Stuart_Armstrong
yes, but what about other AIs that might be created, maybe without liberty as a top goal - it would need to act to prevent them from being built! It's unlikely that "destroy itself" is the best option it can find...

One issue here is that worlds with an "almost-friendly" AI (one whose friendliness was botched in some respect) may end up looking like siren or marketing worlds.

In that case, worlds as bad as sirens will be rather too common in the search space (because AIs with botched friendliness are more likely than AIs with true friendliness) and a satisficing approach won't work.

2Stuart_Armstrong
Interesting thought there...

Well you can make such comparisons if you allow for empathic preferences (imagine placing yourself in someone else's position, and ask how good or bad that would be, relative to some other position). Also the fact that human behavior doesn't perfectly fit a utility function is not in itself a huge issue: just apply a best fit function (this is the "revealed preference" approach to utility).

Ken Binmore has a rather good paper on this topic, see here.
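One concrete version of the "best fit" point above (a sketch of a random-utility / Bradley-Terry-style fit; the options, data and numbers are all made up for illustration): simulate noisy pairwise choices from hidden utilities, then recover those utilities by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
true_u = np.array([0.0, 1.0, 2.5])   # hidden "true" utilities of three options (assumed)

# Simulate noisy pairwise choices: i is chosen over j with logistic probability
# in the utility difference u_i - u_j.
choices = []
for _ in range(2000):
    i, j = rng.choice(3, size=2, replace=False)
    p = 1.0 / (1.0 + np.exp(-(true_u[i] - true_u[j])))
    choices.append((i, j) if rng.random() < p else (j, i))

# "Best fit" utilities by gradient ascent on the log-likelihood (option 0 pinned
# to 0, since only utility differences are identified by choice behaviour).
u = np.zeros(3)
for _ in range(500):
    grad = np.zeros(3)
    for winner, loser in choices:
        p = 1.0 / (1.0 + np.exp(-(u[winner] - u[loser])))
        grad[winner] += 1.0 - p
        grad[loser] -= 1.0 - p
    u += 0.001 * grad
    u -= u[0]

print(np.round(u, 2))   # approximately recovers the ordering and spacing of true_u
```

The fit is never perfect, but that is the point: the recovered utilities are the best available summary of the noisy choice data.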

OK, I also got a "non-cheat" solution: unfortunately, it is non-constructive and uses the Nkvbz bs Pubvpr, so it still feels like a bit of a cheat. Is there a solution which doesn't rely on that (or is it possible to show there is no solution in such a case?)

Oh dear, I suppose that rules out other "cheats" then: such as prisoner n guessing after n seconds. At any point in time, only finitely many have guessed, so only finitely many have guessed wrong. Hence the prisoners can never be executed. (Though they can never be released either.)

I suspect an April Fool:

Cevfbare a+1 gnxrf gur ung sebz cevfbare a naq chgf vg ba uvf bja urnq. Gura nyy cevfbaref (ncneg sebz cevfbare 1) thrff gur pbybe pbeerpgyl!

1TsviBT
No April Fool here.

As one example, imagine a long chain of possible people whose experiences and memories are indistinguishable from immediate neighbours in the chain (and they are counterparts of their neighbours). But there is a cumulative "drift" along the chain, so that the ends are very different from each other (and not counterparts).

UDT doesn't seem to work this way. In UDT, "you" are not a physical entity but an abstract decision algorithm. This abstract decision algorithm is correlated to different extent with different physical entities in d

... (read more)
0Squark
Actually I was speaking of a different problem, namely the philosophical problem of which abstract algorithms should be regarded as conscious (assuming the concept makes sense at all). The identification of oneself's algorithm is an introspective operation whose definition is not obvious for humans. For AIs the situation is clearer if we assume the AI has access to its own source code.

It is not the case if the money can be utilized in a manner with long term impact.

OK, I was using $ here as a proxy for utils, but technically you're right: the bet should be expressed in utils (as for the general definition of a chance that I gave in my comment). Or if you don't know how to bet in utils, use another proxy which is a consumptive good and can't be invested (e.g. chocolate bars or vouchers for a cinema trip this week). A final loop-hole is the time discounting: the real versions of you mostly live earlier than the sim versions of you, so ... (read more)

0Squark
It wouldn't be exactly twice but you're more or less right. However, it has no direct relation to probability. To see this, imagine you're a paperclip maximizer. In this case you don't care about torture or anything of the sort: you only care about paperclips. So your utility function specifies a way of counting paperclips but no way of counting copies of you. From another angle, imagine your two simulations are offered a bet. How should they count themselves? Obviously it depends on the rules of the bet: whether the payoff is handed out once or twice. Therefore, the counting is ambiguous. What you're trying to do is writing the utility function as a convex linear combination of utility functions associated with different copies of you. Once you accomplish that, the coefficients of the combination can be interpreted as probabilities. However, there is no such canonical decomposition.

So: if a bet is offered that you are a sim (in some form of computronium) and it becomes possible to test that (and so decide the bet one way or another), you would bet heavily on being a sim?

It depends on the stakes of the bet.

I thought we discussed an example earlier in the thread? The gambler pays $1000 if not in a simulation; the bookmaker pays $1 if the gambler is in a simulation. In terms of expected utility, it is better for "you" (that is, all linked instances of you) to take the gamble, even if the vast majority of light-cones don... (read more)
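The expected-utility arithmetic for that bet, under the assumption that all linked copies decide the same way and each copy's winnings count equally (illustrative only):

```latex
\text{total payoff} \;=\; N_{\mathrm{sim}} \cdot \$1 \;-\; N_{\mathrm{real}} \cdot \$1000 \;>\; 0 \quad\text{whenever}\quad N_{\mathrm{sim}} > 1000\, N_{\mathrm{real}},
```

so the gamble is favourable precisely when simulated copies outnumber non-simulated ones by more than the 1000:1 payout ratio.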

2Squark
It is not the case if the money can be utilized in a manner with long term impact. This doesn't give an unambiguous recipe to compute probabilities since it depends on how the results of the bets are accumulated to influence utility. An unambiguous recipe cannot exist since it would have to give precise answers to ambiguous questions such as: if there are two identical simulations of you running on two computers, should they be counted as two copies or one? UDT doesn't seem to work this way. In UDT, "you" are not a physical entity but an abstract decision algorithm. This abstract decision algorithm is correlated to different extent with different physical entities in different worlds. This leads to the question of whether some algorithms are more "conscious" than others. I don't think UDT currently has an answer for this, but neither do other frameworks. If we think of knowledge as a layered pie, with lower layers corresponding to knowledge which is more "fundamental", then somewhere near the bottom we have paradigms of reasoning such as Occam's razor / Solomonoff induction and UDT. Below them lie "human reasoning axioms" which are something we cannot formalize due to our limited introspection ability. In fact the paradigms of reasoning are our current best efforts at formalizing this intuition. However, when we build an AI we need to use something formal, we cannot just transfer our reasoning axioms to it (at least I don't know how to do it; meseems every way to do it would be "ingenuine" since it would be based on a formalism). So, for the AI, UDT (or whatever formalism we use) is the lowest layer. Maybe it's a philosophical limitation of any AGI, but I doubt it can be overcome and I doubt it's a good reason not to build an (F)AI.

I don't think it does. If we are not in a sim, our actions have potentially huge impact since they can affect the probability and the properties of a hypothetical expanded post-human civilization.

So: if a bet is offered that you are a sim (in some form of computronium) and it becomes possible to test that (and so decide the bet one way or another), you would bet heavily on being a sim? But on the off-chance that you are not a sim, you're going to make decisions as if you were in the real world, because those decisions (when suitably generalized across a... (read more)

2Squark
It depends on the stakes of the bet. It's not an "off-chance". It is meaningless to speak of the "chance I am a sim": some copies of me are sims, some copies of me are not sims. It surely can: just give more weight to humans of a very particular type ("you"). Subjective expectations are meaningless in UDT. So there is no "what we should expect to see". Does it have to stay dogmatically committed to Occam's razor in the face of whatever it sees? If not, how would it arrive at a replacement without using Occam's razor? There must be some axioms at the basis of any reasoning system.

No, it can be located absolutely anywhere. However you're right that the light cones with vertex close to the Big Bang will probably have large weight due to low K-complexity.

Ah, I see what you're getting at. If the vertex is at the Big Bang, then the shortest programs basically simulate a history of the observable universe. Just start from a description of the laws of physics and some (low entropy) initial conditions, then read in random bits whenever there is an increase in entropy. (For technical reasons the programs will also need to simulate a slightly lar... (read more)

0Squark
In some sense it does, but we must be wary of technicalities. In initial singularity models I'm not sure it makes sense to speak of "light cone with vertex in singularity" and it certainly doesn't make sense to speak of a privileged point in space. In eternal inflation models there is no singularity so it might make sense to speak of the "Big Bang" point in space-time, however it is slightly "fuzzy". I don't think it does. If we are not in a sim, our actions have potentially huge impact since they can affect the probability and the properties of a hypothetical expanded post-human civilization. In UDT it doesn't make sense to speak of what "actually exists". Everything exists, you just assign different weights to different parts of "everything" when computing utility. The "U" in UDT is for "updateless" which means that you don't update on being in a certain branch of the wavefunction to conclude other branches "don't exist", otherwise you lose in counterfactual mugging.

As a result, the effective discount falls off as 2^{-Kolmogorov complexity of t} which is only slightly faster than 1/t.

It is about 1/t x 1/log t x 1/log log t etc. for most values of t (taking base 2 logarithms). There are exceptions for very regular values of t.
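In symbols (a sketch using prefix complexity; the constants depend on the universal machine, and highly compressible values of t, such as powers of two, are the "very regular" exceptions):

```latex
K(t) \;\approx\; \log_2 t + \log_2\log_2 t + \log_2\log_2\log_2 t + \cdots + O(1) \quad\Longrightarrow\quad 2^{-K(t)} \;\approx\; \frac{1}{t} \cdot \frac{1}{\log_2 t} \cdot \frac{1}{\log_2\log_2 t} \cdots
```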

Incidentally, I've been thinking about a similar weighting approach towards anthropic reasoning, and it seems to avoid a strong form of the Doomsday Argument (one where we bet heavily against our civilisation expanding). Imagine listing all the observers (or observer moments) in order of appea... (read more)
