Comment author: drnickbone 18 March 2015 08:45:19AM 4 points

Consider the following decision problem which I call the "UDT anti-Newcomb problem". Omega is putting money into boxes by the usual algorithm, with one exception. It isn't simulating the player at all. Instead, it simulates what a UDT agent would do in the player's place.

This was one of my problematic problems for TDT. I also discussed some Sneaky Strategies which could allow TDT, UDT or similar agents to beat the problem.
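For concreteness, here is a toy payoff sketch of the quoted problem. The dollar amounts, and the assumption that the simulated UDT agent one-boxes, are illustrative choices rather than part of the problem statement:

    # Toy sketch of the "UDT anti-Newcomb problem": Omega fills box B based on
    # a simulation of a UDT agent in the player's place, not of the player.
    # Assumed (illustrative) payoffs: box A always holds $1,000; box B holds
    # $1,000,000 iff the simulated UDT agent one-boxes.

    def payoff(player_one_boxes, simulated_udt_one_boxes):
        box_b = 1_000_000 if simulated_udt_one_boxes else 0
        return box_b if player_one_boxes else box_b + 1_000

    # For a UDT player, its own choice and the simulation's output are the
    # same computation, so the two arguments move together:
    print(payoff(True, True))     # UDT one-boxes:    1,000,000
    print(payoff(False, False))   # UDT two-boxes:    1,000

    # For a non-UDT player, the simulation's output (one-boxing, by
    # assumption) is unaffected by the player's own choice:
    print(payoff(True, True))     # player one-boxes: 1,000,000
    print(payoff(False, True))    # player two-boxes: 1,001,000

The last two lines are the awkward part: a player whose choice isn't tied to the simulation can simply take both boxes and come out ahead.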

Comment author: agilecaveman 11 March 2015 04:59:20AM 8 points

Maybe this has been said before, but here is a simple idea:

Directly specify a utility function U which you are not sure about, but also discount the AI's own power as part of it. So the new utility function is U - power(AI), where power is a fast-growing function of a mix of the AI's source code complexity, intelligence, hardware, and electricity costs. One needs to be careful about how to define "self" in this case, as a careful redefinition by the AI will remove the controls.
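A minimal sketch of the shape of this objective, assuming crude resource proxies and an arbitrary exponent (all placeholder choices, not a concrete proposal):

    # Sketch of the penalized objective U - power(AI): the agent scores plans
    # by raw utility minus a fast-growing function of its own resource use.
    # The proxies (code size, compute, electricity) and the exponent are
    # illustrative placeholders.

    def power(code_bytes, flops, watts, alpha=2.0):
        # Fast-growing in the agent's resource footprint; any convex,
        # rapidly increasing combination would play the same role.
        resources = code_bytes / 1e6 + flops / 1e15 + watts / 1e3
        return resources ** alpha

    def penalized_utility(u, code_bytes, flops, watts):
        return u - power(code_bytes, flops, watts)

    # A plan with slightly higher raw utility but massive self-expansion
    # scores worse than a modest plan:
    modest = penalized_utility(u=10.0, code_bytes=1e6, flops=1e15, watts=500)
    expansive = penalized_utility(u=12.0, code_bytes=2e8, flops=1e18, watts=1e6)
    print(modest, expansive)   # the modest plan wins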

One also needs to consider the creation of sub-agents with properly penalized utilities, since in a naive implementation sub-agents will just optimize U, without restrictions.

This is likely not enough, but it has the advantage that the AI has no a priori drive to become stronger, which is better than boxing an AI that does.

Comment author: drnickbone 14 March 2015 09:04:27PM 1 point

Presumably anything caused to exist by the AI (including copies, sub-agents, other AIs) would have to count as part of the power(AI) term? So this stops the AI spawning monsters which simply maximise U.

One problem is that any really valuable things (under U) are also likely to require high power. This could lead to an AI which knows how to cure cancer but won't tell anyone (because that will have a very high impact, hence a big power(AI) term). That situation is not going to be stable; the creators will find it irresistible to hack the U and get it to speak up.

Comment author: Brian_Tomasik 22 November 2014 11:28:01AM 0 points

One of the most striking things about anthropics is that (seemingly) whatever approach is taken, there are very weird conclusions.

Yes. :) The first paragraph here identifies at least one problem with every anthropic theory I'm aware of.

Comment author: drnickbone 14 March 2015 08:24:49PM 0 points

I had a look at this: the KCA (Kolmogorov Complexity) approach seems to match my own thoughts best.

I'm not convinced about the "George Washington" objection. It strikes me that a program which extracts George Washington as an observer from inside a wider program "u" (modelling the universe) wouldn't be significantly shorter than a program which extracts any other human observer living at about the same time. Or indeed, any other animal meeting some crude definition of an observer.

Searching for features of human interest (like "leader of a nation") is likely to be pretty complicated, and require a long program. To reduce the program size as much as possible, it ought to just scan for physical quantities which are easy to specify but very diagnostic of an observer. For example, scan for a physical mass with persistent low entropy compared to its surroundings, persistent matter and energy throughput (low entropy in, high entropy out, maintaining its own low entropy state), a large number of internally structured electrical discharges, and high correlation between said discharges and events surrounding said mass. The program then builds a long list of such "observers" encountered while stepping through u, and simply picks out the nth entry on the list, giving the "nth" observer complexity about K(n). Unless George Washington happened to be a very special n (why would he be?), he would be no simpler to find than anyone else.
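A schematic sketch of such an extractor, with the physics stubbed out as toy attributes; the point is only that nothing specific to any particular observer appears in it, just the index n:

    from dataclasses import dataclass
    from typing import Iterable, Iterator

    @dataclass
    class Region:
        low_entropy: bool   # persistently low entropy compared to surroundings
        throughput: bool    # matter/energy in, higher entropy out
        discharges: bool    # internally structured discharges correlated with
                            # events surrounding the mass

    def is_observer(r: Region) -> bool:
        return r.low_entropy and r.throughput and r.discharges

    def observers(universe: Iterable[Region]) -> Iterator[Region]:
        # Step through the universe program u, listing candidate "observers".
        return (r for r in universe if is_observer(r))

    def nth_observer(universe: Iterable[Region], n: int) -> Region:
        # Pick out the nth entry on the list; the only agent-specific input
        # is n itself, so the extractor's length is dominated by K(n).
        for i, r in enumerate(observers(universe)):
            if i == n:
                return r
        raise IndexError("fewer than n+1 observers found")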

Comment author: Brian_Tomasik 15 November 2014 06:20:41AM * 2 points

Yes, that's right. Note that SIA also favors sim hypotheses, but it does so less strongly because it doesn't care whether the sims are of Earth-like humans or of weirder creatures.

Here's a note I wrote to myself yesterday:

Like SIA, my PSA anthropics favors the sim arg more strongly than normal anthropics does.

The sim arg works regardless of one's anthropic theory because it requires only a principle of indifference over indistinguishable experiences. But it's a trilemma, so it might be that humans go extinct or post-humans don't run early-seeming sims.

Given the existence of aliens and other universes, the ordinary sim arg pushes more strongly for us being a sim because even if humans go extinct or don't run sims, whichever civilization out there runs lots of sims should have lots of sims of minds like ours, so we should be in their sims.

PSA doesn't even need aliens. It directly penalizes hypotheses that predict fewer copies of us in a given region of spacetime. Say we're deciding between

H1: no sims of us

and

H2: 1 billion sims of us.

H1 would have a billion-fold bigger probability penalty than H2. Even if H2 started out being millions of times less probable than H1, it would end up being hundreds of times more probable.
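Spelling out that arithmetic (the prior ratio of five million is an arbitrary stand-in for "millions of times less probable"):

    prior_H1, prior_H2 = 1.0, 1.0 / 5_000_000     # H2 starts 5 million times less probable
    copies_H1, copies_H2 = 1, 1_000_000_000       # copies of us predicted by each hypothesis

    # PSA-style weighting: multiply each prior by the number of copies of us
    # that the hypothesis predicts, then compare.
    posterior_odds_H2_vs_H1 = (prior_H2 * copies_H2) / (prior_H1 * copies_H1)
    print(posterior_odds_H2_vs_H1)   # 200.0 -- H2 ends up hundreds of times more probable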

Also note that even if we're not in a sim, PSA, like SIA, yields Katja's doomsday argument based on the Great Filter.

Either way it looks very unlikely there will be a far future, ignoring model uncertainty and unknown unknowns.

Comment author: drnickbone 21 November 2014 08:52:01PM 1 point

Upvoted for acknowledging a counterintuitive consequence, and "biting the bullet".

One of the most striking things about anthropics is that (seemingly) whatever approach is taken, there are very weird conclusions. For example: Doomsday arguments, Simulation arguments, Boltzmann brains, or a priori certainties that the universe is infinite. Sometimes all at once.

Comment author: drnickbone 14 November 2014 09:06:38PM 19 points

Taken survey.

Comment author: drnickbone 14 November 2014 08:26:21PM 2 points

If I understand correctly, this approach to anthropics strongly favours a simulation hypothesis: the universe is most likely densely packed with computing material ("computronium") and much of the computational resource is dedicated to simulating beings like us. Further, it also supports a form of Doomsday Hypothesis: simulations mostly get switched off before they start to simulate lots of post-human people (who are not like us) and the resource is then assigned to running new simulations (back at a human level).

Have I misunderstood?

Comment author: drnickbone 12 August 2014 10:37:09PM * 1 point

One very simple resolution: observing a white shoe (or yellow banana, or indeed anything which is not a raven) very slightly increases the probability of the hypothesis "There are no ravens left to observe: you've seen all of them". Under the assumption that all observed ravens were black, this "seen-em-all" hypothesis then clearly implies "All ravens are black". So non-ravens are very mild evidence for the universal blackness of ravens, and there is no paradox after all.

I find this resolution quite intuitive.
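A toy Bayesian version of that update, under the simplifying assumption that the only difference the "seen-em-all" hypothesis makes is a slightly higher chance that the next observed object is a non-raven:

    def update_on_nonraven(prior_S, f=0.001):
        # S = "there are no ravens left to observe". Under S the next observed
        # object is a non-raven with probability 1; under not-S a small
        # fraction f of the objects still to be observed are ravens.
        p_nonraven_given_S = 1.0
        p_nonraven_given_notS = 1.0 - f
        joint_S = p_nonraven_given_S * prior_S
        evidence = joint_S + p_nonraven_given_notS * (1.0 - prior_S)
        return joint_S / evidence

    print(update_on_nonraven(0.10))   # ~0.10009: a very slight increase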

Comment author: drnickbone 09 July 2014 08:42:55PM * 1 point

Actually, it's somewhat unclear whether the IPCC scenarios did better than a "no change" model -- it is certainly true over the short time period, but perhaps not over a longer time period where temperatures had moved in other directions.

There are certainly periods when temperatures moved in a negative direction (1940s-1970s), but then the radiative forcings over those periods (combination of natural and anthropogenic) were also negative. So climate models would also predict declining temperatures, which indeed is what they do "retrodict". A no-change model would be wrong for those periods as well.

Your most substantive point is that the complex models don't seem to be much more accurate than a simple forcing model (e.g. calculate net forcings from solar and various pollutant types, multiply by best estimate of climate sensitivity, and add a bit of lag since the system takes time to reach equilibrium; set sensitivity and lags empirically). I think that's true on the "broadest brush" level, but not for regional and temporal details e.g. warming at different latitudes, different seasons, land versus sea, northern versus southern hemisphere, day versus night, changes in maximum versus minimum temperatures, changes in temperature at different levels of the atmosphere etc. It's hard to get those details right without a good physical model of the climate system and associated general circulation model (which is where the complexity arises). My understanding is that the GCMs do largely get these things right, and make predictions in line with observations; much better than simple trend-fitting.
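For concreteness, a minimal sketch of the kind of simple forcing model meant here (the sensitivity and lag values are illustrative placeholders, not fitted estimates):

    def temperature_response(net_forcings_w_m2, sensitivity_c_per_w_m2=0.8,
                             lag_years=15.0):
        # Yearly temperature anomalies (deg C) from a yearly series of net
        # radiative forcings (W/m^2): forcing times sensitivity, relaxed
        # toward equilibrium with a first-order lag.
        temps, t = [], 0.0
        for f in net_forcings_w_m2:
            equilibrium = sensitivity_c_per_w_m2 * f
            t += (equilibrium - t) / lag_years
            temps.append(t)
        return temps

    # Example: forcing ramping linearly from 0 to 2 W/m^2 over a century.
    forcings = [2.0 * year / 100 for year in range(101)]
    print(round(temperature_response(forcings)[-1], 2))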

Comment author: drnickbone 09 July 2014 09:39:50PM * 0 points

P.S. If I draw one supportive conclusion from this discussion, it is that long-range climate forecasts are very likely to be wrong, simply because the inputs (radiative forcings) are impossible to forecast with any degree of accuracy.

Even if we'd had perfect GCMs in 1900, forecasts for the 20th century would likely have been very wrong: no one could have predicted the relative balance of CO2, other greenhouse gases and sulfates/aerosols (e.g. no one could have guessed the pattern of sudden sulfate growth after the 1940s, followed by levelling off after the 1970s). And natural factors like solar cycles, volcanoes and El Niño/La Niña wouldn't have been predictable either.

Similarly, changes in the 21st century could be very unexpected. Perhaps some new industrial process creates brand new pollutants with negative radiative forcing in the 2030s; but then the Amazon dies off in the 2040s, followed by a massive methane belch from the Arctic in the 2050s; then emergency geo-engineering goes into fashion in the 2070s (and out again in the 2080s); then in the 2090s there is a resurgence in coal, because the latest generation of solar panels has been discovered to be causing a weird new plague. Temperatures could be up and down like a yo-yo all century.

Comment author: VipulNaik 09 July 2014 08:24:56PM 0 points

Actually, it's somewhat unclear whether the IPCC scenarios did better than a "no change" model -- it is certainly true over the short time period, but perhaps not over a longer time period where temperatures had moved in other directions.

Co-author Green wrote a paper later claiming that the IPCC models did not do better than the no change model when tested over a broader time period:

http://www.kestencgreen.com/gas-improvements.pdf

But it's just a draft paper and I don't know if the author ever plans to clean it up or have it published.

I would really like to see more calibrations and scorings of the models from a pure outside view approach over longer time periods.

Armstrong was (perhaps wrongly) confident enough of his views that he decided to make a public bet claiming that the No Change scenario would beat out the other scenario. The bet is described at:

http://www.theclimatebet.com/

Overall, I have high confidence in the view that models of climate informed by some knowledge of climate should beat the No Change model, though a lot depends on the details of how the competition is framed (Armstrong's climate bet may have been rigged in favor of No Change). That said, it's not clear how well climate models can do relative to simple time series forecasting approaches or simple (linear trend from radiative forcing + cyclic trend from ocean currents) type approaches. The number of independent out-of-sample validations does not seem to be enough and the predictive power of complex models relative to simple curve-fitting models seems to be low (probably negative). So, I think that arguments that say "our most complex, sophisticated models show X" should be treated with suspicion and should not necessarily be given more credence than arguments that rely on simple models and historical observations.

Comment author: VipulNaik 09 July 2014 06:58:37PM * 1 point

In light of the portions I quoted from Armstrong and Green's paper, I'll look at Gavin Schmidt's post:

Principle 1: When moving into a new field, don’t assume you know everything about it because you read a review and none of the primary literature.

Score: -2 G+A appear to have only read one chapter of the IPCC report (Chap 8), and an un-peer reviewed hatchet job on the Stern report. Not a very good start…

The paper does cite many sources other than just the IPCC and the "hatchet job" on the Stern Report, including sources that evaluate climate models and their quality in general. ChrisC notes that the authors fail to cite the ~788 references for the IPCC Chapter 8. The authors claim to have a bibliography on their website that includes the full list of references given to them by all academics who suggested references. Unfortunately, as I noted in my earlier comment, the link to the bibliography from http://www.forecastingprinciples.com/index.php?option=com_content&view=article&id=78&Itemid=107 is broken. This doesn't reflect well on the authors (the site on the whole is a mess, with many broken links). Assuming, however, that the authors had put up the bibliography and that it was available as promised in the paper, this critique seems off the mark (though I'd have to see the bibliography to know for sure).

Principle 2: Talk to people who are doing what you are concerned about.

Score: -2 Of the roughly 20 climate modelling groups in the world, and hundreds of associated researchers, G+A appear to have talked to none of them. Strike 2.

This seems patently false given the contents of the paper as I quoted it, and the list of experts that they sought. In fact, it seems like such a major error that I have no idea how Schmidt could have made it if he'd read the paper. (Perhaps he had a more nuanced critique to offer, e.g., that the authors' survey didn't ask enough questions, or they should have tried harder, or contacted more people. But the critique as offered here smacks of incompetence or malice). [Unless Schmidt was reading an older version of the paper that didn't mention the survey at all. But I doubt that even if he was looking at an old version of the paper, it omitted all references to the survey.]

Principle 3: Be humble. If something initially doesn’t make sense, it is more likely that you’ve mis-understood than the entire field is wrong.

Score: -2 For instance, G+A appear to think that climate models are not tested on ‘out of sample’ data (they gave that a ‘-2′). On the contrary, the models are used for many situations that they were not tuned for, paleo-climate changes (mid Holocene, last glacial maximum, 8.2 kyr event) being a good example. Similarly, model projections for the future have been matched with actual data – for instance, forecasting the effects of Pinatubo ahead of time, or Hansen’s early projections. The amount of ‘out of sample’ testing is actually huge, but the confusion stems from G+A not being aware of what the ‘sample’ data actually consists of (mainly present day climatology). Another example is that G+A appear to think that GCMs use the history of temperature changes to make their projections since they suggest leaving some of it out as a validation. But this is just not so, as we discussed more thoroughly in a recent thread.

First off, retrospective "predictions" of things that people already tacitly know, even though those things aren't explicitly used in tuning the models, are not that compelling as evidence of forecasting skill.

Secondly, it's possible (and likely) that Armstrong and Green missed some out-of-sample tests and validations that had been performed in the climate science arena. While part of this can be laid at their feet, part of it also reflects poor documentation by climate scientists of exactly how they were going about their testing. I read the same IPCC AR4 chapter that Armstrong and Green did, and I found it quite unclear on the forecasting side of things (compared to other papers I've read that judge forecast skill, in weather and short-term climate forecasting, macroeconomic forecasting, and business forecasting). This is similar to the sloppy code problem.

Thirdly, the climate scientists whom Armstrong and Green attempted to engage could have been more forthcoming (not Gavin Schmidt's fault; he wasn't included in the list, and the response rate appears to have been low from mainstream scientists as well as skeptics, so it's not just a problem of the climate science mainstream).

Overall, I'd like to know more details of the survey responses and Armstrong and Green's methodology, and it would be good if they combined their proclaimed commitment to openness with actually having working links on their websites. But Schmidt's critique doesn't reflect too well on him, even if Armstrong and Green were wrong.

Now, to ChrisC's comment:

Call me crazy, but in my field of meteorology, we would never head to popular literature, much less the figgin internet, in order to evaluate the state of the art in science. You head to the scientific literature first and foremost. Since meteorology and climatology are not that different, I would struggle to see why it would be any different.

The authors also seem to put a large weight on “forecasting principles” developed in different fields. While there may be some valuable advice, and cross-field cooperation is to be encouraged, one should not assume that techniques developed in say, econometrics, port directly into climate science.

The authors also make much of a wild goose chase on google for sites matching their specific phrases, such as “global warming” AND “forecast principles”. I’m not sure what a lack of web sites would prove. They also seem to have skiped most of the literature cited in AR4 ch. 8 on model validation and climatology predictions.

Part of the authors' criticism was that the climate science mainstream hadn't paid enough attention to forecasting, or to formal evaluations of forecasting. So it's natural that they didn't find enough mainstream stuff to cite that was directly relevant to the questions at hand for them.

As for the Google search and Google Scholar search, these are standard tools for initiating an inquiry. I know, I've done it, and so has everybody else. It would be damning if the authors had relied only on such searches. But they surveyed climate scientists and worked their way through the IPCC Working Group Report. This may have been far short of full due diligence, but it isn't anywhere near as sloppy as Gavin Schmidt and ChrisC make it sound.

Comment author: drnickbone 09 July 2014 07:27:24PM * 1 point

Thanks for a comprehensive summary - that was helpful.

It seems that A&G contacted the working scientists to identify papers which (in the scientists' view) contained the most credible climate forecasts. Not many responded, but 30 referred to the recent (at the time) IPCC WG1 report, which in turn referenced and attempted to summarize over 700 primary papers. There also appear to have been a bunch of other papers cited by the surveyed scientists, but the site has lost them. So we're somewhat at a loss to decide which primary sources climate scientists find most credible/authoritative. (Which is a pity, because those would be worth rating, surely?)

However, A&G did their rating/scoring on the IPCC WG1 report, Chapter 8. But they didn't contact the climate scientists to help with this rating (or they did, but none of them answered?). They didn't attempt to dig into the 700 or so underlying primary papers, identify which of them contained climate forecasts and/or had been identified by the scientists as containing the most credible forecasts, and then rate those. Or even pick a random sample and rate those. All that does sound just a tad superficial.

What I find really bizarre is their site's conclusion that, because the IPCC got a low score by their preferred rating principles, a "no change" forecast is superior and more credible! That's really strange, since "no change" has historically done much worse as a predictor than any of the IPCC models.
