
Comment author: johnlawrenceaspden 07 April 2014 03:02:43PM 3 points [-]

Yes, but also remember that Turing's English, shy, and from King's College, home of a certain archness and dry wit. I think he's taking the piss, but the very ambiguity of it was why it appealed as a rationality quote. He's facing the evidence squarely, declaring his biases, taking the objection seriously, and yet there's still a profound feeling that he's defying the data. Or maybe not. Maybe I just read it that way because I don't buy telepathy.

Comment author: wuncidunci 08 April 2014 01:50:24PM *  8 points [-]

Hodges claims that Turing at least had some interest in telepathy and prophecies:

These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one’s ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies. The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be one of the first to go.

Readers might well have wondered whether he [Turing] really believed the evidence to be ‘overwhelming’, or whether this was a rather arch joke. In fact he was certainly impressed at the time by J. B. Rhine’s claims to have experimental proof of extra-sensory perception. It might have reflected his interest in dreams and prophecies and coincidences, but certainly was a case where for him, open-mindedness had to come before anything else; what was so had to come before what it was convenient to think. On the other hand, he could not make light, as less well-informed people could, of the inconsistency of these ideas with the principles of causality embodied in the existing ‘laws of physics’, and so well attested by experiment.

Alan Turing: The Enigma (Chapter 7)

Comment author: Tyrrell_McAllister 02 April 2014 05:53:47PM *  46 points [-]

The mathematician and Fields medalist Vladimir Voevodsky on using automated proof assistants in mathematics:

[Following the discovery of some errors in his earlier work:] I think it was at this moment that I largely stopped doing what is called “curiosity driven research” and started to think seriously about the future.

[...]

A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.

[...]

It soon became clear that the only real long-term solution to the problems that I encountered is to start using computers in the verification of mathematical reasoning.

[...]

Among mathematicians computer proof verification was almost a forbidden subject. A conversation started about the need for computer proof assistants would invariably drift to the Goedel Incompleteness Theorem (which has nothing to do with the actual problem) or to one or two cases of verification of already existing proofs, which were used only to demonstrate how impractical the whole idea was.

[...]

I now do my mathematics with a proof assistant and do not have to worry all the time about mistakes in my arguments or about how to convince others that my arguments are correct.

From a March 26, 2014 talk. Slides available here.
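For readers who haven't seen a proof assistant in action, here is a minimal sketch in Lean (Voevodsky's own work used Coq; this is only meant to convey the flavour): a proof is accepted only if the kernel can verify every inference, so nobody has to take the author's word for it.

```lean
-- A machine-checked proof: commutativity of addition on the naturals.
-- The kernel verifies each step; no referee has to trust the author.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A wrong claim is simply rejected: uncommented, the line below fails to compile.
-- theorem bogus (a b : Nat) : a + b = a * b := Nat.add_comm a b
```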

Comment author: wuncidunci 03 April 2014 08:26:53AM 6 points [-]

A video of the whole talk is available here.

Comment author: Vaniver 22 January 2014 10:24:51PM *  4 points [-]

And whence the blasphemy?

1265 people are in group A. 947 are in group B, which is completely contained in A. Of all the people in group A, 450 satisfy property C, whereas this is true for 602 people in group B, all of whom are also in group A. 602 is larger than 450, so something has gone wrong.
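In code, the violated invariant is just the monotonicity of counting over subsets. A minimal sketch in Python with the survey's reported numbers (the variable names are mine, not the survey's):

```python
# Reported numbers from the 2013 survey results post:
n_A = 1265        # respondents who reported charity donations (group A)
n_B = 947         # of those, respondents who also reported income (B, a subset of A)
zero_in_A = 450   # people in A reported as donating nothing (property C)
zero_in_B = 602   # people in B reported as donating nothing

# Since B is contained in A, everyone in B with property C is also in A
# with property C, so the count in B can never exceed the count in A.
print(n_B <= n_A)              # True: 947 <= 1265, as required
print(zero_in_B <= zero_in_A)  # False: 602 > 450 -- the impossibility
```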

Comment author: wuncidunci 23 January 2014 07:59:43AM 1 point [-]

Ahh, thank you.

In response to 2013 Survey Results
Comment author: AlexMennen 19 January 2014 06:51:55PM *  9 points [-]

On average, effective altruists (n = 412) donated $2503 to charity, and other people (n = 853) donated $523 - obviously a significant result.

There could be some measurement bias here. I was on the fence about whether I should identify myself as an effective altruist, but I had just been reminded that I hadn't donated any money to charity in the last year, and decided that I probably shouldn't be identifying as an effective altruist despite having philosophical agreements with the movement.

1265 people told us how much they give to charity; of those, 450 gave nothing. ... In order to calculate percent donated I divided charity donations by income in the 947 people helpful enough to give me both numbers. Of those 947, 602 donated nothing to charity, and so had a percent donated of 0.

This is blasphemy against Saint Boole.

Comment author: wuncidunci 22 January 2014 10:20:52PM 0 points [-]

Did you mean Saint Boole?

And whence the blasphemy?

Comment author: itaibn0 11 January 2014 01:26:54PM *  0 points [-]

It's worth mentioning that anyone with a strong argument against cryonics is likely to believe that you won't be persuaded by it (due to low base rates for these kinds of conversions). Thus the financial incentive is not as influential as you would like it to be.

Added: Relevant prediction

Comment author: wuncidunci 11 January 2014 01:32:00PM 2 points [-]

If someone believes they have a really good argument against cryonics, then even if it only has a 10% chance of working, that is $50 in expected gain for maybe an hour of work writing it up really well. That sounds well worth their time to me.
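The arithmetic behind the $50 figure, assuming (as that figure implies; it is not stated in this thread) a $500 prize for a persuasive argument:

```python
# Back-of-envelope expected value; the $500 prize is inferred, not quoted here.
p_persuades = 0.10   # the commenter's "10% chance of working"
prize = 500.0        # implied prize in dollars
print(p_persuades * prize)  # 50.0 -- expected gain for ~an hour of writing
```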

Comment author: Pfft 11 January 2014 01:07:08AM 4 points [-]

Also observe that 1900s ≠ 19th century, so they weren't that silly.

I guess gwern meant the construction was planned to take place in the 1900s.

Comment author: wuncidunci 11 January 2014 12:18:35PM *  3 points [-]

Quite possible. I didn't intend for that sentence to come across in a hostile way.

Since in Swedish we usually talk about the 1800s and the 1900s instead of the 19th and 20th centuries, I thought something could have been lost in translation somewhere between the original sources, the book by Kelly, and gwern's comment, which is itself ambiguous as to whether it is intended as (set aside an island for growing big trees for making wooden warships) (in the 1900s) or as (set aside an island for growing big trees for (making wooden warships in the 1900s)). (I assumed the former.)

Comment author: wuncidunci 10 January 2014 08:36:49PM 1 point [-]

If we assume a scenario without AGI and without a Hansonian upload economy, it seems quite likely that there are large, currently unexpected obstacles to both AGI and uploading. Computing power seems to be just about sufficient right now (if we look at supercomputers), so it probably isn't the problem. So the obstacle will probably be a conceptual limitation for AGI, and a scanning or conceptual limitation for uploads.

A conceptual limitation for uploads seems unlikely, because we're just taking a system, cutting it up into smaller pieces, and solving differential equations on a computer. Lots of small problems to solve, but no major conceptual ones. We could run into problems related to measuring quantum systems when doing the scanning (I believe Scott Aaronson wrote something about this suspicion recently). Note that this also puts a bound on the level of nanotechnology we could have achieved: if we had neuron-sized scanning robots, we would be able to scan a brain and start the Hansonian scenario. Note that this does not preclude slightly larger-scale manufacturing technologies, which would probably come from successive miniaturisations of 3D printers.
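To make "cutting it up into pieces and solving differential equations" concrete in miniature, here is a toy sketch: forward-Euler integration of a leaky integrate-and-fire neuron. Everything here is illustrative textbook material, not drawn from any actual emulation work.

```python
# Toy sketch: one neuron's membrane potential integrated with forward Euler.
# Parameters are illustrative, not taken from any real scan or model.
tau = 20.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # post-spike reset (mV)
drive = 18.0      # constant input, as mV of steady-state depolarisation
dt = 0.1          # integration time step (ms)

v = v_rest
spike_times = []
for step in range(10_000):                    # 1 second of simulated time
    v += dt * (-(v - v_rest) + drive) / tau   # Euler step of the membrane ODE
    if v >= v_thresh:                         # threshold crossing: record a spike
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 1 s")    # ~24 with these parameters
```

A whole-brain emulation would be this kind of step repeated across vastly many compartments, which is exactly why the problem looks like engineering rather than a conceptual puzzle.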

Conceptual difficulties in creating AGI are more or less expected by everyone around here, but in the case that AGI is delayed by over a century, we should get quite worried about other existential risks on our way there. Major contenders are global conflict and terrorism, especially involving nuclear, nanotechnological, or biological weapons. Even if nanotechnology never reaches the level described in science fiction, the bounds given above still allow enough development to make advanced weapons a question of blueprints and materials. Low-probability, huge-impact risks from global warming are also worth mentioning, if only to note that a lot of other people are already working on them.

What does this tell us about analysing long-term risks like the slithy toves? Well, I don't know anything about slithy toves, but let's look at the eugenics stuff discussed earlier and consider how it would influence the probability of major global conflicts: the question is not whether it would increase the risk of global conflict, but by how much. On the other hand, if AI safety were already taken care of, it would become a priority to develop AGI as soon as humanly possible, and then it would be really good if "humanly possible" were a sigma or so better than today. Still, it wouldn't be a huge gain, since most of the risks we would face at that point would be quite small in any given year (as it seems today; we could of course get other information on the way there). It's really quite hard to say what the proper balance between more intelligent people and more time available would be at that point. We could say that if we've already had a century to solve the problem, more time can't be that useful; or we could say that if we still haven't solved the problem in a century, there must be loads of sequential steps to get right, and we need all the time we can buy.

tl;dr: No AGI & no uploads => most X-risk comes from different types of conflict => eugenics or any kind of superhumans increases X-risk, due to the risk of war between enhanced and old-school humans.

Comment author: gwern 10 January 2014 04:51:50PM 23 points [-]

I'm amused that when I was reading this, it didn't even occur to me that this might be about global warming - I just assumed it was about eugenics.

But fundamentally, I do think that the basic observation is right: our planning horizons should be fairly short, because we just don't know enough about future technology and developments to spend large amounts of resources on things with low option value. There are countless past crises that did not materialize or were averted by other developments; to give an imperfect list off the top of my head: horse shit in the streets of cities, the looming ice age, the degradation of the environment with industrialization, Kessler catastrophe, Y2K, and Hannu Kari's Internet apocalypse.

I am reminded of a story Kelly tells in The Clock of the Long Now about a Scandinavian country which set aside an island for growing big trees for making wooden warships in the 1900s, which was completely wrong since by that point, warships had switched to metal, and so the island became a nature preserve; this is a cute story of how their grossly mistaken forecasts had an unanticipated benefit, but being mistaken is not usually a good way of going about life, and the story would be a lot less cute if the action had involved something more serious like taxation or military drafts or criminal justice or economy-wide regulation.

Comment author: wuncidunci 10 January 2014 07:42:53PM *  14 points [-]

a Scandinavian country which set aside an island for growing big trees for making wooden warships in the 1900s, which was completely wrong since by that point, warships had switched to metal, and so the island became a nature preserve;

This was probably Sweden planting lots of oaks in the early 19th century: 34,000 oaks were planted on Djurgården for shipbuilding in 1830. As it takes over a hundred years for oaks to mature, they were never used, and that bit of the island is now a nature preserve. What's quite funny is that when the parliament was deciding this issue, it seems some of the members already doubted whether oak would remain a good shipbuilding material for that long.

Also observe that 1900s ≠ 19th century, so they weren't that silly.

I had some trouble finding English references for this, but this (p. 4) gives some history, and numbers are available on Swedish Wikipedia.

Comment author: wuncidunci 08 January 2014 05:27:03PM *  11 points [-]

and the dark arts that I use to maintain productivity.

Yes! Please tell us more about these!

Comment author: badtheatre 17 December 2013 07:47:57PM 0 points [-]

I don't see the relevance of either of these links.

Comment author: wuncidunci 17 December 2013 11:26:25PM 0 points [-]

Two points of relevance that I see are:

1. If we care about the nature of morphisms of computations only because some computations are people, the question is fundamentally what our concept of "people" refers to, and whether it can refer to anything at all.

2. If we view "isomorphic" as a kind of extension of our naïve view of "equals", we can ask what the appropriate generalisation is when we discover that "equals" does not correspond to reality and we need a new ontology, as in the linked paper.
