wuncidunci

Hodges claims that Turing at least had some interest in telepathy and prophesies:

These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one’s ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies. The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be the first to go.

Readers might well have wondered whether he [Turing] really believed the evidence to be ‘overwhelming’, or whether this was a rather arch joke. In fact he was certainly impressed at the time by J. B. Rhine’s claims to have experimental proof of extra-sensory perception. It might have reflected his interest in dreams and prophecies and coincidences, but certainly was a case where for him, open-mindedness had to come before anything else; what was so had to come before what it was convenient to think. On the other hand, he could not make light, as less well-informed people could, of the inconsistency of these ideas with the principles of causality embodied in the existing ‘laws of physics’, and so well attested by experiment.

Alan Turing: The Enigma (Chapter 7)

A video of the whole talk is available here.

Did you mean Saint Boole?

And whence the blasphemy?

If someone believes they have a really good argument against cryonics, even if it only has a 10% chance of working, that is $50 in expected gain for maybe an hour of work writing it up really well. Sounds to me like quite worth their time.
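The expected-value arithmetic here can be made explicit. Note that the $500 prize figure below is inferred from the numbers in the comment (10% of it being the $50 mentioned), not stated in it:

```python
# Expected value of writing up an anti-cryonics argument.
# The $500 prize is an inferred assumption: 10% of it is the $50 mentioned.
p_win = 0.10      # estimated chance the argument succeeds
prize = 500.0     # assumed prize in dollars
hours = 1.0       # time spent writing it up really well

expected_gain = p_win * prize
print(expected_gain)          # 50.0 dollars
print(expected_gain / hours)  # 50.0 dollars per hour of work
```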

Quite possible. I didn't intend for that sentence to come across in a hostile way.

Since in Swedish we usually talk about the 1800s and the 1900s rather than the 19th and 20th centuries, I thought something could have been lost in translation somewhere between the original sources, Kelly's book, and gwern's comment, which is itself ambiguous between (set aside an island for growing big trees for making wooden warships) (in the 1900s) and (set aside an island for growing big trees for (making wooden warships in the 1900s)). (I assumed the former.)

If we assume a scenario without AGI and without a Hansonian upload economy, it seems quite likely that there are large, currently unforeseen obstacles to both AGI and uploading. Computing power already seems just about sufficient (judging by supercomputers), so that probably isn't the bottleneck. The obstacle is therefore probably conceptual for AGI, and either a scanning or a conceptual limitation for uploads.

A conceptual limitation for uploads seems unlikely, because we're just taking a system, cutting it up into smaller pieces, and solving differential equations on a computer. There are lots of small problems to solve, but no major conceptual ones. We could run into problems related to measuring quantum systems when doing the scanning (I believe Scott Aaronson wrote something about this suspicion recently). Note that this also puts a bound on the level of nanotechnology we could have achieved: if we had neuron-sized scanning robots, we would be able to scan a brain and start the Hansonian scenario. Note that this does not preclude slightly larger-scale manufacturing technologies, which would probably come from successive miniaturisations of 3D printers.
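The "cutting it up and solving differential equations" step can be sketched with a toy example. The leaky integrate-and-fire model and all its parameters below are illustrative stand-ins, not anything from the comment:

```python
# Toy sketch of "solving differential equations on a computer":
# forward-Euler integration of a leaky integrate-and-fire neuron.
# Model and parameter values are illustrative assumptions only.

def simulate_lif(i_input=1.5, v_rest=0.0, v_thresh=1.0, tau=10.0,
                 dt=0.1, steps=1000):
    """Integrate dV/dt = (-(V - v_rest) + i_input) / tau in Euler steps,
    counting a spike and resetting V whenever it crosses v_thresh."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        dv = (-(v - v_rest) + i_input) / tau
        v += dv * dt
        if v >= v_thresh:
            spikes += 1
            v = v_rest  # reset after a spike
    return spikes

print(simulate_lif())  # spike count over the simulated window
```

A real upload would of course involve vastly more detailed models and coupling between pieces, but each piece reduces to numerical integration of this general kind.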

Conceptual difficulties in creating AGI are more or less expected by everyone around here, but if AGI is delayed by over a century we should get quite worried about other existential risks on the way there. Major contenders are global conflict and terrorism, especially involving nuclear, nanotechnological or biological weapons. Even if nanotechnology never reaches the level described in sci-fi, the bounds given above still allow for enough development to make advanced weapons a question of blueprints and materials. Low-probability, huge-impact risks from global warming are also worth mentioning, if only to note that a lot of other people are already working on them.

What does this tell us about analysing long-term risks like the slithy toves? Well, I don't know anything about slithy toves, but let's look at the eugenics stuff discussed earlier and consider how it would influence the probability of major global conflicts: the question is not whether it would increase the risk of global conflict, but by how much. On the other hand, if AI safety is already taken care of, it becomes a priority to develop AGI as soon as humanly possible, and then it would be really good if "humanly possible" were a sigma or so better than today. Still, it wouldn't be great, since most of the risks we would face at that point would be quite small in any given year (as it seems today; we could of course get other information on the way there). It's really quite hard to say what the proper balance between more intelligent people and more time available would be at that point. We could say that if we've already had a century to solve the problem, more time can't be that useful; on the other hand, we could say that if we still haven't solved the problem in a century, there are loads of sequential steps to get right and we need all the time we can buy.

tl;dr: No AGI & no uploads => most X-risk comes from different types of conflict => eugenics or any kind of superhumans increases X-risk, due to the risk of war between enhanced and old-school humans.

a Scandinavian country which set aside an island for growing big trees for making wooden warships in the 1900s, which was completely wrong since by that point, warships had switched to metal, and so the island became a nature preserve;

This was probably Sweden planting lots of oaks in the early 19th century: 34,000 oaks were planted on Djurgården for shipbuilding in 1830. As it takes over a hundred years for oak to mature, they were never used, and that part of the island is now a nature preserve. Amusingly, when the parliament was deciding this issue, it seems some of the members already doubted whether oak would remain a good shipbuilding material for that long.

Also observe that 1900s ≠ 19th century, so they weren't that silly.

I had some trouble finding English references for this, but this (p. 4) gives some history, and the numbers are available on Swedish Wikipedia.

and the dark arts that I use to maintain productivity.

Yes! Please tell us more about these!

Two points of relevance that I see are:

1. If we care about the nature of morphisms of computations only because some computations are people, the question is fundamentally what our concept of "people" refers to, and whether it can refer to anything at all.

2. If we view "isomorphic" as a kind of extension of our naïve notion of "equals", we can ask what the appropriate generalisation is when we discover that "equals" does not correspond to reality and we need a new ontology, as in the linked paper.
