All of Paul_Crowley2's Comments + Replies

But we want them to be sentient. These things are going to be our cultural successors. We want to be able to enjoy their company. We don't want to pass the torch on to something that isn't sentient. If we were to build a nonsentient one, assuming such a thing is even possible, one of the first things it would do would be start working on its sentient successor.

In any case, it seems weird to try and imagine such a thing. We are sentient entirely as a result of being powerful optimisers. We would not want to build an AI we couldn't talk to, and if it c...

"by the time the AI is smart enough to do that, it will be smart enough not to"

I still don't quite grasp why this isn't an adequate answer. If an FAI shares our CEV, it won't want to simulate zillions of conscious people in order to put them through great torture, and it will figure out how to avoid it. Is it simply that it may take the simulated torture of zillions for the FAI to figure this out? I don't see any reason to think that we will find this problem very much easier to solve than a massively powerful AI.

I'm also not wholly convinced that the only ethical way to treat simulacra is never to create them, but I need to think about that one further.

Posting much too late, but about the Emperor's New Clothes: I had always interpreted the story to mean not that people stopped believing in the clothes when the little boy spoke out, but that it hadn't quite crossed people's minds until that moment that whether or not the King was clothed, he appeared naked to that boy, and that wasn't how things should be. Everyone laughs because they think, I too can see the nakedness of the King; only then do they realise that their neighbour can also see it, and only after that do they see that there are no clothes to see.

Very recently experienced exactly this phenomenon: someone discussing atheists who think "all religion/religious stuff is bad", even including, for example, the music of Bach, or drinking and celebrating at Christmas. They seemed convinced that such atheists exist, and I doubt it, or at least I have never heard of them or met them; I know for a fact that, for example, all four horsemen of atheism have made explicit statements to the contrary.

Your disclaimer is an annoying one to have to make, and of course this problem comes up whenever th...

I can't believe the discussion has got this far and no-one has mentioned The Land of Infinite Fun.

There is room to do vastly better than what is usually used for community content finding, and it's a great mystery to me how little explored this area is. If things have moved forward significantly since Raph Levien's work on attack resistant trust metrics, I haven't heard about it.

Good software to support rational discussion would be a huge contribution to thought.

There's a whole world of atheist blogging and writing out there that might also be worth tapping into for advice from others who've been there. See this collection of deconversion stories for example.

That sounds like a really tough spot. I hope you find advice that can help.

Possession of a single Eye is said to make the bearer equivalent to royalty.

I approve.

Wes_W
I read this months ago, but only yesterday finally got the reference.

Crossman: there's a third argument, which is that even if the consequences of keeping the secret are overall worse than those of betraying the confidence even after the effect you discuss, turning yourself into someone who will never betray these secrets no matter what the consequences and advertising yourself as such in an impossible-to-fake way may overall have good consequences. In other words, you might turn away from consequentialism on consequentialist grounds.

Another example where unfakeably advertising irrationality can (at least in theory) serve...

  • There's a huge conspiracy covering it up

  • Well, that's just what one of the Bad Guys would say, isn't it?

  • Why should I have to justify myself to you?

  • Oh, you with your book-learning, you think you're smarter than me?

  • They said that to Einstein and Galileo!

  • That's a very interesting question, let me show you the entire library that's been written about it (where if there were a satisfactory answer it would be shortish)

  • How can you be so sure?

Don't think "silver spoons", think "clean drinking water".

I like "we are the cards we are dealt", which expresses nicely a problem with common ideas of blame and credit. I disagree that intelligence is the unfairest card of all - I think that a relatively dim person born into affluence in the USA has a much better time of it than a smart person born into poverty in the Congo.

Notice the number of cards you had to change to balance the intelligence card.

Interesting. There's a paradox involving a game in which players successively take a single coin from a large pile of coins. At any time a player may choose instead to take two coins, at which point the game ends and all further coins are lost. You can prove by induction that if both players are perfectly selfish, they will take two coins on their first move, no matter how large the pile is. People find this paradox impossible to swallow because they model perfect selfishness on the most selfish person they can imagine, not on a mathematically perfect selfishness machine. It's nice to have an "intuition pump" that illustrates what genuine selfishness looks like.
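The induction can be made concrete with a short sketch (the function name and structure are mine, not from the original discussion): compute the subgame-perfect payoffs for each pile size, assuming each player maximises only their own coin count.

```python
def best_payoffs(pile):
    """Subgame-perfect payoffs (current player, other player) for a pile of coins."""
    me, other = 0, 0  # payoffs once the pile is empty
    for n in range(1, pile + 1):
        # Take one coin: play continues, roles swap on the smaller pile.
        take_one = (1 + other, me)
        # Take two coins (needs at least two left): game ends, the rest is lost.
        take_two = (2, 0) if n >= 2 else (1, 0)
        # Perfect selfishness: maximise your own payoff only.
        me, other = max(take_one, take_two, key=lambda p: p[0])
    return me, other

for n in (1, 2, 10, 1000):
    print(n, best_payoffs(n))  # for every pile of 2 or more: (2, 0)
```

However large the pile, the induction bottoms out the same way: the first player takes two coins immediately, which is exactly the conclusion people find impossible to swallow.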

ata
Hmm. We could also put that one in terms of a human or FAI competing against a paperclip maximizer, right? The two players would successively save one human life or create one paperclip (respectively), up to some finite limit on the sum of both quantities. If both were TDT agents (and each knows that the other is a TDT agent), then would they successfully cooperate for the most part? In the original version of this game, is it turn-based or are both players considered to be acting simultaneously in each round? If it is simultaneous, then it seems to me that the paperclip-maximizing TDT and the human[e] TDT would just create one paperclip at a time and save one life at a time until the "pile" is exhausted. Not quite sure about what would happen if the game is turn-based, but if the pile is even, I'd expect about the same thing to happen, and if the pile is odd, they'd probably be able to successfully coordinate (without necessarily communicating), maybe by flipping a coin when two pile-units remain and then acting in such a way to ensure that the expected distribution is equal.

Are you arguing that a few simple rules describe what we're all trying to get at with our morality? That everyone's moral preference function is the same deep down? That anything that appears to be a disagreement about what is desirable is actually just a disagreement about the consequences of these shared rules, and could therefore always be resolved in principle by a discussion between any two sufficiently wise, sufficiently patient debaters? And that moral progress consists of the moral zeitgeist moving closer to what those rules capture?

That certainly would be convenient for the enterprise of building FAI.

Paul, do you think that your own morality is optimum or can you conceive of someone more moral than yourself - not just a being who better adheres to your current ideals, but a being with better ideals than you?

Yes I can.

If you take the view that ethics and aesthetics are one and the same, then in general it's hard to imagine how any ideals other than your own could be better than your own, for the obvious reason that you can only measure them against your own.

What interests me about the rule I propose (circular preferences are bad!) is that it is exclusively a...

I'm by no means sure that the idea of moral progress can be salvaged. But it might be interesting to try and make a case that we have fewer circular preferences now than we used to.

The "One Christers" are a nice SF touch.

It's not known whether the Universe is finite or infinite; this article gives more details:

http://en.wikipedia.org/wiki/Shape_of_the_Universe

If the Universe is infinite, then it has always been so even from the moment after the Big Bang; an infinite space can still expand.

It hadn't quite sunk in until this article that, looked at from a sum-over-histories point of view, only identical configurations interfere; that makes decoherence much easier to understand.

Would this get easier or harder if you started with, say, gliders in Conway's Life?

How do you apply this approach to questions like "to what extent was underconsumption the cause of the Great Depression?" No conceivable experiment could answer such a question, even given a time machine (unlike, say, "Who shot JFK?") but I think such questions are nevertheless important to our understanding of what to do next.

The best answer I have to such questions is to posit experiments in which we rewind history to a particular date, and re-run it a million times, performing some specific miracle (such as putting money into a billion carefully-chosen wallets) on half a million of those occasions, and gather statistics on how the miracle affects economic indicators.

pnrjulius
I have a somewhat better way. Place economics on a sufficiently rigorous empirical foundation, so that it is (let us say) somewhere near the level of quantum physics. Having done this (monumental) task, we can now answer questions about historical events in economics as well as we can answer questions about historical events in physics, e.g. "Why is that laser red?" "Why did those interference fringes form here and not there?"

I don't think this answer meets the standards of rigour that you set above, but I'm increasingly convinced that the idea of free will arises out of punishment. Punishment plays a central role in relations among apes, but once you reach the level of sophistication where you can ask "are we machines", the answer "no" gives the most straightforward philosophical path to justifying your punishing behaviour.

tlhonmey
Why?  If the answer is "no" then applying a proper punishment causes the nebulous whatsit in charge of the person's free will to change their future behaviour. If the answer is "yes" then applying a proper punishment adjusts the programming of their brain in a way that will change their future behaviour. The only way a "yes" makes it harder to justify punishing someone is if you overexpand a lack of "free will" to imply "incapable of learning".

"The old political syllogism "something must be done: this is something: therefore this will be done" appears to be at work here, in spades." -- Charlie Stross

Charlie is quoting the classic BBC TV series "Yes Minister" here.

"I assign higher credibility to an institution if liberals accuse it of being conservative and conservatives accuse it of being liberal." -- Alex F. Bokov

Surprised to see that one there - the world is full of people desperate to ensure that there is a stool either side of them, and that seems like a process very far from hugging the query.

The large sums of money make a big difference here. If it were for dollars, rather than thousands of dollars, I'd do what utility theory told me to do, and if that meant I missed out on $27 due to a very unlucky chance then so be it. But I don't think I could bring myself to do the same for life-changing amounts like those set out above; I would kick myself so hard if I took the very slightly riskier bet and didn't get the money.

Probabilities of 0 and 1 are perhaps more like the perfectly massless, perfectly inelastic rods we learn about in high school physics - they are useful as part of an idealized model which is often sufficient to accurately predict real-world events, but we know that they are idealizations that will never be seen in real life.

However, I think we can assign the primeness of 7 a value of "so close to 1 that there's no point in worrying about it".

thrawnca
Perhaps the only appropriate uses for probability 0 and 1 are to refer to logical contradictions (eg P & !P) and tautologies (P -> P), rather than real-world probabilities?

Practically all words (eg "dead") actually cut across a continuum; maybe we should reclaim the word "certainty". We are certain that evolution is how life got to be what it is, because the level of doubt is so low you can pretty much forget about it. Any other meaning you could assign to the word "certain" makes it useless because everything falls on one side.

If you'd like someone to try the random jury approach, you need to think about how to turn it into good TV.

The video notes that when the subject is instructed to write their answers, conformity drops enormously. That suggests we can set aside the hypothesis that they conform for the rational reason you set out.

Recovering irrationalist: I feel the same way. The most interesting book I've read about this is George Ainslie's "Breakdown of Will". Ainslie uses the experimentally verified theory of hyperbolic discounting to build a model of why we do things like make promises to ourselves that we then fail to keep, and other forms of behaviour related to "akrasia".

http://books.google.co.uk/books?id=7dTHzcaTRpIC&dq=%22breakdown+of+will%22&pg=PP1&ots=PSAPRlxfkD&sig=4jmS627eK3oMDHtcSVHrZ3IsNoA&hl=en&prev=http://www.google.co...

"No. The "unless" clause is still incorrect. We can know a great deal about the fraction of people who think B, and it still cannot serve even as meta-evidence for or against B."

This can't be right. I have a hundred measuring devices. Ninety are broken and give a random answer with an unknown distribution, while ten give an answer that strongly correlates with the truth. Ninety say A and ten say B. If I examine a random meter that says B and find that it is broken, then surely that has to count as strong evidence against B.

This is probably an unnecessarily subtle point; the overall thrust of the argument is of course correct.
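The direction of the evidence can be checked with a minimal Monte Carlo sketch. All the parameters here are my own assumptions, not from the comment: a 50/50 prior on the truth, working meters that are 99% accurate, and broken meters that say B with an unknown bias drawn uniformly at random each trial.

```python
import random

random.seed(0)

def run_trials(n_trials=50_000, n_broken=90, n_good=10, accuracy=0.99):
    """Estimate P(B is true | a randomly picked B-reading meter is broken / working)."""
    counts = {("broken", "A"): 0, ("broken", "B"): 0,
              ("good", "A"): 0, ("good", "B"): 0}
    for _ in range(n_trials):
        truth = random.choice("AB")
        q = random.random()      # unknown bias of the broken meters towards B
        b_sayers = []            # meters reading "B", tagged broken or good
        for _ in range(n_broken):
            if random.random() < q:
                b_sayers.append("broken")
        for _ in range(n_good):
            says_truth = random.random() < accuracy
            if (truth == "B") == says_truth:   # this meter ends up reading "B"
                b_sayers.append("good")
        if not b_sayers:
            continue             # no meter read B this trial; nothing to examine
        picked = random.choice(b_sayers)
        counts[(picked, truth)] += 1
    p_b_broken = counts[("broken", "B")] / (counts[("broken", "A")] + counts[("broken", "B")])
    p_b_good = counts[("good", "B")] / (counts[("good", "A")] + counts[("good", "B")])
    return p_b_broken, p_b_good

broken, good = run_trials()
print(f"P(B | picked B-meter is broken)  = {broken:.2f}")  # drops below the 0.5 prior
print(f"P(B | picked B-meter is working) = {good:.2f}")    # rises well above 0.5
```

Under these assumptions, finding that the examined B-meter is broken does push the probability of B below the prior, as claimed: when B is true, the meters saying B are disproportionately the working ones.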

I don't want to say what it is for fear of spoilering it, but is anyone else thinking of the same groundbreaking comic book I am? Perhaps that's the supervillain Eliezer is thinking of...

beoShaffer
Yes, but only once you brought up comics.

So the point is that the idiots who are directly useless - make no useful contributions, have no ideas, spark nothing good - may be useful because they give shelter for others who want to raise controversial ideas?

I'd want to see a group not already mad that suffered for not having an idiot in their number before I believed it...

Which paper was Merkle talking about, if I may ask?

gwern
Given the information that it was Ralph Merkle, that it was about his field (=cryptography), that it was intended to be a general overview (so about cryptography broadly construed and not a single result or breakthrough), aimed at outsiders (so little math), it was highly cited (more than most of his papers), and no co-author is mentioned (and you wouldn't expect one for an 'explainer' like that), you should be able to make a good guess with a few seconds in Google Scholar after sorting his papers by citation-count and skimming the titles & summaries of each paper: https://scholar.google.com/scholar?start=10&q=author:%22ralph+merkle%22&hl=en&as_sdt=0,21 My guess would be that it's hit #8 on page 1, "Secure communications over insecure channels", Merkle 1978, written in 1975 simultaneously with his breakthrough work on public-key crypto, which indeed would need to be explained to a lot of people at the time. It's written in a very conversational tone, with historical background and discussion of practical issues and solutions like sending secret keys in the mail, extremely few equations or math (but imperative program pseudocode instead), published in a more general interest publication than the usual cryptography journals, and despite not presenting any new results & being ancient, is still apparently his 8th most cited paper ever out of 162 hits for him as author.
lsparrish
I'm curious too. Anyone know which paper this is?