Comment author: TheAncientGeek 12 August 2016 01:40:35PM *  0 points [-]

No, because not all norms are moral norms.

Intuition is perception.

That's actually a pretty contentious claim.

Comment author: hairyfigment 13 August 2016 04:59:51PM 0 points [-]

Non-moral? Nein!

Comment author: Lumifer 10 August 2016 02:43:30PM 0 points [-]

As far as I know, yes it would.

Citation is still needed :-) Do you have a link to that study?

Have you ever noticed that we have lots of evidence that slimmer people tend to be healthier, but not that losing weight makes you healthier?

Let me see if I read your position correctly. We know that slim people are healthier than fat people, right? We know that getting fat worsens your health -- I believe this is fairly uncontroversial; do you wish to contest that? But you are saying that this is a ratchet: losing fat will not make your health any better. In other words, once you've gained weight there is no path back to health, ever?

That seems like a rather strong statement to me, and I haven't seen much support for it being true in reality. Why did you pick this position as your prior?
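For concreteness, here is a toy simulation of the distinction at issue -- whether real biology works like this is exactly what is being disputed. All numbers and the hidden "frailty" confounder are invented for illustration; in this model a hidden factor drives both weight and health, so slim people look healthier observationally even though changing weight does nothing:

```python
# Toy model: a hidden confounder ("frailty") drives both weight and health;
# weight itself has no causal effect on health in this model.
import random

random.seed(0)

def person():
    frailty = random.random()                      # hidden confounder
    weight = 70 + 40 * frailty + random.gauss(0, 5)
    health = 1 - frailty + random.gauss(0, 0.1)    # note: no weight term
    return weight, health

people = [person() for _ in range(10_000)]
slim = [h for w, h in people if w < 90]
heavy = [h for w, h in people if w >= 90]
print(f"mean health, slim:  {sum(slim) / len(slim):.2f}")   # noticeably higher
print(f"mean health, heavy: {sum(heavy) / len(heavy):.2f}")  # noticeably lower

# "Intervention": everyone loses 20 kg. Health here is built from frailty
# alone, so every health value is unchanged -- in this toy world the
# observational slim/heavy gap says nothing about what weight loss would do.
```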

Comment author: hairyfigment 10 August 2016 11:55:25PM 1 point [-]
Comment author: James_Miller 08 August 2016 08:57:35PM 2 points [-]

We perhaps shouldn't give much weight to PZ Myers' viewpoint. See this Slatestarcodex article.

Comment author: hairyfigment 08 August 2016 09:14:18PM 0 points [-]

How is that a response? Is it what you plan to tell the media?

Comment author: James_Miller 07 August 2016 06:06:16PM 4 points [-]

Yes, this does reduce the benefit of getting Trump to support mosquito eradication.

Comment author: hairyfigment 08 August 2016 05:51:44PM 0 points [-]

More to the point, PZ Myers has already come out against it on ecological grounds (though that was probably some years ago). This would solidify him in that position if he hasn't already changed his mind. Now, if it's Trump vs. scientists, what will happen?

Comment author: thrawnca 27 July 2016 01:45:53AM -1 points [-]

If you believe in G-d then you believe in a being that can change reality just by willing it

OK, so by that definition...if you instead believe in a perfect rationalist that has achieved immortality, lived longer than we can meaningfully express, and now operates technology that is sufficiently advanced to be indistinguishable from magic, including being involved in the formation of planets, then - what label should you use instead of 'G-d'?

Comment author: hairyfigment 27 July 2016 05:30:18AM 3 points [-]

Khepri Prime, if the sequel to "Worm" goes the way I hope. More seriously, I don't believe any of that, and physics sadly appears to make some of it impossible even in the far future. Most of us would balk at that first word, "perfect," citing logical impossibility results and their relation to idealized induction. So your question makes you seem - let us say disconnected from the discussion. Would you happen to be assuming we reject theism because we see it as low status, and not because there aren't any gods?

Comment author: entirelyuseless 23 July 2016 03:11:02AM -1 points [-]

Yes, I noticed he overlooked the distinction between "I know I am conscious because it's my direct experience" and "I know I am conscious because I say 'I know I am conscious because it's my direct experience.'" And those are two entirely different things.

Comment author: hairyfigment 27 July 2016 01:20:45AM 2 points [-]

The first of those things is incompatible with the Zombie Universe Argument, if we take 'knowledge' to mean a probability that one could separate from the subjective experience. You can't assume that direct experience is epiphenomenal, meaning it doesn't cause any behavior or calculation directly, and then also assume, "I know I am conscious because it's my direct experience".

If it seems unfair to suggest that Chalmers doesn't know he himself is conscious, remember that to our eyes Chalmers is the one creating the problem; we say that consciousness is a major cause of our beliefs about consciousness.

In response to That Alien Message
Comment author: Mader_Levap 25 July 2016 07:32:57PM *  -1 points [-]

"I don't trust my ability to set limits on the abilities of Bayesian superintelligences."

Limits? I can think up a few on the spot already.

Environment: CPU power, RAM capacity, etc. I don't think even you guys claim something as blatant as "an AI can break the laws of physics when convenient".

Feats:

  • Win this kind of situation in chess. Sure, the AI would not allow that situation to occur in the first place during a game, but that's not my point.

  • Make a human understand the AI. Note: uplifting does not count, since the human then ceases to be human. For practice, try teaching your cat Kant's philosophy.

  • Make the AI understand itself fully and correctly. This one actually works on all levels. Can YOU understand yourself? Are you even theoretically capable of that? Hint: no. (See the sketch just below this list.)

  • Related: survive actual self-modification, especially without any external help. Transhumanist fantasy says AIs will do it all the time. The reality is that any self-preserving AI will be about as eager to perform self-modification as you would be to get a randomized, extreme form of lobotomy (a transhumanist version of Russian roulette, except with bullets in every chamber of every gun except one in a gazillion).
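For what it's worth, the third bullet has a formal core: the classic halting-problem diagonalization, under which no total predictor can correctly forecast every program, including one built from itself. A minimal sketch (all names here are illustrative, not anything from the comment):

```python
def make_contrarian(predictor):
    """Given any claimed predictor(program) -> True iff program() halts,
    build a program that the predictor must get wrong."""
    def contrarian():
        if predictor(contrarian):  # predicted to halt...
            while True:            # ...so loop forever instead
                pass
        return "halted"            # predicted to loop -> halt immediately
    return contrarian

# Example: a predictor that claims everything halts is refuted at once.
always_halts = lambda prog: True
c = make_contrarian(always_halts)
# Calling c() would loop forever, falsifying the "halts" prediction; the
# same construction defeats any candidate predictor, including one an AI
# runs on its own source code.
```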

I guess some people are so used to thinking about AIs as magical, omnipotent technogods that they don't even notice it. Sad.

Comment author: hairyfigment 26 July 2016 05:42:03AM 2 points [-]

As far as environment goes, the context says exactly the opposite of what you suggest it does.

Among your bullet points, only the first seems well-defined. I could try to discuss them anyway, but I suggest you just read up on the subject and come back. Eliezer's organization has a great deal of research on self-understanding and theoretical limits; it's the middle icon at the top right of the page.

In response to comment by Jiro on Crazy Ideas Thread
Comment author: James_Miller 20 June 2016 04:51:34PM 2 points [-]

You only fight over things that are valuable.

Comment author: hairyfigment 20 June 2016 05:58:27PM 0 points [-]

Not so (or poorly defined); if you want to hurt someone, you can fight over things that would have been valuable to them if you hadn't fought.

In response to comment by Tiiba on Welcome to Heaven
Comment author: GodParty 20 June 2016 12:20:26AM 0 points [-]

Sentience is exactly just the ability to feel. If it can feel joy, it is sentient.

In response to comment by GodParty on Welcome to Heaven
Comment author: hairyfigment 20 June 2016 05:54:18PM 1 point [-]

Yes, but consider highway hypnosis, for example: people drive on 'boring' stretches of highway and then don't remember doing so. It seems as if they slowly lose the capacity to learn or update beliefs even slightly from this repetitive activity, and as this happens their sentience goes away. So we haven't established that the sentient ball of uniform ecstasy is actually possible.

Meanwhile, a badly programmed AI might decide that a non-sentient or briefly-sentient ball still fits its programmed definition of the goal. Or it might think this about a ball that is just barely sentient.

Comment author: sh4dow 19 June 2016 12:30:31AM 1 point [-]

I would play the lottery: if I won more than $10M, I would take the black box and leave. Otherwise I'd look in the black box: if it were full, I'd also take the small one; if not, I'd leave with just the empty black box. Since this strategy is inconsistent, a time-traveling Omega would either have to not choose me for his experiment or let me win the lottery for sure (assuming time works in ways similar to HPMOR). If I got nothing, that would prove Omega wrong (and tell me quite a bit about how Omega, and time, work). If his prediction was correct, though, I'd win $11,000,000, which is way better than either 'standard' variant.

Comment author: hairyfigment 20 June 2016 05:38:09PM 1 point [-]

While that sounds clever at first glance:

  • We're not actually assuming a time-traveling Omega.

  • Even if we were, he would just not choose you for the game. You'd get $0, which is worse than what causal decision theory gets you. (Payoffs sketched below.)
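For concreteness, a payoff sketch of the exchange. Only the $10M lottery threshold and the $11,000,000 total come from the comment above; the box amounts are the standard Newcomb figures, assumed here ($1,000,000 in the black box when Omega predicts one-boxing, $1,000 in the transparent box):

```python
# Payoffs under the strategies discussed above (box amounts are assumed
# standard Newcomb figures; only the lottery numbers come from the comments).
BLACK_FULL = 1_000_000   # black box, if Omega predicted one-boxing
SMALL = 1_000            # transparent box, always present
LOTTERY = 10_000_000     # sh4dow's hoped-for jackpot

outcomes = {
    "sh4dow's best case (lottery win + full black box)": LOTTERY + BLACK_FULL,
    "ordinary one-boxing, correctly predicted": BLACK_FULL,
    "two-boxing (causal decision theory), correctly predicted": SMALL,
    "never chosen for the game at all": 0,
}
for name, payoff in outcomes.items():
    print(f"{name}: ${payoff:,}")
```

The last line is the second bullet's point: the clever strategy's realistic outcome is $0, which loses even to the two-boxer's $1,000.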
