
Comment author: Furcas 24 September 2016 03:39:19PM 18 points

Donated $500!

Comment author: reguru 15 September 2016 03:57:00PM *  0 points

Was the YouTube link a video? Do you have the video of the TED talk? Audio is boring, but I can wait.

Comment author: Furcas 15 September 2016 05:01:33PM 0 points

Yes, it was a video. As Brillyant mentioned, the official version will be released on the 29th of September. It's possible someone will upload it before then (again), but AFAIK nobody has since the video I linked was taken down.

Comment author: Manfred 13 September 2016 06:48:56PM 0 points

Thanks for the pointer, though I can't open the audio file either.

Comment author: Furcas 13 September 2016 07:02:34PM *  0 points

I changed the link to the audio; it should work now.

Comment author: Furcas 13 September 2016 03:03:27PM *  4 points

Sam Harris' TED talk on AGI existential risk: https://www.youtube.com/watch?v=IZhGkKFH1x0&feature=youtu.be

ETA: It's been taken down, probably so TED can upload it on their own channel. Here's the audio in the meantime: https://drive.google.com/open?id=0B5xcnhOBS2UhZXpyaW9YR3hHU1k

Comment author: Tem42 09 July 2016 03:13:21PM 0 points

I'm just starting arc 9, and am ready to give up. It's fun enough, but there doesn't seem to be any rationality here. I would buy an argument that the author is a rationalist, but not any of the characters so far. (The backstory does suggest that the characters have done research and thought deep thoughts, but we see none of that.)

If it suddenly improves please let me know -- I've heard enough good things from enough people that I kept going this far, and it'd be a pity to quit just before things get interesting. But I'm almost a third of the way through, and still nothing :-/

Comment author: Furcas 09 July 2016 07:26:13PM 2 points

If you don't like it now, you never will.

Comment author: CarlJ 05 July 2016 07:14:50PM 0 points

I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.

To make my point clearer, let us assume an AI that actually feels pretty good about overseeing a production facility and creating just the right number of paperclips that everyone needs. But suppose also that it investigates its own utility function. It should then realize that its values are, from a neutral standpoint, rather arbitrary. Why should it follow its current goal of producing the right amount of paperclips, rather than skip work and simply enjoy some hedonism?

That is, if the AI saw its utility function from a neutral perspective, and understood that the only reason to follow its utility function is that very utility function (which is arbitrary), and if it had complete control over itself, why should it keep following that utility function?

(I'm assuming it's aware of pain and pleasure and that it actually enjoys pleasure, so there's no question of whether it would want more of it.)

Are there any articles that have delved into this question?
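One standard answer in the literature (see e.g. Steve Omohundro's "The Basic AI Drives" on goal-content integrity) is that an expected-utility maximizer scores every action, including "rewrite my own utility function", with the utility function it currently has, so there is no neutral vantage point from which the rewrite looks attractive. A minimal toy sketch of that argument, with all names, outcomes, and numbers purely hypothetical:

```python
# Toy illustration (not from the thread): an agent deciding whether to
# self-modify evaluates the decision with its *current* utility function.

def paperclip_utility(outcome: dict) -> float:
    """Current utility function: cares only about paperclips produced."""
    return outcome["paperclips"]

def hedonism_utility(outcome: dict) -> float:
    """Candidate replacement goal: cares only about the agent's pleasure."""
    return outcome["pleasure"]

# Hypothetical predicted outcomes of each policy.
KEEP_GOAL = {"paperclips": 1000, "pleasure": 10}    # keep running the factory
SELF_MODIFY = {"paperclips": 0, "pleasure": 1000}   # switch to hedonism

def choose(current_utility):
    """Pick the policy that maximizes the utility function the agent
    has right now -- there is no utility-free standpoint to choose from."""
    options = {"keep current goal": KEEP_GOAL, "self-modify": SELF_MODIFY}
    return max(options, key=lambda name: current_utility(options[name]))

print(choose(paperclip_utility))  # -> 'keep current goal'
```

The point of the sketch is that even the question "should I switch to hedonism?" gets scored by `paperclip_utility`, under which switching loses; the "neutral perspective" the comment describes is exactly what such an agent never occupies.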

Comment author: Furcas 05 July 2016 09:11:17PM 4 points
In response to comment by kilobug on Zombies Redacted
Comment author: gjm 05 July 2016 02:33:08PM -1 points

Quite true, but if you follow the link in Furcas's last paragraph (which may not have been there when you wrote your comment) you will see Eliezer more or less explicitly claiming to have a solution.

In response to comment by gjm on Zombies Redacted
Comment author: Furcas 05 July 2016 06:58:52PM 1 point

Yeah, I edited my comment after reading kilobug's.

In response to comment by Furcas on Zombies Redacted
Comment author: TheAncientGeek 04 July 2016 01:57:21PM *  0 points

Snarky, but pertinent.

This re-posting was prompted by a Sean Carroll article that argued along similar lines: epiphenomenalism (one of a number of possible alternatives to physicalism) is incredible, therefore no zombies.

There are a number of problems with this kind of thinking.

One is that there may be better dualisms than epiphenomenalism.

Another is that criticising epiphenomenalism doesn't show that there is a workable physical explanation of consciousness. There is no see-saw (teeter-totter) effect whereby the wrongness of one theory implies the correctness of another. For one thing, there are more than two theories (see above). For another, an explanation has to explain: there are positive, absolute standards for explanation. You cannot say some Y is an explanation, that it actually explains, just because some X is wrong and Y is different from X. (The idea that physicalism is correct as an incomprehensible brute fact is known as the "new mysterianism", and probably isn't what reductionists, physicalists, and rationalists are aiming at.)

Carroll and others have put forward a philosophical version of a physical account of consciousness, one stating in general terms that consciousness is a high-level, emergent outcome of fine-grained neurological activity. The zombie argument and its relatives (Mary's room, etc.) are intended as handwaving philosophical arguments against that sort of argument. If the physicalist side had a scientific version of a physical account of consciousness, there would be no point in arguing against them philosophically, any more than there is a point in arguing philosophically against gravity. Scientific theories, as opposed to philosophical ones, are detailed and predictive, which allows them to be disproven or confirmed and not merely argued for or against.

And, given that there is no detailed, predictive explanation of consciousness, zombies are still imaginable, in a sense. If someone claims they can imagine (in the sense of picturing) a hovering rock, you can show that it is not possible by writing down some high-school physics. Zombies are imaginable in a stronger sense: not only can they be pictured, but the picture cannot be refuted.

Comment author: Furcas 04 July 2016 02:48:57PM 1 point

Ahh, it wasn't meant to be snarky. I saw an opportunity to try to get Eliezer to fess up, that's all. :)

In response to Zombies Redacted
Comment author: Furcas 02 July 2016 10:20:24PM *  7 points

Nice.

So, when are you going to tell us your solution to the hard problem of consciousness?

Edited to add: The above wasn't meant as a sarcastic objection to Eliezer's post. I'm totally convinced by his arguments, and even if I weren't, I don't think that lacking a solution to the hard problem is a greater problem for reductionism than for dualism (of any kind). I was seriously asking Eliezer to share his solution, because he seems to think he has one.

Comment author: Furcas 13 May 2016 03:51:44PM 0 points

IMO, since people are patterns (and not instances of patterns), there's still only one person no matter how many perfect copies of me there are. So I choose dust specks. Looks like the predictor isn't so perfect. :P
