Comment author: nigerweiss 24 February 2014 01:57:36AM 0 points [-]

It's going to be really hard to come up with any models that don't run deeply and profoundly afoul of the Occam prior.

Comment author: advancedatheist 20 August 2013 03:23:23PM 0 points [-]

You can start here to see where the practice of cryonics should go to get out of its pseudoscience and quackery morass:

http://chronopause.com/index.php/2012/05/20/cryonics-intelligence-test-responses/

I still have the scientific papers Mike Darwin sent me for his "cryonics intelligence test." Email me and I can send you the ones I consider instructive: mark.plus@rocketmail.com

Comment author: nigerweiss 20 August 2013 08:16:44PM 8 points [-]

When asked a simple question about broad and controversial assertions, it is rude to link to outside resources tangentially related to the issue without providing (at minimum) a brief explanation of what those resources are intended to indicate.

Comment author: nigerweiss 25 July 2013 09:37:26AM 1 point [-]

I don't speak Old English, unfortunately. Could someone who does please provide me with a rough translation of the provided passage?

Comment author: shminux 28 June 2013 01:53:11AM -1 points [-]

If your bad argument gets refuted, you lose whatever credibility you may have had.

Comment author: nigerweiss 29 June 2013 09:46:55AM 0 points [-]

It isn't the sort of bad argument that gets refuted. The best someone can do is point out that there's no guarantee that MNT is possible. In which case, the response is 'Are you prepared to bet the human species on that? Besides, it doesn't actually matter, because [insert more sophisticated argument about optimization power here].' It doesn't hurt you, and with the overwhelming majority of semi-literate audiences, it helps.

Comment author: shminux 26 June 2013 03:50:32PM -1 points [-]

There is no need to use known bad arguments when there are so many good ones.

Comment author: nigerweiss 28 June 2013 01:04:21AM 0 points [-]

Of course there is. For starters, most of the good arguments are much more difficult to concisely explain, or invite more arguments from flawed intuitions. Remember, we're not trying to feel smug in our rational superiority here; we're trying to save the world.

Comment author: shminux 25 June 2013 05:07:54PM -1 points [-]

Even if MNT is impossible, that's still true - but bringing up MNT provides people with an obvious intuitive path to the apocalypse.

This is not a great argument, given that it works equally well if you replace MNT with God/Devil in the above.

Comment author: nigerweiss 26 June 2013 08:10:34AM 0 points [-]

That's... not a strong criticism. There are compelling reasons not to believe that God is going to be a major force in steering the direction the future takes. The exact opposite is true for MNT - I'd bet at better-than-even odds that MNT will be a major factor in how things play out basically no matter what happens.

All we're doing is providing people with a plausible scenario that contradicts flawed intuitions that they might have, in an effort to get them to revisit those intuitions and reconsider them. There's nothing wrong with that. Would we need to do it if people were rational agents? No - but, as you may be aware, we definitely don't live in that universe.

Comment author: nigerweiss 25 June 2013 10:43:50AM 0 points [-]

I don't have an issue bringing up MNT in these discussions, because our goal is to convince people that incautiously designed machine intelligence is a problem, and a major failure mode for people is that they say really stupid things like "well, the machine won't be able to do anything on its own because it's just a computer - it'll need humanity, therefore, it'll never kill us all." Even if MNT is impossible, that's still true - but bringing up MNT provides people with an obvious intuitive path to the apocalypse. It isn't guaranteed to happen, but it's also not unlikely, and it's a powerful educational tool for showing people the sorts of things that strong AI may be capable of.

Comment author: nigerweiss 07 June 2013 10:02:18PM *  2 points [-]

There's a deeper question here: ideally, we would like our CEV to make choices for us that aren't our choices. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.

One obvious way to solve the problem you raise is to treat 'modifying your current value approximation' as an object-level action by the AI, and one that requires it to compute your current EV - meaning that, if the logical consequences of the change (including all the future changes that the AI predicts will result from that change) don't look palatable to you, the AI won't make the first change. In other words, the AI will never assign you a value set that you find objectionable right now. This is safe in some sense, but not ideal. The profoundly racist will never accept a version of their values which, because of its exposure to more data and fewer cognitive biases, isn't racist. Ditto for the devoutly religious. This model of CEV doesn't offer the opportunity for growth.

It might be wise to compromise by locking the maximum number of edges in the graph between you and your EV to some small number, like two or three - a small enough number that value drift can't take you somewhere horrifying, but not so tightly bound up that things can never change. If your CEV says it's okay under this schema, then you can increase or decrease that number later.
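The edge-capping compromise described above can be sketched as a toy loop. Everything here is an illustrative assumption, not anything from the original comments: values are modeled as a flat dict of named weights, "extrapolation" is an arbitrary caller-supplied refinement function, and "palatable to you right now" is approximated as no single weight drifting more than a fixed tolerance away from your *current* (starting) values. The cap on graph edges becomes a cap on refinement steps:

```python
def palatable(current, proposed, tolerance=0.5):
    """Would the person, judging by their CURRENT values, accept `proposed`?
    Toy criterion: accept iff no weight moves more than `tolerance` away
    from where it started."""
    return all(abs(proposed[k] - current[k]) <= tolerance for k in current)

def bounded_cev(current, refine, max_edges=3):
    """Apply at most `max_edges` refinement steps, vetoing any step whose
    outcome the ORIGINAL value set would find unpalatable - so value drift
    is bounded both in step count and in total distance."""
    values = dict(current)
    for _ in range(max_edges):
        proposed = refine(values)
        if not palatable(current, proposed):  # judged by the starting values
            break  # the AI refuses to make this change
        values = proposed
    return values

# Hypothetical usage: each refinement step nudges one weight upward,
# and the chain halts once it would stray too far from the start.
start = {"tolerance_of_outgroups": 0.1}
step = lambda v: {"tolerance_of_outgroups": v["tolerance_of_outgroups"] + 0.2}
print(bounded_cev(start, step, max_edges=3))
```

Under this schema the third step is vetoed (it would drift 0.6 from the starting 0.1, past the 0.5 tolerance), so extrapolation stops two edges out - exactly the "small enough that value drift can't take you somewhere horrifying" behavior described above, at the cost of the same stagnation worry.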

Comment author: B_For_Bandana 03 June 2013 01:36:03AM 4 points [-]

I'm someone who still finds subjective experience mysterious, and I'd like to fix that. Does that book provide a good, gut-level, question-dissolving explanation?

Comment author: nigerweiss 06 June 2013 09:39:32AM 1 point [-]

I've read some of Dennett's essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a 'noisy quorum' model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn't actually that surprising. It'd be hard to design a human-style system that didn't have a similar internal behavior that it could talk about.

Comment author: TheOtherDave 29 May 2013 08:57:41PM 2 points [-]

If we're going to be picky, also the idea that only neurons are relevant isn't right; if you replaced each neuron with a neuron-analog (a chip or a neuron-emulation-in-software or something else) but didn't also replace the non-neuron parts of the cognitive system that mediate neuronal function, you wouldn't have a working cognitive system.
But this is a minor quibble; you could replace "neuron" with "cell" or some similar word to steelman your point.

Comment author: nigerweiss 30 May 2013 01:34:19AM 1 point [-]

Yeah, the glia seem to serve some pretty crucial functions as information-carriers and network support infrastructure - and if you don't track hormonal regulation properly, you're going to be in for a world of hurt. Still, I think the point stands.
