Comment author: [deleted] 10 May 2015 06:45:19PM 1 point

Well of course, talking of doing what is good without giving content to the phrase isn't very precise or helpful, either. I certainly expect that if we build a "friendly superintelligence" and successfully program it to do what is good, I will experience a higher baseline level of happiness on a daily basis than if we don't (because, for example, we will be able to ask the AI how to cure depression). It needs saying, though, that while The Good strongly implies broadly high levels of happiness throughout the population (high likelihood, high log-odds), happiness alone is very weak evidence of The Good (a positive but small log-likelihood-ratio, posterior nearer to 0.5), insofar as the abstraction doesn't leak.
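To make that asymmetry explicit in standard log-odds form (the labels G for "the outcome is good" and H for "the population is happy" are mine):

    log[ P(G|H) / P(¬G|H) ] = log[ P(G) / P(¬G) ] + log[ P(H|G) / P(H|¬G) ]

The Good makes happiness nearly certain, so P(H|G) ≈ 1; but happiness also occurs in many non-good outcomes (dopamine drips included), so P(H|¬G) is not small, the likelihood ratio is only modestly above 1, and observing happiness adds few bits of evidence for The Good.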

But, and this is an important point, if you give me a normative-ethical theory of The Good which implies that I specifically, or the population broadly, ought to be unhappy, or a meta-ethical theory of naturalizing morality which outputs a normative theory which implies that I/we ought to be unhappy, then something has gone very, very wrong.

Comment author: nshepperd 11 May 2015 09:36:56AM 2 points

Using "good" to only refer to what is actually good is however vastly better, as precision goes. What I am taking issue to here is the careless equivocation between maximising pleasure and good intentions. A correct description of the "nanny AI" scenario would read something like this:

[The AI] has bad intentions (it was programmed to maximise human pleasure), and indeed by using its superior intelligence it successfully achieves that goal and does in fact maximise human pleasure -- by connecting all human brains up to dopamine drips.

Of course it is true that an AI programmed to do what is good would most likely increase happiness (and even pleasure) to some extent, but to conclude from this that the two things are interchangeable is pure folly.

Comment author: Richard_Loosemore 10 May 2015 07:34:24PM -1 points

The lack of understanding in this comment is depressing.

You say:

"No. The AI does not have good intentions. Its intentions are extremely bad."

If you think this is wrong, take it up with the people whose work I am both quoting and analyzing in this paper, because THAT IS WHAT THEY ARE CLAIMING. I am not the one saying that "the AI is programmed with good intentions"; that is their claim.

So I suggest you write a letter to Muehlhauser, Omohundro, Yudkowsky and the various others quoted in the paper, explaining to them that you find their lack of precision depressing.

Comment author: nshepperd 11 May 2015 09:19:31AM *  5 points

If that's the case, then please enclose that sentence in quotes and add a citation. Note that a quote saying that the AI was programmed to maximise happiness (or indeed, pleasure, as that is what the original quote described) is insufficient because, as is my whole point, "happiness" and "good" are different things.

And then add a sentence, not in quotes, claiming that the AI does not have good intentions, instead of one claiming that the AI has good intentions.

Or perhaps, as I suspect, you still believe that you can carelessly rephrase "programmed to maximise human pleasure" as "has good intentions" without anyone noticing that you are putting words in people's mouths?

Comment author: nshepperd 10 May 2015 06:01:59AM 12 points

This article just makes the same old errors over and over again. Here's one:

"An all-powerful computer that was programmed to maximize human pleasure, for example, might consign us all to an intravenous dopamine drip [and] almost any easy solution that one might imagine leads to some variation or another on the Sorcerer’s Apprentice, a genie that’s given us what we’ve asked for, rather than what we truly desire." (Marcus 2012)

He is depicting a Nanny AI gone amok. It has good intentions (it wants to make us happy) but the programming to implement that laudable goal has had unexpected ramifications, and as a result the Nanny AI has decided to force all human beings to have their brains connected to a dopamine drip.

No. The AI does not have good intentions. Its intentions are extremely bad. It wants to make us happy, which is a completely distinct thing from actually doing what is good. The AI was in fact never programmed to do what is good, and there are no errors in its code.

The lack of precision here is depressing.

Comment author: nshepperd 08 May 2015 02:40:47PM 2 points

There's an argument, which I find somewhat persuasive, that the usual belief that one is "not a math person" stems from learned helplessness, from many years of being forced to attempt difficult mathematical tasks in school without the required grounding. Mathematics, or at least the parts that are taught in standard curricula, is a very linear subject. Failure to grasp eg. fractions in the semester they are introduced could conceivably haunt a student for the rest of their school career, as it makes it difficult to understand essentially everything that follows.

If this theory of learned helplessness is correct, then perhaps if Scott could be convinced to complete the Khan Academy math courses he could be cured :)

Comment author: dxu 22 March 2015 04:00:30PM *  0 points

I imagine you'd need a rather more devious brain modification to prevent one from carrying out these steps correctly, in such a way that the result is SSS0.

All right. Let me take a stab at it.

S(0) + SSS0

(move the S to the right)

Okay. Following you so far...

0 + SSSS0

Eh? Where'd you get the extra "S" from?

(This hack would have the unfortunate side effect of making every addition with at least one term less than or equal to 3 and a result greater than 3 come out to 1 less than it's supposed to, however. If you wanted to make only 2 + 2 = 3, and preserve all other additions as-is, I can't think of any brain hack that would do it. That's not to say no such hack is possible; I'm sure one is, I just can't think of one.)

Comment author: nshepperd 23 March 2015 02:35:05AM *  0 points

This hack would have the unfortunate side effect of making every addition with at least one term less than or equal to 3 and a result greater than 3 come out to 1 less than it's supposed to, however.

I think it would even result in any addition with a term ≤ 3 and a result > 3 coming out to exactly 3, unless you have some sort of rule by which S(SSS0) sometimes becomes SSSS0 instead of SSS0.

Note also that an enterprising soul can line up the two steps:

S0 + SSS0
 0 + SSS0

And notice that they are confused: the SSS0's are identical even though they shouldn't be, since the rule applied was Sx + y = x + Sy and Sy ≠ y.

A brain hack that made all of this work is surely possible, of course, but it seems like it would have to be a bit more systematic.
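A quick way to test this is to model the hack directly. Here is a minimal Python sketch, where I model the hack as "prepending an S to SSS0 yields SSS0" (a modelling choice of mine, not something specified in the thread):

    def hacked_S(y):
        # The hacked brain sees S applied to SSS0 as still being SSS0.
        return y if y == "SSS0" else "S" + y

    def hacked_add(x, y):
        # Apply Sx + y = x + Sy repeatedly, then 0 + y = y, but every
        # "move the S to the right" goes through the hacked successor.
        while x != "0":
            x = x[1:]
            y = hacked_S(y)
        return y

    print(hacked_add("SS0", "SS0"))    # SSS0 -- 2 + 2 comes out to 3
    print(hacked_add("SS0", "SSS0"))   # SSS0 -- 2 + 3 also comes out to 3

Under this particular modelling, any sum whose right-hand accumulator passes through SSS0 saturates there, consistent with the "exactly 3" prediction; a right-hand term that starts above 3 (e.g. S0 + SSSS0) is unaffected, so the saturation is somewhat narrower than "any term ≤ 3".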

Comment author: dxu 22 March 2015 05:57:42AM *  0 points

I see a lot of people arguing that "2", "3", "+", and "=" are defined in terms of the Peano axioms, and as such, aren't actually relevant to the behavior of physical objects. They say that the axioms pin down the numbers, regardless of how physical objects behave or start behaving.

But the Peano axioms use something called a "successor" to generate the natural numbers. And how do we figure out what the successors are? Well, one notation is to prefix an "S" to the previous number to indicate that number's successor, such that the successor of "SS0" is "SSS0". Then "+" would mean "apply the successor operation to the number on the right of this operator as many times as indicated by the number on the left", and "=" would indicate "these two numbers, when written out in successor notation, have the character 'S' occur the same number of times".

So how do we calculate "2 + 2"? Well, we count the occurrences of the character "S" in both numerals, then write that many "S" characters in front of a single "0": "SSSS0", which we interpret as "4". "2 + 2 = 4".
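As a minimal sketch, this counting rule is a couple of lines of Python (the function name is mine):

    def add_by_counting(x, y):
        # Count the S's in both numerals and write that many
        # S's in front of a single "0".
        return "S" * (x.count("S") + y.count("S")) + "0"

    print(add_by_counting("SS0", "SS0"))  # SSSS0, i.e. 2 + 2 = 4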

But now let's imagine your brain's visual cortex (or whatever it uses to visualize things) being messed up somehow. Actually, let's go further than that, and suppose everyone's visual cortexes got messed up in the same way, so that now, when we visualize two objects and two more objects, and put them together, we see, not four objects, but three. So if you were to try and visualize a group of two dots coming together with another group of two dots, you would end up seeing, in your mind's eye, a final group of three dots.

Now imagine counting the number of times "S" occurs in "SS0", then putting two of those groups of "S" characters together. How many "S" characters are in the final group? Visualize it, now...

Why, three. Of course there's three. If you take "SS" and put it with "SS", of course you get "SSS". What's that? You say you're getting "SSSS"? Where are you getting that extra "S" from? What? No--can't you see it? It's obviously three--how can you put those two groups together and get four?

So, "SS0 + SS0 = SSS0".

My point is this: you can't just point to the Peano axioms and say, "Ha! Your hypothetical situation involving the behavior of mere physical objects is meaningless before the might of my absolute mathematical assumptions!" Remember, when you try to perform "logic" on those axioms, you're still using your brain to do it. And however ivory-tower untouchable you imagine your axioms to be, your brain is a real, physical object performing real physical computations. If we lived in a slightly different universe, your brain would take one look at "SS0" and "SS0", visually add together the number of "S" characters, and see "SSS0". Anyone who saw "SSSS0" would be seen as crazy, or brain-damaged, or something.

Physics trumps math and logic.

Comment author: nshepperd 22 March 2015 12:27:48PM 0 points

Um, but the (+) operator in Peano arithmetic is actually defined in terms of Sx + y = x + Sy. It would be somewhat circular to "suggest counting the S's up" in a method of defining numbers, after all. So the way you calculate 2 + 2 is more like

SS0 + SS0

(thing on the left starts with an S)

S(S0) + SS0

(move the S to the right)

S0 + SSS0

(thing on the left starts with an S)

S(0) + SSS0

(move the S to the right)

0 + SSSS0

(eliminate "0 + " with axiom)

SSSS0

I imagine you'd need a rather more devious brain modification to prevent one from carrying out these steps correctly, in such a way that the result is SSS0.
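For concreteness, the whole derivation can be run mechanically; here is a minimal Python sketch (the string representation and function name are mine):

    def peano_add(x, y):
        # Sx + y = x + Sy: while the left term starts with an S,
        # strip it off and move it onto the right term.
        while x != "0":
            x = x[1:]
            y = "S" + y
        # 0 + y = y: eliminate "0 + " with the axiom.
        return y

    print(peano_add("SS0", "SS0"))  # SSSS0

Getting SSS0 out of this procedure would require corrupting one of those two mechanical steps, which is the sense in which the modification would have to be rather more devious.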

Comment author: shminux 09 March 2015 06:01:38AM *  1 point

It has been shown experimentally (by HP and DM) that magic is genetic, though LV/QQ might not know that. So, as long as her eye color remains the same, so will her magic.

Comment author: nshepperd 09 March 2015 06:31:49AM *  6 points

Since there are rituals that involve the permanent sacrifice of a "portion" of one's magic, it seems plausible that the Source of Magic has some sort of accounting system for this purpose, and that resurrecting someone would not normally restore the initial "balance" (which was presumably revoked when the Source detected their "death"), even if that initial balance is determined by your genetics.

In response to You Only Live Twice
Comment author: Jiro 22 December 2014 08:48:07PM 4 points

Cryonics is usually funded through life insurance. ... it doesn't take all that much money.

Insurance is a way to avoid catastrophic losses. It is not a way to reduce costs. On the average, an insurance company's customer will pay more in premiums than the amount paid out by the policy. If $X is too much money, $X is too much money even if paid by insurance.

I pay $180/year for more insurance than I need

If you're paying for more insurance than you need, and it's enough more to pay for $X worth of cryonics, it is also enough more to pay for $X of something else. Money is not free just because it comes out of waste; there is still the opportunity cost of not being able to use it for something else once you stop wasting it.

There are programs advertised to "securely erase" hard drives using many overwrites of 0s, 1s, and random data. But if you want to keep the secret on your hard drive secure against all possible future technologies that might ever be developed, then cover it with thermite and set it on fire. It's the only way to be sure.

Hard drives don't decay, not in the time period covered by the analogy. All that is erased is what you specifically erase. A proper analogy to what happens to the brain after death would be some process that affects all parts of the hard drive whether someone specifically chose them or not. Thermite is actually a pretty good one--death is a lot more like erasing a drive using thermite than erasing it by overwriting it with 0s and 1s.

I also see no reason why future technologies would be able to recover a drive overwritten with 0s and 1s. Erasure and recovery are asymmetrical; improvements in recovery methods cannot always compensate for improvements in erasure methods. If it's really erased, it's really erased.

Not signing up for cryonics - what does that say? That you've lost hope in the future. That you've lost your will to live. That you've stopped believing that human life, and your own life, is something of value. ... The first statement is about systematically valuing human life. ... The second statement is that you have at least a little hope in the future.

Notice something all these statements do? They imply that probabilities are irrelevant. You just need to have hope in the future--any nonzero quantity of hope will do, however small. You just need to value human life; the probability of actually getting that value doesn't matter. For all that proponents of cryonics claim they are not advocating Pascal's mugging, suggesting that people should buy cryonics on the grounds that it has some chance of letting you live--and that the size of that chance doesn't matter--is a recipe for Pascal's mugging.
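In plain expected-value terms (my notation, not the commenter's), a decision rule that respects probabilities says to sign up only if

    p × (value of revival) − (cost of signing up) > 0

where p is your probability that cryonics actually works. The Pascal's mugging move is to claim that any p > 0 suffices, which amounts to treating the value of revival as unbounded so that the product dominates for every nonzero p.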

In response to comment by Jiro on You Only Live Twice
Comment author: nshepperd 23 December 2014 01:34:54PM *  1 point

Thermite is actually a pretty good one--death is a lot more like erasing a drive using thermite than erasing it by overwriting it with 0s and 1s.

Just dying isn't much like erasing a drive with thermite. Damage from ischemia takes time. It's not like your brain instantly turns into pudding the minute the nearest doctor says "time of death". Now, dying and then rotting in the ground somewhere for 50 years is a lot more like erasing a drive using thermite than overwriting it. That's the point of cryonics.

Edit RE insurance:

Insurance is a way to avoid catastrophic losses. It is not a way to reduce costs. On the average, an insurance company's customer will pay more in premiums than the amount paid out by the policy. If $X is too much money, $X is too much money even if paid by insurance.

Of course this is all true. However, in the case of life insurance it is also a way to offload the expense onto your future self, who presumably has more income than you do now. If I had to pay the whole amount upfront, it would certainly be impossible for me to get cryonics at my current age.

Actually, now that I think about it, it is potentially not true that you would pay more in premiums than the payout, since insurance companies can make a profit on people who let their insurance lapse before dying (which is apparently quite frequent in life insurance). Picking two random life insurance companies' websites, it looks like a healthy human of my age could pay as little as 75% of the payout in premiums, assuming a life expectancy of 70 years.
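As a back-of-the-envelope check of that ratio (every figure below is a hypothetical placeholder, not a quote from any insurer):

    annual_premium = 400         # assumed level annual premium, in dollars
    payout = 25000               # assumed death benefit, in dollars
    years_paying = 70 - 25       # paying from age 25 to a life expectancy of 70

    total_premiums = annual_premium * years_paying
    print(total_premiums / payout)   # 0.72 -- roughly 72% of the payout

So a premiums-to-payout ratio around 75% is arithmetically plausible for someone who starts young, even before accounting for lapses.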

Comment author: Lumifer 17 December 2014 07:21:12PM 2 points

so temperature is in the mind

I am not quite sure in which way this statement is useful.

"..and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing." -- Douglas Adams

Comment author: nshepperd 18 December 2014 06:23:09AM 1 point

I am not quite sure in which way this statement is useful.

Is that because you didn't read the rest of the post?

"Temperature is in the mind" doesn't mean that you can make a cup of water boil just by wishing hard enough. It means that whether or not you should expect a cup of water to boil depends on what you know about it.

(It also doesn't mean that whether an ice cube melts depends on whether anyone's watching. The ice cube does whatever the ice cube does in accordance with its initial conditions and the laws of mechanics.)

Comment author: DanielLC 18 December 2014 04:18:47AM 2 points

Average kinetic energy always corresponds to average kinetic energy, and the amount of energy it takes to create a marginal amount of entropy always corresponds to the amount of energy it takes to create a marginal amount of entropy. Each definition corresponds perfectly to itself all of the time, and applies to the other in the case of idealized objects. How is one more general?

Comment author: nshepperd 18 December 2014 06:15:54AM *  1 point

Two systems with the same "average kinetic energy" are not necessarily in equilibrium. Sometimes energy flows from a system with lower average kinetic energy to a system with higher average kinetic energy (eg. real gases with different degrees of freedom). Additionally, "average kinetic energy" is not applicable at all to some systems, eg. an Ising magnet.
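For reference, the more general definition being pointed at here is the thermodynamic one (standard notation, not from the comment):

    1/T = ∂S/∂E    (at fixed volume and particle number)

Energy flows spontaneously into whichever system gains more entropy per unit of energy, regardless of average kinetic energies. For an ideal monatomic gas this reduces to <E_kin> = (3/2) k_B T per particle, which is exactly why the kinetic-energy definition works for idealized objects but fails in general.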
