Ah, I see what you mean. I don't think one has to believe in objective morality as such to agree that "morality is the godshatter of evolution". Moreover, I think it's pretty key to the "godshatter" notion that our values have diverged from evolution's "value", and we now value things "for their own sake" rather than for their benefit to fitness. As such, I would say that the "godshatter" notion opposes the idea that "maladaptive is practically the definition of immoral", even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.
Disagree? What do you mean by this?
Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, I have no need to adhere to the process that (roughly speaking) selected those values, rather than to the values themselves, in cases where the values turn out to be maladaptive.
Fair question! I phrased it a little flippantly, but it was a sincere sentiment - I've heard somewhere or other that receiving a prosthetic limb results in a decrease in empathy (something to do with becoming detached from the physical world), and this ties in intriguingly with the sci-fi trope about cyborging being dehumanizing.
A GAI with [overwriting its own code with an arbitrary value] as its only goal, for example: why would that be impossible? An AI doesn't need to value survival.
A GAI with the utility of burning itself? I don't think that's viable, no.
What do you mean by "viable"? You think it is impossible, due to Gödelian concerns, for there to be an intelligence that wishes to die?
As a curiosity, this sort of intelligence came up in a discussion I was having on LW recently. Someone said "why would an AI try to maximize its original utility function, i...
I don't care what other people are convinced of.
When you said above that status was the real reason LW-associates oppose legal polygamy, you were implying that these people are not actually convinced by these arguments, or only pretend to care about them for status reasons.
I'm in a happy polygamous relationship and I know I'm not the only one.
Certainly! I'd like to clarify that I don't think polyamory is intrinsically oppressive, and that I am on the whole pretty darn progressive (philosophically) regarding sexual / relationship rights etc. (That is, I th...
Looks like there are a few PC input devices on the market that read brain activity in some way. The example game above sounds like this Star Wars toy.
Regarding your example, I think what Mills is saying is probably a fair point - or rather, it's probably a gesture towards a fair point, muddied by rhetorical constraints and perhaps misunderstanding of probability. It is very difficult to actually get good numbers to predict things outside of our past experience, and so probability as used by humans to decide policy is likely to have significant biases.
I've certainly heard the argument that polygamy is tied into oppressive social structures, and therefore legitimizing it would be bad.
The same argument can be, and has been, applied to other kinds of marriage.
On the one hand, the argument doesn't need to be correct to be the (or a) real reason. On the other, I'd expect more people to be more convinced that polygamy is more oppressive (as currently instantiated) than vanilla marriage (and other forms, such as arranged marriages or marriage of children to adults, are probably more strongly opposed).
thus we tend to see forbidding that as a bad idea.
ITYM 'good'?
I've certainly heard the argument that polygamy is tied into oppressive social structures, and therefore legitimizing it would be bad. Would you say this is rationalization?
FWIW I'm very skeptical of the whole "status explains everything" notion in general.
or that polyamory is when it's done by fashionable white people, and polygamy is when it's done by weird brown foreigners
I thought it was "polyamory is when it's done by New Yorkers (Californians?), polygamy is when it's done by Utahns," and weird brown people have harems and concubines instead.
(Though of course I also don't think this is a fair characterization)
...Suppose an AI were to design and implement more efficient algorithms for processing sensory stimuli? Or add a "face recognition" module when it determines that this would be useful for interacting with humans?
The ancient Greeks developed methods for improved memorization. It has been shown that human-trained dogs and chimps are more capable of human-face recognition than others of their kind. None of them were artificial (discounting selective breeding in dogs and Greeks).
It seems that you should be able to write a simple program that o
Thanks for challenging my position. This discussion is very stimulating for me!
It's a pleasure!
...Sure, but we could imagine an AI deciding something like "I do not want to enjoy frozen yogurt", and then altering its code in such a way that it is no longer appropriate to describe it as enjoying frozen yogurt, yeah?
I'm actually having trouble imagining this without anthropomorphizing (or at least zoomorphizing) the agent. When is it appropriate to describe an artificial agent as enjoying something? Surely not when it secretes serotonin into
knowing the value of Current Observation gives you information about Future Decision.
Here I'd just like to note that one must not assume all subsystems of Current Brain remain constant over time. And what if the brain is partly a chaotic system? (AND new information flows in all the time...) Sorry, I cannot condone this model as presented.
Well... okay, but the point I was making was milder and pretty uncontroversial. Are you familiar with Bayesian networks?
...Perhaps it can observe your neurochemistry in detail and in real time.
I already mentioned
Ah! Sorry for the mixed-up identities. Likewise, I didn't come up with that "51% chance to lose $5, 49% chance to win $10000" example.
But, ah, are you retracting your prior claim about a variance of greater than 5? Clearly this system doesn't work on its own, though it still looks like we don't know A) how decisions are made using it or B) under what conditions it works. Or in fact C) why this is a good idea.
Certainly for some distributions of utility, if the agent knows the distribution of utility across many agents, it won't make the wrong dec...
But the median outcome is losing 5 utils?
Edit: Oh, wait! You mean the median total utility after some other stuff happens (with a variance of more than 5 utils)?
Suppose we have 200 agents, 100 of which start with 10 utils and the rest with 0. After taking this offer, we have 51 with -5, 51 with 5, 49 with 10000, and 49 with 10010. The median outcome would be a final total of -5 utils for half the agents and +5 for the other half, but only the half that would lose could actually get that outcome...
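To spell that out as a quick sketch (the numbers are just the ones from the example above, with the 51%/49% outcomes assigned in exact proportion rather than sampled):

    from collections import Counter

    # 200 agents: half start with 10 utils, half with 0.
    # The offer: 51% chance of losing 5 utils, 49% chance of winning 10000.
    groups = {0: 100, 10: 100}      # starting utils -> number of agents

    final = []
    for start, n in groups.items():
        losers = round(n * 0.51)    # 51 agents per group lose 5
        winners = n - losers        # 49 agents per group win 10000
        final += [start - 5] * losers + [start + 10000] * winners

    print(Counter(final))
    # -> Counter({-5: 51, 5: 51, 10000: 49, 10010: 49})

    # Ex ante, each individual agent's median outcome is the 51% branch
    # ("lose 5"): a final total of -5 for agents starting at 0, and +5 for
    # agents starting at 10 - even though 49% of them end up near 10000.
    for start in groups:
        print(start, "->", start - 5)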
And what do you mean by "the possibility of getting tortured will manifest...
...You are saying that a GAI being able to alter its own "code" on the actual code-level does not imply that it is able to alter in a deliberate and conscious fashion its "code" in the human sense you describe above?
I am saying pretty much exactly that. To clarify further, the words "deliberate", "conscious" and "wants" again belong to the level of emergent behavior: they can be used to describe the agent, not to explain it (what could not be explained by "the agent did X because it wanted to"?).
Unlike my (present) traits, my future decisions don't yet exist, and hence cannot leak anything or become entangled with anyone.
Your future decisions are entangled with your present traits, and thus can leak. If you picture a Bayesian network with the nodes "Current Brain", "Future Decision", and "Current Observation", with arrows from Current Brain to the two other nodes, then knowing the value of Current Observation gives you information about Future Decision.
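If it helps, here's a toy numerical version of that network (all the probabilities are made up purely for illustration - the point is just that conditioning on Current Observation shifts the probability of Future Decision even though neither causes the other):

    # Fork structure: Current Brain -> Future Decision,
    #                 Current Brain -> Current Observation.
    p_brain = {"one_boxer_type": 0.5, "two_boxer_type": 0.5}

    # P(some observable "tell" | brain type):
    p_obs_given_brain = {"one_boxer_type": 0.9, "two_boxer_type": 0.2}
    # P(will one-box in the future | brain type):
    p_decision_given_brain = {"one_boxer_type": 0.95, "two_boxer_type": 0.05}

    # Prior on the future decision (marginalizing out the brain):
    prior = sum(p_brain[b] * p_decision_given_brain[b] for b in p_brain)

    # Posterior after conditioning on the observation:
    evidence = sum(p_brain[b] * p_obs_given_brain[b] for b in p_brain)
    posterior = sum(p_brain[b] * p_obs_given_brain[b] * p_decision_given_brain[b]
                    for b in p_brain) / evidence

    print(f"P(will one-box) before observing: {prior:.3f}")      # 0.500
    print(f"P(will one-box) after observing:  {posterior:.3f}")  # ~0.786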
Obviously the alien is better than a human at running this game...
Having asserted that your claim is, in fact, new information
I wouldn't assert that. I thought I was stating the obvious.
Yes, I think I misspoke earlier, sorry. It was only "new information" in the sense that it wasn't in that particular sentence of Eliezer's - to anyone familiar with discussions of GAI, your assertion certainly should be obvious.
You are saying that a GAI being able to alter its own "code" on the actual code-level does not imply that it is able to alter in a deliberate and conscious fashion its "code" in the human sense you describe above?
Generally GAIs are ascribed extreme powers around here - if it has low-level access to its code, then it will be able to determine how its "desires" derive from this code, and will be able to produce whatever changes it wants. Similarly, it will be able to hack human brains with equal finesse.
Not meant as an attack. I'm saying, "to be fair it didn't actually say that in the original text, so this is new information, and the response is thus a reasonable one". Your comment could easily be read as implying that this is not new information (and that the response is therefore mistaken), so I wanted to add a clarification.
But 'value is fragile' teaches us that it can't be a 1-dimensional number like the reals.
This is not in fact what "value is fragile" teaches us, and it is false. Without intending offense, I recommend you read about utility a bit more before presenting any arguments about it here, as it is in fact a 1-dimensional value.
What you might reasonably conclude, though, is that utility is a poor way to model human values, which, most of the time, it is. Still, that does not invalidate the results of properly-formed thought experiments.
To be fair, when structured as
Sadly, we humans can't rewrite our own code, the way a properly designed AI could.
then the claim is in fact "we humans can't rewrite our own code (but a properly designed AI could)". If you remove a comma:
Sadly, we humans can't rewrite our own code the way a properly designed AI could.
only then is the sentence interpreted as you describe.
The power, without further clarification, is not incoherent. People predict the behavior of other people all the time.
Ultimately, in practical terms, the point is that the best thing to do is "be the sort of person who picks one box, then pick both boxes" - but the way to be the sort of person who picks one box is to pick one box, because your future decisions are entangled with your traits, which can leak information and thus become entangled with other people's decisions.
Well, it's a thought experiment, involving the assumption of some unlikely conditions. I think the main point of the experiment is the ability to reason about what decisions to make when your decisions have "non-causal effects" - there are conditions that will arise depending on your decisions, but that are not caused in any way by the decisions themselves. It's related to Kavka's toxin and Parfit's hitchhiker.
Well, try using numbers instead of saying something like "provided luck prevails".
If p is the chance that Omega predicts you correctly, then the expected value of selecting one box is:
1,000,000(p) + 0(1-p)
and the expected value of selecting both is:
1,000(p) + 1,001,000(1-p)
So selecting both boxes only has a higher expected value if Omega guesses wrong about half the time or more.
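Or, as a quick script (same formulas as above; the listed values of p are just illustrative, and the 0.5005 break-even falls out of setting the two expressions equal):

    # Expected value of each choice as a function of p, the probability
    # that Omega predicts correctly.
    def ev_one_box(p):
        return 1_000_000 * p + 0 * (1 - p)

    def ev_both_boxes(p):
        return 1_000 * p + 1_001_000 * (1 - p)

    for p in (0.99, 0.9, 0.5005, 0.5, 0.4):
        print(f"p = {p}: one box = {ev_one_box(p):,.0f}, both = {ev_both_boxes(p):,.0f}")

    # Setting the two equal: 1,000,000p = 1,000p + 1,001,000(1 - p),
    # so p = 1,001,000 / 2,000,000 = 0.5005 - two-boxing only wins in
    # expectation when Omega is right less than 50.05% of the time.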
I read this as "people who aren't ( (clownsuit enjoyers) and (autistic) ) ...", but it looks like others have read it as "people who aren't (clownsuit enjoyers) and aren't (autistic)" = "people who aren't ( (clownsuit enjoyers) or (autistic) )", which might be the stricter literal reading. Would you care to clarify which you meant?
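In boolean terms (the predicate names are just placeholders), the two readings come apart whenever exactly one of the two properties holds:

    # Reading 1: "aren't (clownsuit enjoyers AND autistic)"
    def reading_1(clownsuit_enjoyer, autistic):
        return not (clownsuit_enjoyer and autistic)

    # Reading 2: "aren't clownsuit enjoyers AND aren't autistic",
    #            i.e. "aren't (clownsuit enjoyers OR autistic)" by De Morgan
    def reading_2(clownsuit_enjoyer, autistic):
        return (not clownsuit_enjoyer) and (not autistic)

    # The readings disagree on the two mixed cases:
    for c in (False, True):
        for a in (False, True):
            print(c, a, reading_1(c, a), reading_2(c, a))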
It certainly could be - I read the anecdote from a book I picked idly off a shelf in a bookstore, and I retained the vague impression that it was from a book about the importance of social factors and the effects of technology on our social/psychological development, but I could have been conflating it with another such book. After reading an excerpt from "The Boy Who Was Raised as a Dog", the style matches, so that probably was the one I read. Would you recommend it?
I heard a horror story (anecdote from a book, for what it's worth) of a child basically raised in front of a TV, who learned from it both language and a general rule that the world (and social interaction) is non-interactive. If you could get his attention, he'd cheerfully recite some memorized lines then zone out.
My take on it is - "rationality" isn't the point. Don't try to do things "rationally" (as though it's a separate thing), try to do them right.
It's actually something we see with the nuts that occasionally show up here - they're obsessed with the notion of rationality as a concrete process or something, insisting (e.g.) that we don't need to look at the experimental evidence for a theory if it is "obviously false when subjected to rational thought", or that it's bad to be "too rational".
I agree with your analysis, and further:
Gurer ner sbhe yvarf: gur gjb "ebgngvat" yvarf pbaarpgrq gb gur pragre qbg, naq gur gjb yvarf pbaarpgrq gb gur yrsg naq evtug qbgf. Gur pragre yvarf fgneg bhg pbaarpgrq gb gur fvqr qbgf, gura ebgngr pybpxjvfr nebhaq gur fdhner. Gur bgure yvarf NYFB ebgngr pybpxjvfr: gur yrsg bar vf pragrerq ba gur yrsg qbg, naq ebgngrf sebz gur pragre qbg qbja, yrsg, gura gb gur gbc, gura evtug, gura onpx gb gur pragre. Gur evtug yvar npgf fvzvyneyl.