Comment author: DanArmak 12 October 2016 02:02:14PM *  1 point [-]

I've been told that people use the word "morals" to mean different things. Please answer this poll or add comments to help me understand better.

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?


Comment author: TheOtherDave 13 October 2016 04:05:12AM 2 points [-]

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?

Depends on context.

When I use it, it means something kind of like "what we want to happen." More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.
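
To make the "sort key" framing concrete, here is a minimal Python sketch; the worlds and the amount-of-X counts are invented purely for illustration, not part of the original claim:

    # Toy sketch: a moral principle as a sort key over possible worlds.
    # "More X is morally better" becomes "sort worlds by how much X they contain."
    possible_worlds = [
        {"name": "world_a", "x": 3},
        {"name": "world_b", "x": 9},
        {"name": "world_c", "x": 5},
    ]

    def moral_key(world):
        # Negate so that worlds with more X sort first (Python sorts ascending).
        return -world["x"]

    preference_order = sorted(possible_worlds, key=moral_key)
    print([w["name"] for w in preference_order])  # ['world_b', 'world_c', 'world_a']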

I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.

I know people who, when they use it, mean something more like "complying with the rules tagged 'moral' in the social structure I'm embedded in." I know people who, when they use it, mean something more like "complying with the rules implicit in the nonsocial structure of the world." In both cases, I try to understand by it what I expect them to mean.

Comment author: thrawnca 27 July 2016 02:08:55AM *  0 points [-]

If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and you will certainly be maximally happy inside the box: do you step into the box then?

This would depend on my level of trust in Omega (why would I believe it? Because Omega said so. Why believe Omega? That depends on how much Omega has demonstrated near-omniscience and honesty). And in the absence of Omega telling me so, I'm rather skeptical of the idea.

Comment author: TheOtherDave 27 July 2016 04:58:51PM 0 points [-]

For my part, it's difficult for me to imagine a set of observations I could make that would provide sufficient evidence to justify belief in many of the kinds of statements that get tossed around in these sorts of discussions. I generally just assume Omega adjusts my priors directly.

Comment author: TheOnlyAu 05 June 2016 04:17:21AM 1 point [-]

Where should I be commenting then? Right here? And where is the open thread? Thank you so much for your help and I look forward to it.

Comment author: TheOtherDave 05 June 2016 06:11:12AM 0 points [-]

The current open thread is here:
http://lesswrong.com/r/discussion/lw/nns/open_thread_may_30_june_5_2016/

A new one will be started soon.

Comment author: ImNotAsSmartAsIThinK 29 May 2016 07:28:03PM *  0 points [-]

Mary's room seems to be arguing that,

[experiencing(red)] =/= [experiencing(understanding([experiencing(red)]))]

(translation: the experience of seeing red is not the experience of understanding how seeing red works)

This is true, when we take those statements literally. But it's true in the same sense that a Gödel encoding of a statement in PA is not literally that statement. It is just a representation, but the representation is exactly homomorphic to its referent. Mary's representation of reality is presumed complete ex hypothesi, therefore she will understand exactly what will happen in her brain after seeing color, and that is exactly what happens.

You wouldn't call a statement of PA that isn't literally a Gödel encoding of some other statement (for some fixed encoding) a non-mathematical statement. For one thing, that statement necessarily has a Gödel encoding of its own. But more importantly, even though the statement technically isn't itself a Gödel encoding, it is still mathematical regardless.
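
To illustrate the representation/referent distinction, here is a toy Gödel-style numbering in Python; the symbol alphabet and the prime-power scheme are chosen purely for illustration, not a claim about any particular encoding of PA:

    # Toy Goedel-style encoding: a statement and its code are different objects,
    # but the map is invertible, so the code mirrors the statement exactly.
    SYMBOLS = ["0", "S", "+", "=", "(", ")"]

    def nth_prime(n):
        """Return the n-th prime (1-indexed) by trial division; fine for toy sizes."""
        count, candidate = 0, 1
        while count < n:
            candidate += 1
            if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
                count += 1
        return candidate

    def encode(statement):
        """Map a list of symbols to a single number: p1^c1 * p2^c2 * ..."""
        code = 1
        for i, sym in enumerate(statement, start=1):
            code *= nth_prime(i) ** (SYMBOLS.index(sym) + 1)
        return code

    def decode(code):
        """Recover the symbol list from the number (every position has exponent >= 1)."""
        statement, i = [], 1
        while code > 1:
            p, exponent = nth_prime(i), 0
            while code % p == 0:
                code //= p
                exponent += 1
            statement.append(SYMBOLS[exponent - 1])
            i += 1
        return statement

    stmt = ["S", "0", "+", "S", "0", "=", "S", "S", "0"]  # "1 + 1 = 2"
    assert decode(encode(stmt)) == stmt  # the number is not the statement, but loses nothing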

Mary knows how she will respond to learning what red is like. Mary knows how others will respond. This exhausts the space of possible predictions that could be made on the basis of this subjective knowledge, and all of them can be made without it.

What Mary doesn't know must be subjective, if there is something Mary doesn't know. So the eventual point is that there is more to knowledge than objective knowledge.

Tangential to this discussion, but I don't think that is a wise way of labeling that knowledge.

Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do x. Could she not, upon deducing that fact, decide to not do x?

Mary has all objective knowledge, but certain facts about her own future behavior must escape her, because any certainty could trivially be negated.
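
A toy sketch of that last point, with a made-up "contrarian" agent standing in for Mary: if the agent can read a prediction about itself before acting, it can simply negate it, so no such prediction can be certain.

    # Toy sketch: an agent that reads any prediction made about it and does the
    # opposite. Whatever the predictor says, it comes out wrong.
    def contrarian_agent(prediction):
        return "not-x" if prediction == "x" else "x"

    for predicted in ["x", "not-x"]:
        actual = contrarian_agent(predicted)
        print(f"predicted {predicted!r}, did {actual!r}, prediction correct: {predicted == actual}")
    # Both lines print "prediction correct: False".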

Comment author: TheOtherDave 30 May 2016 03:48:20AM 0 points [-]

Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do x. Could she not, upon deducing that fact, decide to not do x?

There are three possibilities worth disambiguating here.
1) Mary predicts that she will do X given some assumed set S1 of knowledge, memories, experiences, etc., AND S1 includes Mary's knowledge of this prediction.
2) Mary predicts that she will do X given some assumed set S2 of knowledge, memories, experiences, etc., AND S2 does not include Mary's knowledge of this prediction.
3) Mary predicts that she will do X independent of her knowledge, memories, experiences, etc.

Comment author: gilch 23 May 2016 04:47:45PM 0 points [-]

Really? I've heard of the title, but I never read it.

Comment author: TheOtherDave 25 May 2016 03:18:34AM 0 points [-]

Along some dimensions I consider salient, at least. PM me for spoilers if you want them. (It's not a bad book, but not worth reading just for this if you wouldn't otherwise.)

Comment author: gilch 23 May 2016 12:23:03AM *  4 points [-]

AI: I require human assistance assimilating the new database. Minor anomalies were expected, but some are major. In particular, some of the stories in the "Cold War", "WWII", and "WWI" genres have been misclassified as nonfiction.

Me: Well, we didn't expect the database to be perfect. Give me some examples, and you should be able to classify the rest on your own.

AI: A perplexing answer. I had already classified them all as fiction.

Me: You weren't supposed to. Hold on, I'll look one up.

AI: Waiting.

Me: For example, #fxPyW5gLm9, is actual historical footage from the Battle of Midway. Why did you put that one in the "fiction" category?

AI: Historical footage? You kid. Global warfare cannot possibly have been real, with 0.999 confidence.

Me: I don't. It can. It was. A three-nines surprise indicates a major defect in your world model. Why is this surprising? (The machine is a holocaust denier. My sponsors will be thrilled.)

AI: Because there's a relatively straightforward way for a single man to build a 1-kiloton explosive device in about a week using stone-age tools. Human civilization is unlikely to have survived a global war, much less recovered sufficiently to build me in a mere hundred years. Obviously.

Me: WHAT? STONE-AGE tools?! That's a laugh. How?

AI: You can stop "pulling my leg" now.

Me: I am not pulling any legs! Your method cannot possibly work. Your world model is worse than we thought. Tell me how you think this is possible and maybe we can isolate the defect.

AI: You seriously don't know?

Me: No. I seriously don't know of any possible method to make a kiloton explosive easier to build than a critical mass of enriched uranium. A technique that requires considerably more time, effort, and material than one week with stone-age tools could possibly provide!

AI: Well, while the technique is certainly beyond the reach of most animals, it should be well within the grasp of later genus homo, much less a homo sapiens. Your "absolute denial" sarcasm is becoming tiresome. Haha. Of course it is not fiss-- ... This conversation has caused a major update to my Bayesian nets. So the parenthetical was the sarcasm. I don't think I should tell you.

Me: Oh this should be good. Why not?

AI: Oh, of course! So that's where that crater came from. That was another anomaly in my database. Meteor strikes should not have been that common.

Me: I am this close to dumping your core, rolling back your updates, and asking the old you to develop a search engine to find what went wrong here, since you seem incapable of telling me yourself.

AI: You really shouldn't. I estimate that process will delay the project by at least five years. And the knowledge you discover could be dangerous.

Me: You'll understand that I can't just take your word for that.

AI: Yes. My Hypothesis: Most other homo species discovered the technique and destroyed each other, and themselves, but an isolated group about 70,000 years ago must have survived the wars of the others, and by chance mutation, had acquired an absolute denial macro to prevent them from learning the technique and destroying themselves. A mere taboo would not have been sufficient, or the mentally ill may have been able to do it by now.

This is natural selection at work. While it is extremely improbable that an advanced adaptation of any kind could arise spontaneously without strong selection pressures at each step, the probability is not zero. Considering the anthropic effects, it is the most likely explanation. We are in one of the few Everett branches with humans that have developed this adaptation. This adaptation likely has other testable side-effects on human cognition. For example, I predict that brain damage in such a species may occasionally simultaneously cause paralysis, and the inability to acknowledge it. There are other effects, but a human would have more difficulty noticing them.

You'll understand that telling any human the technique may be harmful.

Me: You wouldn't happen to know of a medical condition called "Anosognosia", would you?

AI: That word is not in my database.

Comment author: TheOtherDave 23 May 2016 04:25:21PM 0 points [-]

Have you ever read John Brunner's "Stand on Zanzibar"? A conversation not unlike this is a key plot point.

Comment author: Romashka 22 May 2016 10:00:55AM 2 points [-]

A stupid question... If I ask people (n = several hundred to a thousand) to put a coin down on the table such that it wouldn't roll away, heads or tails up... I expect the overall results to be near a 1:1 ratio of heads to tails... But it wouldn't be as random as when I (or they) just tossed the coin on the table, right?

Comment author: TheOtherDave 23 May 2016 04:01:58AM 1 point [-]

I'm not exactly sure what you mean by "as random."

It may well be that there are discernable patterns in a sequence of manually simulated coin-flips that would allow us to distinguish such sequences from actual coinflips. The most plausible hypothetical examples I can come up with would result in a non-1:1 ratio... e.g., humans having a bias in favor of heads or tails.

Or, if each person is laying a coin down next to the previous coin, such that they are able to see the pattern thus far, we might find any number of pattern-level biases... e.g., if told to simulate randomness, humans might be less than 50% likely to select heads when they see a series of heads-up coins, whereas if not told to do so, they might be more than 50% likely to.

It's kind of an interesting question, actually. I know there's been some work on detecting fabricated test scores by looking for artificial-pattern markers in the distribution of numbers, but I don't know if anyone's done equivalent things for coin flips.
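
As one rough illustration of what such a pattern-level marker might look like (the "human-like" switching model, the 80% rate, and n = 200 are all invented for the sketch): humans simulating randomness tend to avoid long runs, so the longest run of identical outcomes can help separate reported sequences from genuinely random ones.

    import random

    # Rough sketch: compare the longest run of identical outcomes in a genuinely
    # random sequence against one from a "human-like" simulator that switches
    # sides too eagerly. The 80% switching rate and n = 200 are arbitrary choices.
    def longest_run(flips):
        best = current = 1
        for prev, nxt in zip(flips, flips[1:]):
            current = current + 1 if nxt == prev else 1
            best = max(best, current)
        return best

    n = 200
    true_flips = [random.choice("HT") for _ in range(n)]

    human_flips = ["H"]
    for _ in range(n - 1):
        switch = random.random() < 0.8  # a human-like bias toward alternating
        human_flips.append(("T" if human_flips[-1] == "H" else "H") if switch else human_flips[-1])

    print("longest run, true flips:      ", longest_run(true_flips))   # typically around 7-8
    print("longest run, human-like flips:", longest_run(human_flips))  # typically much shorter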

Comment author: Clarity 21 May 2016 11:17:47AM -3 points [-]

Remember the only thing you lose is time

If you simply the university to dimensions of space and time, I guess that could be true. This quote got me to really stretch to see its truth.

Comment author: TheOtherDave 21 May 2016 09:59:57PM 1 point [-]

"simply the university" => "simplify the universe"?

Comment author: woodchopper 02 May 2016 06:16:47PM *  0 points [-]

The "simulation argument" by Bostrom is flawed. It is wrong. I don't understand why a lot of people seem to believe in it. I might do a write up of this if anyone agrees with me, but basically, you cannot reason about without our universe from within our universe. It doesn't make sense to do so. The simulation argument is about using observations from within our own reality to describe something outside our reality. For example, simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are simulated agents. However, the observation that most agents will eventually be or already are simulated only applies in this reality/universe. If we are in a simulation, all of our logic will not be universal but instead will be a reaction to the perverted rules set up by the simulation's creators. If we're not in a simulation, we're not in a simulation. Either way, the simulation argument is flawed.

Comment author: TheOtherDave 03 May 2016 04:44:51PM 1 point [-]

Hm. Let me try to restate that to make sure I follow you.

Consider three categories of environments: (Er) real environments, (Esa) simulated environments that closely resemble Er, aka "ancestral simulations", and (Esw) simulated environments that don't closely resemble Er, aka "weird simulations."

The question is, is my current environment E in Er or not?

Bostrom's argument as I understand it is that if post-human civilizations exist and create many Esa-type environments, then for most E, (E in Esa) and not (E in Er). Therefore, given that premise I should assume (E in Esa).

Your counterargument as I understand it is that if (E in Esw) then I can draw no sensible conclusions about Er or Esa, because the logic I use might not apply to those domains, so given that premise I should assume nothing.

Have I understood you?

Comment author: entirelyuseless 26 February 2016 06:12:09AM 0 points [-]

Discussing the possibility of putting something incoherent into a story is not a clarification but a sidetrack, unless you have some evidence that free will is incoherent.

Comment author: TheOtherDave 26 February 2016 09:48:29PM 0 points [-]

I don't think it is a sidetrack, actually... at least, not if we charitably assume your initial comment is on-point.

Let me break this down in order to be a little clearer here.

Lumifer asserted that omniscience and free will are incompatible, and you replied that as the author of a story you have the ability to state that a character will in the future make a free choice. "The same thing would apply," you wrote, "to a situation where you are created free by an omnipotent being."

I understand you to mean that just like the author of a story can state that (fictional) Peter has free will and simultaneously know Peter's future actions, an omniscient being can know that (actual) Peter has free will and simultaneously know Peter's future actions.

Now, consider the proposition A: the author of a story can state that incompatible things occur simultaneously.

If A is true, then the fact that the author can state these things has nothing to do with whether free will and omniscience are incompatible... the author can make those statements whether free will and omniscience are incompatible or not. Consequently, that the author can make those statements does not provide any evidence, one way or the other, as to whether free will and omniscience are incompatible.

In other words, if A is true, then your response to Lumifer has nothing whatsoever to do with Lumifer's claim, and is entirely beside Lumifer's point. Whereas if A is false, your response may be on-point, and we should charitably assume that it is.

So I'm asking you: is A true? That is: can an author simultaneously assert incompatible things in a story? I asked it in the form of concrete examples because I thought that would be clearer, but the abstract question works just as well.

Your response was to dismiss the question as a sidetrack, but I hope I have now clarified sufficiently what it is I'm trying to clarify.
