Comment author: Salemicus 03 June 2015 07:07:45PM 2 points [-]

Excellent post.

Related: It only takes a small extension of the logic to show that the Just World Hypothesis is a useful heuristic.

Comment author: Lightwave 04 June 2015 08:03:17AM 4 points [-]

It only takes a small extension of the logic to show that the Just World Hypothesis is a useful heuristic.

I don't see it, how is it useful?

Comment author: sixes_and_sevens 03 August 2014 06:35:17PM 1 point [-]

It'd be UK-specific, but yes I can.

Comment author: Lightwave 24 October 2014 07:38:49AM 0 points [-]

Hey, is there a write-up of the UK-specific stuff for people who weren't able to attend?

Comment author: solipsist 17 July 2014 02:00:08AM 7 points [-]

For example, insisting that destructive uploaders are perfectly okay with no downside to the person stepping inside one. I finally decided to update and rate more likely the possibility that others do not experience consciousness in the same way I do.

I'm inclined to disagree, but you might be one level beyond me. I believe many people empathize with a visceral sense of horror about (say) destructive teleportation, but intellectually come to the conclusion that those anxieties are baseless. These people may argue in a way that appears dense, but they are actually using second-level counterarguments. But perhaps you actually have counter-counter arguments, and I would appear to be dense when discussing those.

Argument in a nutshell:

Sleep might be a Lovecraftian horror. As the light in front of you dims, your thoughts become more and more disorganized, and your sense of self fades until the continuation of consciousness that is you ceases to exist. A few hours later someone else wakes up who thinks that they were you. But they are not you. Every night billions of day-old consciousnesses die, replaced the next morning with billions more, deluded by borrowed memories into believing that they will live for more than a few hours. After you next go to sleep, you will never see colors again.

People who have never slept would be terrified of sleeping. People who have never teleported are terrified of teleporting. The two fears are roughly equal in merit.

Comment author: Lightwave 17 July 2014 08:13:54AM 2 points [-]

Sleep might be a Lovecraftian horror.

Going even further, some philosophers suggest that consciousness isn't even continuous, e.g. as you refocus your attention, or as you blink, there are gaps that we don't notice. Just as there are gaps in your vision when you move your eyes from one place to another, yet to you it appears as a continuous experience.

Comment author: James_Miller 14 July 2014 01:47:42AM 4 points [-]

I was thinking more "What is the error rate in replication experiments when we know the results from the original experiment were correct?" So if mixing X and Y under certain conditions has to yield Z, how often when scientists actually try to do this do they get Z?

Comment author: Lightwave 14 July 2014 09:20:39AM *  4 points [-]

The error rate in replication experiments in the natural sciences is expected to be much, much lower than in the social sciences. Humans and human environments are noisy and complicated. Look at nutrition/medicine - it's taking us decades to figure out whether some substance/food is good or bad for you and under what circumstances. Why would you expect it to be easier to analyze human psychology and behavior?

Comment author: Lightwave 24 December 2013 11:52:12AM 10 points [-]

The trailer for the movie Transcendence is out.

Comment author: TheOtherDave 27 September 2013 02:01:49PM 0 points [-]

Fair enough.

Agreed that if someone expresses (either through speech or action) values that are opposed to mine, I might try to get them to accept my values and reject their own. And, sure, having set out to do that, there's a lot more to be relevantly said about the mechanics of how we hold values, and how we give them up, and how they can be altered.

And you're right, if our values are inconsistent (which they often are), we can be in this kind of relationship with ourselves... that is, if I can factor my values along two opposed vectors A and B, I might well try to get myself to accept A and reject B (or vice-versa, or both at once). Of course, we're not obligated to do this by any means, but internal consistency is a common thing that people value, so it's not surprising that we want to do it. So, sure... if what's going on here is that byrnema has inconsistent values which can be factored along a "privilege my own identity"/"don't privilege my own identity" axis, and they net-value consistency, then it makes sense for them to attempt to self-modify so that one of those vectors is suppressed.

With respect to my statement being confusing... I think you understood it perfectly, you were just disagreeing -- and, as I say, you might well be correct about byrnema. Speaking personally, I seem to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency. "Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes."

Of course, I do certainly have both values, and (unsurprisingly) the parts of my mind that align with the latter value seem to believe that I ought to be more consistent about this, while the parts of my mind that align with the former don't seem to have a problem with it.

I find I prefer being the parts of my mind that align with the former; we get along better.

Comment author: Lightwave 30 September 2013 07:28:01AM 0 points [-]

to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency

As humans we can't change/modify ourselves too much anyway, but what about if we're able to in the future? If you can pick and choose your values? It seems to me that, for such an entity, not valuing consistency is like not valuing logic. And then there's the argument that it leaves you open to Dutch booking / blackmail.
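The Dutch-booking worry can be made concrete with a toy money-pump: an agent with cyclic preferences will happily pay a small fee for each of an endless series of trades it "prefers", and so can be drained of wealth. This is just an illustrative sketch; the item names, fee, and `money_pump` helper are invented for the example, not taken from the discussion above.

```python
# Toy money-pump against an agent with cyclic (inconsistent) preferences.
# The agent strictly prefers A over B, B over C, and C over A, so a trader
# can always offer an item the agent prefers to its current one, charging
# a small fee per swap. The cycle never ends, so wealth drains steadily.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means "prefers x to y"

def money_pump(start_item, wealth_cents, fee_cents, rounds):
    # For each item, the trader offers the item the agent prefers to it.
    offer_for = {"A": "C", "B": "A", "C": "B"}
    item = start_item
    for _ in range(rounds):
        offered = offer_for[item]
        if (offered, item) in prefers:  # agent accepts any preferred trade
            item = offered
            wealth_cents -= fee_cents   # pays a small premium each swap
    return wealth_cents

print(money_pump("A", 1000, 1, 300))  # 700: 300 swaps at 1 cent each
```

A consistent (transitive) agent would settle on a most-preferred item after at most a couple of trades and stop paying; only the cyclic preferences make the pump run forever.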

Comment author: lmm 27 September 2013 06:30:23PM 1 point [-]

But people's values change over time, and that's a good thing. For example, in medieval/ancient times people didn't value animals' lives and well-being (as much) as we do today. If a medieval person tells you "well we value what we value, I don't value animals, what more is there to say?", would you agree with him and let him go on to burning cats for entertainment, or would you try to convince him that he should actually care about animals' well-being?

Is that an actual change in values? Or is it merely a change of facts - much greater availability of entertainment, much less death and cruelty in the world, and the knowledge that humans and animals are much more similar than it would have seemed to the medieval worldview?

Comment author: Lightwave 27 September 2013 11:06:12PM *  0 points [-]

Well, whether it's a "real" change may be beside the point if you put it this way. Our situation and our knowledge are also changing, and maybe our behavior should also change. If personal identity and/or consciousness are not fundamental, how should we value those in a world where any mind-configurations can be created and copied at will?

Comment author: TheOtherDave 26 September 2013 07:41:57PM 1 point [-]

Can you say more about why "it's just a fact that I care" is not satisfying? Because from my perspective that's the proper resolution... we value what we value, we don't value what we don't value, what more is there to say?

Comment author: Lightwave 27 September 2013 07:56:07AM *  0 points [-]

we value what we value, we don't value what we don't value, what more is there to say?

I'm confused about what you mean by this. If there wasn't anything more to say, then nobody would/should ever change what they value? But people's values change over time, and that's a good thing. For example, in medieval/ancient times people didn't value animals' lives and well-being (as much) as we do today. If a medieval person tells you "well we value what we value, I don't value animals, what more is there to say?", would you agree with him and let him go on to burning cats for entertainment, or would you try to convince him that he should actually care about animals' well-being?

You are of course using some of your values to instruct other values. But they need to be at least consistent, and it's not really clear which ones are "more terminal". It seems to me byrnema is saying that privileging your own consciousness/identity above others' is just not warranted, and if we could, we really should self-modify to not care more about one particular instance, but rather about how much well-being/eudaimonia (for example) there is in the world in general. It seems like this change would make your value system more consistent and less arbitrary, and I'm sympathetic to this view.

Comment author: James_Miller 31 July 2013 10:02:28PM 1 point [-]

In regards to sound: If you take a tuning fork and smack it, it will vibrate. Vibration can be pleasurable. If the tuning fork is a brain, and the smack is music, then the result is a contented or slightly altered-from-the-norm feeling, that might be akin to the vibration of a tuning fork if tuning forks like vibrating.

This makes it seem like wireheading.

Comment author: Lightwave 01 August 2013 07:44:45AM 0 points [-]

By the same logic, eating your favorite food because it tastes good is also wireheading.

Comment author: RichardKennaway 18 July 2013 09:38:04AM 0 points [-]

If learning a piece of knowledge will hurt you (emotionally, or be bad for your mental health) then it might be bad, instrumentally, to learn it.

Better, instrumentally, to learn to handle the truth. Ignorance and dullness are not qualities to be cultivated, however fortuitously useful on occasion it might be to not know something, or be unable to notice an implication of what you do know.

But if Epistemic Rationality didn't help me be instrumentally rational

If it doesn't, you're doing it wrong. This is the entire point of LessWrong.

Comment author: Lightwave 18 July 2013 11:18:49AM *  0 points [-]

Better, instrumentally, to learn to handle the truth.

It really depends on your goals/goal system. I think the wiki definition is supposed to encompass possible non-human minds that may have some uncommon goals/drives, like a wireheaded clippy that produces virtual paperclips and doesn't care whether they are in the real or virtual world, so it doesn't want/need to distinguish between them.
