Comments

I think that people making more top-level posts makes the community better off. A new post that someone has put work into tends to be much better content overall than the top comment, which might just state what everyone else's immediate thought was. Top-level posts are also important for generating discussion, and for that reason can be valuable even if they are wrong (though obviously they are better if they are right).

I've noticed that for many LW posts - and EA posts to a lesser extent - it's very common for a comment to get more upvotes than the post itself. Since it's usually a lot harder to write a post than to comment on it, it seems like this isn't incentivising people to post strongly enough.

This also seems to apply to Facebook posts and likes in the LW and EA groups.

I'm a 22-year-old male and about 97% introverted. I've lived with a number of different roommates at work sites, and I am living with my parents right now. Living alone would have been preferable in every case. I might enjoy living in an EA or rationalist household, though.

That's a very good point about radiology being replaceable.

Hmm, would you say there is still less social interaction in surgery than most other specialties?

Many surgeries are quite long, and require you to stand for hours at a time, not necessarily in a very comfortable position.

I can take physical discomfort.

To become good, you'll have to specialize, so you'll be doing the same procedures over and over again.

Yeah, I guess doing the same procedure over and over again might not be super interesting or educational.

That's a useful datapoint, thanks.

I think social skills tend to improve over time, and having good social skills makes social interaction more fun.

I'm the guy eggman is referring to :) Thanks for all the info!

No, I do not like working with people. I would aim for surgery or radiology for this reason. I currently do not perform well under social pressure, but my anxiety should diminish with time. Yes, I think I am good at explaining things in simple terms. I prefer less social interaction. I could tolerate a strict hierarchy. I don't handle sleep deprivation well. I do not handle uncertainty particularly well. Yes, I think I could handle accidents better than most people.

Med school isn't generally about that. Would it be agony for him to memorize loads of facts without questioning/understanding them too much, then forget them because he doesn't need them for anything? Also, much of the stuff you have to memorize after the first 1-2 years has nothing to do with human biology. There are some challenging moments with complicated patients, but the work is mostly quite simple and algorithmic.

That's bad news but not a deal breaker.

For reasons I'd rather not get into, it's been repeatedly shown that my revealed preference against torture is not much stronger than against other kinds of time-consuming distractions.

Most people are at extremely low risk of actually getting tortured, so looking at their revealed preferences about it would be hard. The odd attitudes people have toward low-risk, high-impact events would also confound that analysis.

It seems like a good portion of people's long-term plans are also the things that make them happy. The way I think about this is asking whether I would still want to want to do something if it would not satisfy my wanting or liking systems when I performed it. The answer is usually no.

and I can't recall ever having a strong hedonic experience "as myself" rather than immersed in some fictional character.

I'm not quite sure what you mean here.

Our preferences after endless iterations of self-improvement and extrapolation are probably entirely uncorrelated with what they appear to be for us as current humans.

It seems to me that there would be problems like the voter paradox for CEV. The process would therefore involve judgement calls, and I am not sure I would agree with the judgement calls someone else made for me, if that is how CEV would work. Being given superhuman intelligence to help me decide my values would be great, though.

I also have some of the other problems with CEV that are discussed in this thread: http://lesswrong.com/lw/gh4/cev_a_utilitarian_critique/

Hmm, well, we could just differ in fundamental values. Based on the behavior of most people in their everyday lives, it does seem strange to me that they wouldn't value experiential things very highly, and that, if they did, their values about what to do with the universe wouldn't share this focal point.

I'll share the intuition pumps and thought experiments that lead to my values, because that should make them seem less alien.

So when I reflect on what my strongest self-regarding values are, it's pretty clear to me that "not getting tortured" is at the top of my preferences. I have other values, and I seem to hold some non-experiential ones such as truth and wanting to remain the same sort of person I currently am, but these just pale in comparison to my preference for not_torture. I don't think most people on LW really consider torture when they reflect on what they value.

I also really strongly value the peak hedonic experiences I have had, but I haven't experienced any with an intensity that could compare directly to what I can imagine real torture would be like, so I use torture as an example instead. The strongest hedonic experiences I have had are nights where I successfully met interesting, hot women and had sex with them. I would certainly trade a number of these nights to avoid a night of real torture, so they can be described on the same scale.

My other-regarding desires are straightforwardly about the well-being of other beings, and I would want to satisfy them in the same way I would want to satisfy myself if I had the same desires they have. So if they have desires A, B & C, I would want the same thing to happen for them as I would want for myself if I had the exact same set of desires.

Trying to maximize things other than happiness and suffering involves trading off against those two things, and it just doesn't seem worth it to do that. The action that maximizes hedons is also the action that the most beings care most about happening, and it feels kind of arbitrary and selfish to do something else instead.

I accept these intuition pumps, and that leads me to hedonium. If it's unclear how exactly this follows, I can elaborate.

This may well be true but I don't see how you can be very certain about it in our current state of knowledge. Reducing this uncertainty seems to require philosophical progress rather than scientific progress.

Yeah, I think that making more philosophical or conceptual progress would be higher value relative to cost than doing more experimental work.

Suppose I told you that your specially designed collection of atoms optimized for hedon production can't feel happiness because it's not conscious. Can you conduct an experiment to disprove this?

The question seems like it probably comes down to "how similar is the algorithm this thing is running to the algorithms that cause happiness in humans (and, I'm fairly sure, in some other animals as well)?" And if running the exact same algorithm in a human would produce happiness and that person could tell us so, that would be pretty conclusive.

If Omega was concerned about this sort of thing (and didn't know it already), it could test exactly which changes in physical conditions led to changes or lapses in its own consciousness, and find out that way. That seems like potentially a near solution to the hard problem of consciousness that I think you are talking about.

What kind of scientific progress are you envisioning, that would eventually tell us how much hedonic value a given collection of atoms represents? Generally scientific theories can be experimentally tested, but I can't see how one could experimentally test whether such a hedonic value theory is correct or not.

You apply your moral sentiments to the facts to determine what to do. As you suggest, you don't look for them in other objects. I wouldn't be testing my moral sentiments per se, but what I want to do with the world depends on how exactly the world works, so testing that, in order to best achieve what I want, would be great.

Figuring out more about what can suffer and what can feel happiness would be necessary, and answering some other questions would be useful as well.

Moral realism vs. non-realism can be a long debate, but hopefully this will at least tell you where we are coming from, even if you disagree.
