
Meetup : Vancouver rationality meetup: cognitive bias discussion

0 Jabberslythe 30 May 2017 08:59PM


WHEN: 08 July 2017 07:00:00PM (-0700)

WHERE: 6939 Frederick Ave., Burnaby, BC, Canada

Discussion about cognitive biases and heuristics, starting at 7 PM. See the Facebook event for further information: https://www.facebook.com/events/452883668399372/


Comment author: Lumifer 10 December 2014 07:29:14PM * 5 points

it seems like this isn't incentivising people strongly enough to post.

You don't want to incentivise people to make top-level posts; you want to incentivise them to contribute excellent content, and it doesn't matter much whether that content is in the top-level post or in the comments.

The guy who thought the value of the product must reflect the labour that went into it was Karl Marx. He was wrong.

Comment author: Jabberslythe 11 December 2014 05:23:10PM 2 points

I think the community is better off when people make more top-level posts. A new post that someone has put work into tends to be much better content overall than a top comment that may just state what everyone's immediate thought was. Top-level posts are also important for generating discussion, and for that reason they can be valuable even when they are wrong (though obviously they are better when they are right).

Comment author: Jabberslythe 10 December 2014 06:45:46PM 1 point

I've noticed that for many LW posts - and EA posts to a lesser extent - it's very common for a comment on a post to get more upvotes than the post itself. Since it's usually a lot harder to write a post than to comment on one, it seems like this isn't incentivising people strongly enough to post.

This also seems to apply to Facebook posts and likes in the LW and EA groups.

Comment author: Jabberslythe 20 November 2013 07:55:16PM 1 point

I'm a 97%-introvert, 22-year-old male. I've lived with a number of different roommates at work sites, and I am living with my parents right now. Living alone would have been preferable in every case. I might enjoy living in an EA or rationalist household, though.

Comment author: hyporational 30 September 2013 02:46:21AM * 3 points

Surgery: you'll be working with at least 3-5 people in the operating room. You'll also have to examine lots of patients to determine whether they need surgery, and to alleviate their fears about the operations. Lots of treatments are becoming more conservative, i.e. no surgery at all. After surgery, you'll have to examine patients to assess their recovery and motivate them. Obviously there are surgeons who chose the specialty because they don't like working with people, and I think that's unfortunate, because they're doing a half-assed job. Many surgeries are quite long, and require you to be standing for hours at a time, not necessarily in a very comfortable position. To become good, you'll have to specialize, so you'll be doing the same procedures over and over again.

Radiology: have you considered that this kind of image recognition is low-hanging fruit for a narrow A.I.? I considered radiology too, but this is one of the main reasons I no longer will. Also, radiology is easy to do over the internet, which might lower the earning potential if hospitals hire radiologists overseas. If you're doing ultrasounds or become an interventional radiologist, there will be a lot of patient interaction; if you're a neuroradiologist, pretty much none at all.

Pathology or forensic pathology: you'll usually work alone or with 1-2 other people. You'll have to explain the findings to other doctors and to the patients' relatives, though. The earning potential probably isn't that high; in forensic pathology it might be, but I don't know.

Comment author: Jabberslythe 30 September 2013 10:26:21PM 0 points

That's a very good point about radiology being replaceable.

Hmm, would you say there is still less social interaction in surgery than in most other specialties?

Many surgeries are quite long, and require you to be standing for hours at a time, not necessarily in a very comfortable position.

I can take physical discomfort.

To become good, you'll have to specialize, so you'll be doing the same procedures over and over again.

Yeah, I guess doing the same procedure over and over again might not be super interesting or educational.

Comment author: hyporational 30 September 2013 03:21:29AM * 2 points

A data point: I didn't like working with people, and still stupidly applied to med school thinking it didn't matter. Now I'm a doctor, and patient interaction is one of the things I enjoy the most. I think it's because my social skills improved. Go figure...

I still wouldn't recommend applying to med school to improve your social skills, though.

Comment author: Jabberslythe 30 September 2013 10:10:24PM 1 point

That's a useful data point, thanks.

I think social skills tend to improve over time, and having good social skills makes social interaction more fun.

Comment author: hyporational 27 September 2013 11:34:45AM * 5 points

There are a couple of important questions you didn't raise.

Does he like working with people? Does he perform well under social pressure? Is he good at explaining things in simple terms? Would he like doing so every day? Would he like working in a team and maybe leading it? Would the strict social hierarchy of hospitals bother him? How does he handle sleep deprivation? Can he cope with constant uncertainty about his decisions? Can he handle killing/injuring people by accident?

he is very passionate about biology and the study of life

Med school isn't generally about that. Would it be agony for him to memorize loads of facts without questioning/understanding them too much, then forget them because he doesn't need them for anything? Also, much of the stuff you have to memorize after the first 1-2 years has nothing to do with human biology. There are some challenging moments with complicated patients, but the work is mostly quite simple and algorithmic.

Comment author: Jabberslythe 29 September 2013 10:19:02PM 6 points

I'm the guy eggman is referring to :) Thanks for all the info!

No, I do not like working with people. I would aim for surgery or radiology for this reason. I currently do not perform well under social pressure, but my anxiety should diminish with time. Yes, I think I am good at explaining things in simple terms. I prefer less social interaction. I could tolerate a strict hierarchy. I don't handle sleep deprivation well. I do not handle uncertainty particularly well. Yes, I think I could handle accidents better than most people.

Med school isn't generally about that. Would it be agony for him to memorize loads of facts without questioning/understanding them too much, then forget them because he doesn't need them for anything? Also, much of the stuff you have to memorize after the first 1-2 years has nothing to do with human biology. There are some challenging moments with complicated patients, but the work is mostly quite simple and algorithmic.

That's bad news but not a deal breaker.

In response to comment by Jabberslythe.
Comment author: Armok_GoB 31 August 2013 01:27:22AM 1 point

That might be the case. Let's do the same "observe current everyday behaviours" thing for me:

For reasons I'd rather not get into, it's been repeatedly shown that my revealed preference for torture is not much less than for other kinds of time-consuming distractions, and I can't recall ever having a strong hedonic experience "as myself" rather than immersed in some fictional character. An alien observing me might conclude that I mostly value memorizing internet culture, and that my explicit goals are mostly about making a specific piece of information (often a work of art fitting some concept) exist on the internet, without caring much about whether it was "me" that put it there. Quite literally a meme machine.

I don't think these types of analysis (intuition pump is the wrong word, I think) are very useful, though. Our preferences after endless iterations of self-improvement and extrapolation are probably entirely uncorrelated with what they appear to be as current humans.

In response to comment by Armok_GoB.
Comment author: Jabberslythe 31 August 2013 05:22:54PM 2 points

For reasons I'd rather not get into, it's been repeatedly shown that my revealed preference for torture is not much less than for other kinds of time-consuming distractions,

Most people are at an extremely low risk of actually getting tortured, so looking at their revealed preferences for it would be hard. The odd attitudes people have toward low-risk, high-impact events would also confound that analysis.

It seems like a good portion of people's long-term plans are also the things that make them happy. The way I think about this is to ask whether I would still want to want to do something if it would not satisfy my wanting or liking systems when I performed it. The answer is usually no.

and I can't recall ever having a strong hedonic experience "as myself" rather than immersed in some fictional character.

I'm not quite sure what you mean here.

Our preferences after endless iterations of self-improvement and extrapolation are probably entirely uncorrelated with what they appear to be as current humans.

It seems to me that there would be problems like the voter paradox for CEV, so the process would involve judgement calls, and I am not sure I would agree with the judgement calls someone else made for me if that is how CEV were to work. Being given superhuman intelligence to help me decide my values would be great, though.
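To make the voter-paradox worry concrete: with three agents whose extrapolated preferences over three outcomes form a cycle, pairwise majority vote produces no coherent aggregate ordering. Here is a minimal sketch in Python, with agents and preference orderings invented purely for illustration:

```python
from itertools import combinations

# Hypothetical agents and preference orderings, made up for illustration.
# Each list ranks the outcomes A, B, C from best to worst.
preferences = [
    ["A", "B", "C"],  # agent 1: A > B > C
    ["B", "C", "A"],  # agent 2: B > C > A
    ["C", "A", "B"],  # agent 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of agents rank x above y."""
    votes = sum(p.index(x) < p.index(y) for p in preferences)
    return votes > len(preferences) / 2

for x, y in combinations(["A", "B", "C"], 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# Prints:
#   majority prefers A over B
#   majority prefers C over A
#   majority prefers B over C
# A beats B and B beats C, yet C beats A: pairwise majority vote yields a
# cycle, so no aggregate ranking satisfies every agent's extrapolated
# preferences and some judgement call must break the tie.
```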

I also have some of the other problems with CEV that are discussed in this thread: http://lesswrong.com/lw/gh4/cev_a_utilitarian_critique/

In response to the original post.
Comment author: Armok_GoB 30 August 2013 09:43:05PM * 2 points

Problem: I don't value happiness or pleasure, basically at all. I don't know exactly what my extrapolated volition is, but it seems safe to say it's some form of network with uniqueness requirements on nodes and interconnectivity requirements, whose value goes up somewhere between quadratically and exponentially with its size, and that an FAI is a strict requirement, not just a helpful bonus. So any kind of straightforward tiling will be suboptimal, since the best thing to do locally depends on the total amount of resources available and on whether something too similar already exists somewhere else in the network. And on top of that you have to keep latency between all parts of the system low.

tl;dr: my utility function probably looks more like "the size of the internet" than "the amount of happiness in the universe".

In response to comment by Armok_GoB.
Comment author: Jabberslythe 31 August 2013 12:48:55AM 2 points

Hmm, well, we could just differ in fundamental values. It does seem strange to me, based on the behavior of most people in their everyday lives, that they wouldn't value experiential things very highly; and it seems to me that if they did, their values about what to do with the universe would share this focal point.

I'll share the intuition pumps and thought experiments that led to my values, because they should make my values seem less alien.

So when I reflect on what my strongest self-regarding values are, it's pretty clear to me that "not getting tortured" is at the top of my preferences. I have other values, and I seem to hold some non-experiential values such as truth and wanting to remain the same sort of person as I currently am, but these just pale in comparison to my preference for not_torture. I don't think that most people on LW really consider torture when they reflect on what they value.

I also really strongly value the peak hedonic experiences that I have had, but I haven't experienced any with an intensity that could compare directly to what I imagine real torture would be like, so I use torture as an example instead. The strongest hedonic experiences I have had are nights where I successfully met interesting, hot women and had sex with them. I would certainly trade a number of these nights to avoid a night of real torture, so the two can be described on the same scale.

My other-regarding desires are straightforwardly about the well-being of other beings, and I would want to satisfy them in the same way that I would want to satisfy myself if I had the same desires they have. So if they have desires A, B & C, I would want the same thing to happen for them as I would want for myself if I had the exact same set of desires.

Trying to maximize things other than happiness and suffering involves trading off against these two things, and it just does not seem worth it to do that. The action that maximizes hedons is also the action that the most beings care the most about happening, and it feels kind of arbitrary and selfish to do something else instead.

I accept these intuition pumps and that leads me to hedonium. If it's unclear how exactly this follows, I can elaborate.

In response to comment by cousin_it.
Comment author: Wei_Dai 29 August 2013 05:56:17PM 1 point

It seems to me that belief in being conscious is causally linked to being conscious

This may well be true but I don't see how you can be very certain about it in our current state of knowledge. Reducing this uncertainty seems to require philosophical progress rather than scientific progress.

or at least that's true for humans and would need a special reason to be false for uploads.

If you (meaning someone interested in promoting Hedonium) are less sure that it's true for non-humans, then I would ask a slightly different question that makes the same point. Suppose I told you that your specially designed collection of atoms optimized for hedon production can't feel happiness because it's not conscious. Can you conduct an experiment to disprove this?

In response to comment by Wei_Dai.
Comment author: Jabberslythe 29 August 2013 08:14:55PM 1 point

This may well be true but I don't see how you can be very certain about it in our current state of knowledge. Reducing this uncertainty seems to require philosophical progress rather than scientific progress.

Yeah, I think that making more philosophical or conceptual progress is higher value relative to cost than doing more experimental work.

Suppose I told you that your specially designed collection of atoms optimized for hedon production can't feel happiness because it's not conscious. Can you conduct an experiment to disprove this?

The question probably comes down to "how similar is the algorithm this thing is running to the algorithms that cause happiness in humans (and, I'm very sure, in some other animals as well)?" If running the exact same algorithm in a human would produce happiness, and that person could tell us so, that would be pretty conclusive.

If Omega were concerned about this sort of thing (and didn't know the answer already), it could test exactly what changes in physical conditions led to changes or lapses in its own consciousness, and find out that way. That seems like potentially a near-solution to the hard problem of consciousness that I think you are talking about.
