Yosarian2

I certainly think you're right that the conscious mind and conscious decisions can, to a large extent, rewrite much of the brain's programming.

I'm surprised you think that most rationalists don't believe that. A lot of rationalist writing is devoted to working out ways to do exactly that; a lot of people have written about how just reading the Sequences helped them basically reprogram their own brains to be more rational in a wide variety of situations.

Are there really a lot of people in the rationalist community who think that conscious thought and decision-making can't do major things? I know there are philosophers who think that maybe consciousness is irrelevant to behavior, but that philosophy seems very much at odds with LessWrong-style rationality and with the way people on LessWrong tend to think and talk about what consciousness is.

He's not a superhumanly intelligent paperclipper yet, just human-level.

Or, you know, it's simply true that people experience much more suffering than happiness. Also, they aren't very aware of this themselves, because of how memory works.

That certainly is not true of me or of my life overall, except during a few short periods. I don't have the same access to other people's internal states, but I doubt it's true of most people.

There certainly are a significant number of people for whom it may be true: people who suffer from depression or chronic pain, or who are living in otherwise difficult circumstances. I highly doubt that's the majority of people, though.

Yeah, I'm not sure how to answer this. I would do one set of answers for my personal social environment and a completely different set of answers for my work environment, to such a degree that trying to just average them wouldn't work. I could pick one or the other.

For reference: I teach in an urban high school.

I didn't even know that the survey was happening, sorry.

If you do decide to keep running the survey for a little longer, I'd take it, if that data point helps.

I think you need to narrow down exactly what you mean by a "futurist institute" and figure out what specifically you plan to do before you can think about any of these issues.

Are you thinking of the kind of consulting agency that companies hire for advice on what the market might look like in five years and what technologies their competitors are using? Or something like a think tank that does research and writes papers with the intent of influencing political policy, and is usually supported by donations? Or an academic group, probably tied to a university, that publishes academic papers, similar to what Nick Bostrom does at Oxford? Or something that raises money primarily for scientific and technological research? Or maybe an organization similar to H+ that tries to spread awareness of transhumanist/singularity-related issues, publishes newsletters, holds meetings, and generally tries to change people's minds about futurist, technological, AI, and/or transhumanist issues? Or something else entirely?

Basically, without more details about exactly what you're trying to do, I don't think anyone here will be able to offer very good advice. I suspect you may not be sure yourself yet, so maybe the first step is to think through the different options and narrow your initial focus a bit.

The best tradeoff is when you are well calibrated, just like with everything else.

"Well calibrated" isn't a simple thing, though. It's always a conscious decision of how willing you are to tolerate false positives vs false negatives.

Anyway, I'm not trying to shoot you down here; I really did like your article, and I think you made a good point. Just saying that it's possible to have a great insight and still overshoot or over-correct for a previous mistake you've made, and if you think that almost everyone you see is suffering, you may be doing just that.

There has to be some kind of trade-off between false positives and false negatives here, doesn't there? If you decide to "use that skill" to see more suffering, isn't it likely that you'll get at least some false positives, cases where you think someone is suffering and they aren't?

If "happiness" is too vague a term or has too many other meanings we don't necessarily want to imply, we could just say "positive utility". As in "try to notice when you or the people around you are experiencing positive utility".

I do think that actually taking note of that probably helps you move your happiness baseline; it's basically a rationalist version of "be thankful for the good things in your life". Something as simple as "you know, I enjoy walking the dog on a crisp fall day like this". Noticing when other people seem to be experiencing positive utility is probably also important in becoming a more morally correct utilitarian yourself, likely just as important as noticing other people's suffering/negative utility.
