If you have some sort of decision-making process you do a lot that you expect is going to become a thing you build intuition around later, make sure you have the right feedback loops in place, so that you have something to help keep that intuition calibrated. (This also applies to processes you engineer for others.)
I'm kind of curious: what do you think CFAR's objective will be 5 years from now (assuming they get the data they want and it strongly supports the value of the workshops)?
You might check IRC - #lesswrong, maybe #slatestarcodex, someone is probably willing to help, and you might make a friend.
Out of curiosity, thoughts on the Againstness class?
I REALLY like this question, because I don't know how to approach it, and that's where learning happens.
So it's definitely less bad to grow cows with good life experiences than with bad life experiences, even if their ultimate destiny is being killed for food. It's kind of like asking if you'd prefer a punch in the face and a sandwich, or just a sandwich. Really easy decisions.
I think it'd be pretty suspicious if my moral calculus worked out in such a way that there was no version of maximally hedonistic existence for a cow that I could say that the cow ...
So, there's a heuristic that I think is a decent one, which is that less-conscious things have less potential suffering. I feel that if you had a suffer-o-meter and strapped it to the heads of paramecia, ants, centipedes, birds, mice, and people, they'd probably rank in approximately that order. I have some uncertainty in there, and I could be swayed to a different belief with evidence or an angle I had failed to consider, but I have a hard time imagining what those might be.
I think I buy into the notion that most-conscious doesn't strictly mean most-suff...
Well, how comparable are they, in your view?
Like, if you'd kill a cow for 10,000 dollars (which could save a number of human lives), but not fifty million cows for 10,000 dollars, you evidently see some cost associated with cow-termination. If, when choosing methods, you could pick between methods that induced lots of pain and methods that instantly terminated the cow-brain, and you'd have a strong preference for the less-painful methods (assuming they're just as effective), then you clearly value cow-suffering to some degree.
The reason I went basicall...
You can always shoot someone an email and ask about the financial aid thing, and plan a trip stateside around a workshop if, with financial aid, it looks doable, and if after talking to someone, it looks like the workshop would predictably have enough value that you should do it now rather than when you have more time and money.
Noticing confusion is the first skill I tried to train up last year, and is definitely a big one, because knowing what your models predict and noticing when they fail is a very valuable feedback loop; it can't teach you anything if you don't even notice the failures.
Picturing what sort of evidence would unconvince you of something you actively believe is a good exercise to pair with the exercise of picturing what sort of evidence would convince you of something that seems super unlikely. Noticing unfairness there is a big one.
Realizing when you are trying to "win" at truthfinding, which is... ugh.
Not feeling connected with people, or, increasingly, feeling less connection with people.
I actively socialize myself, and this helps, but the second thing suggests to me that maybe I'm doing something wrong.
(Edit: to clarify, my empathy thingy works as well as (maybe better than) it ever has, I just feel like the things I crave from social interactions are getting harder to acquire. Like, people "getting" you, or having enough things in common that you can effectively talk about the stuff that interests you. So, like, obviously, one of the solutions there is to hang out with more bright-and-happy CFAR-ish/LW-ish/EA-ish people.)
Hey, does anyone else struggle with feelings of loneliness?
What strategies have you found for either dealing with the negative feelings, or addressing the cause of loneliness, and have they worked?
Do you feel lonely because you spend your time alone, or because you feel you don't connect with the people with whom you spend your time?
Two separate problems.
By far the best definition I've ever heard of the supernatural is Richard Carrier's: "A 'supernatural' explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities." (http://lesswrong.com/lw/tv/excluding_the_supernatural/)
I have made a prosecutor pale in the face by suggesting that courthouses should be places where people with plea bargains shop their offers around with each other so that they know what's a good deal and a bad deal.
I don't think it's going to matter very much. 3 digits after the dot, with the understanding that the third digit is probably not very good, but the second probably is pretty good.
Suppose the actual length of a person's index finger is 80.5 mm and the actual length of his/her ring finger is 83.5 mm. Then the 2D:4D ratio is 0.964. A measurement error of 0.5 mm is very easy to make, e.g. due to inaccuracy of a photocopier, inaccuracy of a ruler, inexactness of where a finger joins the hand (and even if it weren't a vague concept it would still be a problem to pinpoint its precise location with great accuracy), and even differences in muscle tension in the fingers at the particular moment of placing a hand in a photocopier. If a pers...
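As a quick sanity check of that point, here's a minimal sketch using the example lengths above (nothing here is empirical data; the numbers are just the ones from the example):

```python
# Rough sketch: how much a +/- 0.5 mm measurement error can move a 2D:4D ratio.
# The finger lengths are the ones from the example above, not real measurements.

def digit_ratio(index_mm, ring_mm):
    """2D:4D ratio = index finger length / ring finger length."""
    return index_mm / ring_mm

true_ratio = digit_ratio(80.5, 83.5)               # ~0.964

# Worst-case 0.5 mm errors pushing the two fingers in opposite directions:
high_ratio = digit_ratio(80.5 + 0.5, 83.5 - 0.5)   # ~0.976
low_ratio  = digit_ratio(80.5 - 0.5, 83.5 + 0.5)   # ~0.952

print(f"true: {true_ratio:.3f}, "
      f"range under 0.5 mm errors: {low_ratio:.3f} to {high_ratio:.3f}")
```

Even the second digit after the dot moves under worst-case errors, which is the kind of spread that makes precision past that point mostly noise.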
Faith in Humanity moment: LW will not submit garbage poll responses using other LW-users as public keys.
I definitely don't have a strong identity in this sense; like, I suspect I'd be pretty okay if an alien teenager swooped by and pushed the "swap sex!" button on me, and the result was substantially functional and not horrible to the eye. Like, obviously I'd be upset about having been abused by an outside force, but I don't think the result itself is inherently distasteful or anything like that.
I'm really curious to see how this and related stuff (male/female traits, fingers) relate.
Definitely had a thought on this order; I went with "don't die at any point and still reach age 1000", though I also don't really consider solutions that involve abandoning bodies to count.
At the very least, I suspect one of the analyses will be "bucket responses by stated certainty, then plot what % of responses in each bucket were right" - something that was done last year (see the 2013 LessWrong Survey Results); a rough sketch of the idea is below.
Last year it was broken down into "elite" and "typical" LW-er groups, which presumably would tell you if hanging out here made you better at overconfidence, or something similar in that general vicinity.
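A minimal sketch of that kind of analysis, assuming the survey data reduces to (stated probability, was the answer right?) pairs; the pairs below are placeholders, not actual survey responses:

```python
# Sketch of a calibration check: bucket answers by stated certainty, then ask
# "what fraction of answers in each bucket were actually right?"
# The (probability, correct) pairs are invented for illustration.

from collections import defaultdict

responses = [
    (0.55, True), (0.60, False), (0.72, True), (0.75, True),
    (0.80, False), (0.90, True), (0.95, True), (0.99, True),
]

buckets = defaultdict(list)
for prob, correct in responses:
    bucket = round(prob, 1)       # e.g. 0.68 and 0.72 both land in the 0.7 bucket
    buckets[bucket].append(correct)

for bucket in sorted(buckets):
    hits = buckets[bucket]
    print(f"stated ~{bucket:.0%}: {sum(hits)}/{len(hits)} right "
          f"({sum(hits) / len(hits):.0%})")
```

A well-calibrated group's printed percentages should track the bucket labels; systematic overshoot of stated certainty over the hit rate is the overconfidence being looked for.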
Survey complete!
I'm kind of surprised at how much better I feel like I've gotten about reasoning about these really fuzzy estimates. One of my big goals last year was "get better at reasoning about really fuzzy things" and I feel like I've actually made big progress on that?
I'm really excited to see what the survey results look like this year. I'm hoping we've gotten better at overconfidence!
The gender default thing took me by surprise. I'm guessing that a lot of people answer yes to having a strong gender identity?
Hey, I just wanted to chime in here. I found the moral argument against eating animals compelling for years but lived fairly happily in conflict with my intuitions there. I was literally saying, "I find the moral argument for vegetarianism compelling" while eating a burger, and feeling only slightly awkward doing so.
It is in fact possible (possibly common) for people to 'reason backward' from behavior (eat meat) to values ("I don't mind large groups of animals dying"). I think that particular example CAN be consistent with your moral function (if you really don't care about non-human animals very much at all) - but by no means is that guaranteed.
Yeah, what I didn't say is, "If I became psychotic, and had a hallucination of god, I would probably not long-term believe it." There are other reasons people can arrive at a state where they have hallucinations. If you break my critical faculties, then I'm far less likely to reason well.
I was able to find numbers suggesting that perhaps 1 in 4 people with schizophrenia have religious hallucinations, but I was unable to find out what percentage of people who report religious hallucinations suffer from serious psychotic disorders. I do know that religio...
Yes, I agree with everything you say (well, I don't know the M-H algorithm, but I'll take that on faith).
I mentioned this explicitly because it's mindblowingly bad to see someone saying this, with this background, when he says so many other smart things that clearly imply he understands the general principle that local optima are not necessarily global optima.
What he didn't say is, "This enzyme works really well, and we can be pretty confident evolution has tried out most of the easy modifications on the current structure. It's not perfect (admitt...
I would assume the same, but unfortunately... that's a real life thing that I heard one say in a lecture. Well, not "Global maximum!" but something with essentially identical meaning, without the subtext of big error.
People may be aware of a lesson learned from math, but not propagate it through all their belief systems.
It's Harry talking about Blame, chapter 90. (It's not very spoily, but I don't know how the spoiler syntax works and failed after trying for a few minutes)
..."That's not how responsibility works, Professor." Harry's voice was patient, like he was explaining things to a child who was certain not to understand. He wasn't looking at her anymore, just staring off at the wall to her right side. "When you do a fault analysis, there's no point in assigning fault to a part of the system you can't change afterward, it's like stepping off a cliff and b
Having a keen sense for problems that exist, and wanting to demolish them and fix the place from which they spring is not an instinct to quash.
That it causes you emotional distress IS a problem, insofar as you have the ability to perceive and want to fix the problems in the absence of the distress. You can test that by finding something you viscerally do not care for and seeing how well your problem-finder works on it; if it's working fine, the emotional reaction is not helpful, and fixing it will make you feel better, and it won't come at the cost of smashing your instincts to fix the world.
If I had a REAL discussion with Actual God, he might just rewire me because I had a bug, and he's a cool guy.
Alternatively, I might ask God for evidence that he's God, or at least an awesome alien teenager with big angelic powers, and get some predictions and stuff out of him that I can use to verify that something incredible is in fact happening, because, hey, I'm human, and humans occasionally hallucinate, and I would probably like to make sound arguments that I really did have a discussion with a guy with big angelic powers that I could share with other...
I would like to subscribe to your newsletter!
I've been frustrated recently by people not realizing that they are arguing that if you divide responsibility up until it's a very small quantity, then it just goes away.
Reminds me of the time the Texas state legislature forgot that 'similar to' and 'identical to' are reflexive.
I'm somewhat persuaded by arguments that choices not made, which have consequences, like X preventably dying, can have moral costs.
Not INFINITELY EXPLODING costs, which is what you would need in order to experience the full brunt of "we are the last two people alive, and you're dying right in front of me, and I could help you, but I'm not going to" responsibility when deciding whether or not to buy shoes, when there are 7 billion of us, and you're actually dying over there, and someone closer to you is not helping you.
I should have clarified that I meant that in terms of having persuasive power over others.
Personal experience can be personally compelling, but people have pretty well exploited the "I personally experienced X, are you calling me a liar?" thing enough (also, hallucinations, confusion, unreliable memory, etc), that people generally take statements about personal experience of others with a grain of salt.
If I hallucinated a discussion with God, I would probably not be long-term convinced of it, despite the experience.
(Edit: Aside: why did I add anything past the first sentence? There was no reason to.)
Random thought.
So, minor changes in designs of things sometimes result in better versions of things. You then build those things, and make minor design changes there. Repeat. Eventually you often get a version of a thing that no longer sees improvement from minor change.
"Global maximum!" declares a PhD biologist at a good university.
How common is this defect?
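A minimal sketch of the distinction behind that random thought, with a made-up one-dimensional fitness landscape: greedy minor changes reliably climb to a peak where no small change helps, but nothing guarantees that peak is the highest one.

```python
# Toy illustration of "iterate minor improvements until none help":
# a greedy hill climb stops at a local maximum, which need not be the global one.
# The fitness function and starting point are invented for the example.

def fitness(x):
    # Two peaks: a small one at x = 2 (height 3) and a taller one at x = 10 (height 6).
    return max(0, 3 - abs(x - 2)) + max(0, 6 - abs(x - 10))

def hill_climb(x, step=1):
    while True:
        best = max([x - step, x, x + step], key=fitness)
        if best == x:
            return x              # no minor change improves things: a local optimum
        x = best

peak = hill_climb(x=0)
print(peak, fitness(peak))        # lands on the x = 2 peak, never sees x = 10
```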
Small effect sizes are easier to hallucinate into being real.
I recently listened to a defense attorney making claims about how personal experience and eye-witness testimony are the best kind of evidence you can possibly have, in defense of Christianity.
If your religion dictates that you must believe in miracles being real, you will have to break yourself in colorful ways in order to do so.
I'm partial to the reference class, "theories that make lots of excuses for why it's hard to confirm or reject when it should be very easy, but nonetheless an ape-like creature ran into it one day."
"Ought" meaning that I think it's highly unlikely that these calculations come out in such a close race that I don't have clear choices despite using low powered analysis. It might, but like, if the mussels thing looks very likely true, that for example would be a big differentiator over certain other products. Also, there is some variation between brain size and food value. If something IS a close call, there are lots of things that almost certainly are not. That's what I mean by "ought"
One person does not have a large effect on mark...
Yeah, I see a lot of complications involving iron, B12, and a few other things.
I don't have some sort of moral absolute thing going on; I ought to be able to make a low-effort glance into the things I eat and pick a diet that closely matches my intuitions without sacrificing health or happiness, or spending undue money. Like if it turns out that beef is the most ethical meat, and that eggs are really horrible, then I might eat beef but not eggs, if they are just vastly better ways of getting things that are otherwise a complete PITA to acquire.
Most likely, though, I can get by with very minimal tradeoffs, or at least it looks that way.
Oh, wow, that's where this uniform protest against making guesses about mental states comes from? It's actually written into their ethical guidelines?
I don't understand this. Is there some obvious or non-obvious reason for psychiatrists not to guess at mental states out loud, beyond the obvious one where people might listen to your opinions?
I don't get it.
Well, not defining on my own. I'm deliberately asking a community of people who try to think about these sorts of things in clearer terms than normal about what sorts of considerations might be worth examining. Making a perfect objective suffering function doesn't seem hugely worthwhile for me; I just want to be able to make orders of magnitude comparisons because that's likely enough. [ed: on my necessarily messed up strange subjective human scale]
My core assumption is basically that some animals with brains have some degree of conscious experience, and ...
Haha. Nice.
I meant more along the lines of I don't have some coherent framework to slam this stuff into, but I want to be able to locally do some very ballpark comparison on things that I currently know too little about. Sin-on is the wrong word (it doesn't reflect very well what it's trying to represent), but it seemed amusing, so for the moment sin-on it is.
I think salmon is meat, but it might be one of the less bad things (don't know), and this is something I'll deliberately examine if it solves some health stuff.
Yeah. A bit tongue in cheek, utility is to utilon as sin is to sin-on.
It's like a very immature concept in my head and I'm still trying to map out what's hiding in there, but it seems useful to me at the moment to figure out what a sin-on is made of and figure out order-of-magnitude type detail about things, as a way of trying to make reasonably consistent choices.
Very much agree. The altruistic version of being a vegetarian warrior maybe looks like developing some fiendish scheme to make meat unpalatable to humans on a large scale. My reason for change is basically just that I recognized this conflict between my thinking and my behavior and it looked fairly, like, hypocritical to me.
Thanks for the helpful links!
Retracted on the basis that I had not read the original thread and I almost certainly misunderstood the underlying question.
Well, your CI does change for the coin, if you observe strange artifacts of construction, or if the tosser has read Jaynes (who describes a way to cheat at coin tossing), or if the coin shows significant bias after lots of tries.
If you doubt this last bit, try a calibration app and look at one of your estimation buckets and ask yourself the same question: is my 70% bucket miscalibrated, or is this an effect of Tyche? (A rough sketch of that check is below.)
Your example constrains the evidence on the coin, by the convention that is attached to coin metaphors.
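A rough sketch of how you might check that 70%-bucket question, assuming all you have is a count of hits in the bucket (the 13-out-of-20 figure below is invented): how surprising would that hit rate be if you really were 70% accurate?

```python
# Sketch: is an off-looking 70% bucket miscalibration, or just Tyche?
# Computes how likely a result at least this low is, assuming you really are
# 70% accurate on those questions. The 13-out-of-20 count is made up.

from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes out of n, each with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, hits, p = 20, 13, 0.70          # 65% observed vs. 70% claimed
tail = sum(binom_pmf(k, n, p) for k in range(hits + 1))   # P(at most 13 hits)

print(f"P(at most {hits}/{n} right if truly {p:.0%} calibrated) = {tail:.2f}")
# A large probability here means chance alone explains the gap; only a very
# small one starts to look like genuine miscalibration.
```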
The less crappy response is that I lik...
I've recently reconciled my behavior with my ethical intuition regarding eating animals, by way of deciding to alter my behavior and do some variation of "don't eat meat". I decided on this question long ago but did not act upon it.
I notice that there is very confusing information out there about what one should eat in order to avoid negative health impacts, and would like to read correct and useful articles on the subject, because I strongly desire to not be unhealthy. Do you have suggestions?
I am pragmatic. My intuition says that bone ash used ...
Care to share some hypothetical examples of irritating uses of the B word?
So, specifically with respect to "cult" and "elitist" observations I see, in general, I would like to offer a single observation:
"Tsuyoku naritai" isn't the motto of someone trying to conform to some sort of weird group norm. It's not the motto of someone who hates people who have put in less time or effort than himself. It's the recognition that it is possible to improve, and the estimation that improving is a worthwhile investment.
If your motivation for putting intellectual horsepower into this site isn't that, I'd love to hear ...
Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.
Art in the other sense of the word. Think more along the lines of skills and practices.
I think "art" here is mainly intended to call attention to the fact that practical rationality's not a collection of facts or techniques but something that has to be drilled in through deliberate long-term practice: otherwise we'd end up with a lot of people that can quote the definitions of every cognitive bias in the literature and some we invented, but can't actually recognize when they show up in their lives. (YMMV on whether or not we've succeeded in that respect.)
Some of the early posts during the Overcoming Bias era talk about rationality...
Ahh, that makes more sense.
My guess is that the site is "probably helping people who are trying to improve", because I would expect some of the materials here to help. I have certainly found a number of materials useful.
But a personal judgement of "probably helping" isn't the kind of thing you'd want. It'd be much better to find some way to measure the size of the effect. Not tracking your progress is a bad, bad sign.
So I had one of those typical mind fallacy things explode on me recently, and it's caused me to re-evaluate a whole lot of stuff.
Is there a list of high-impact questions people tend to fail to ask about themselves somewhere?