In response to comment by jbash on Tell Culture
Comment author: KnaveOfAllTrades 19 January 2014 02:13:09PM 10 points

I am very pleasantly surprised by how this comment tree turned out, and these are useful warnings. The level of internal insight was higher than I would have expected even if our first two comments hadn't been vaguely confrontational. Thank you!

Comment author: Dre 22 January 2014 05:14:36AM 6 points

I'm coming to this party rather late, but I'd like to say that I appreciated this exchange more than an upvote alone can convey. Seeing in-depth explanations of other people's emotions seems like the only way to counter the Typical Mind Fallacy, but such explanations are also really hard to come by. So thanks for a very levelheaded discussion.

Comment author: Dre 09 September 2013 04:52:37AM 0 points

Going off of what others have said, I'll add another reason people might satisfice when choosing teachers.

In my experience, people agree much more about which teachers are bad than about which are good. Many of my favorite teachers (in the sense that I learned a lot from them easily) were disliked by other people, but almost all of those I thought were bad were widely considered bad. (If you're not as interested in serious learning, this distinction might matter less.)

Avoiding bad teachers therefore requires relatively little information, but finding a teacher who is not just good, but good for you, requires much more. So people reasonably do only the first part.

Comment author: Dre 14 August 2013 05:54:14PM 8 points

I thought this was an interesting critical take. Portions are certainly mind-killing, e.g. you can completely ignore everything he says about rich entrepreneurs, but overall it seemed sound. Especially the proving-too-much argument: the projections involve doing multiple revolutionary things, each of which would be a significant breakthrough on its own. The fact that Musk isn't putting money into doing any of those suggests they would not be as easy/cheap as predicted (not just in an "add a factor of 5" way, but in a "the current predictions are meaningless" way).

Also, the fact that he's proposing it for California seems strange. There are places with cheaper, flatter land where you could do a proof of concept before moving into a politically complicated, expensive, earthquake-prone state like California. I've seen Texas (Houston-Dallas-San Antonio) and Alberta (Edmonton-Calgary) proposed, both of which sound like much better locations.

Comment author: NancyLebovitz 05 August 2013 02:14:31PM 8 points

The variables that had high information values were routinely those that the client had never measured…

The variables that clients [spent] the most time measuring were usually those with a very low (even zero) information value…

This seems very unlikely to be a coincidence. Any theories about what's going on?

Comment author: Dre 09 August 2013 02:26:14AM 1 point

If there are generally decreasing returns to measuring a single variable, I think this is closer to what we would expect to see. If you've already put effort into measuring a given variable, it will have lower information value on the margin. If you add in enough cost for switching measurements, then even the optimal strategy might spend a serious amount of time/effort pursuing lower-value measurements.

Further, if they hadn't even thought of some measurements, they couldn't have pursued them, so those measurements wouldn't yet have suffered any declining returns.

I don't think this is the primary reason, but it may contribute, especially in conjunction with the reasons from sibling comments.
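Here's a minimal toy model of that story; the decay rate, switching cost, and variable names are all made up for illustration, not anything from the clients being described:

```python
# Toy model: a variable already measured several times has low marginal
# value of information (VoI), but switching to a fresh variable costs
# effort, so a reasonable strategy keeps measuring the low-VoI variable
# for a while. Every number here is an illustrative assumption.

SWITCH_COST = 0.5  # effort cost of setting up a new measurement


def marginal_voi(base_value, times_measured, decay=0.7):
    """Marginal VoI of one more measurement, decaying geometrically."""
    return base_value * decay ** times_measured


# Variable A has already been measured twice; variable B never.
counts = {"A": 2, "B": 0}
base = {"A": 1.0, "B": 0.8}
current = "A"

for step in range(5):
    # Net payoff of one more measurement of each variable.
    payoff = {
        v: marginal_voi(base[v], counts[v])
        - (0.0 if v == current else SWITCH_COST)
        for v in counts
    }
    choice = max(payoff, key=payoff.get)
    print(f"step {step}: measure {choice}  (net payoffs: {payoff})")
    counts[choice] += 1
    current = choice
```

Run it and the first two steps still go to the already-measured variable A, even though the never-measured B has higher raw marginal VoI, because of the switching cost.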

Comment author: Rukifellth 31 July 2013 10:48:39PM 0 points

How do I convince someone that Open Individualism is false?

Comment author: Dre 01 August 2013 04:26:13AM 2 points

I don't know if this is exactly what you're looking for, but the only way I've found to make philosophy of identity meaningful is to interpret it as being about values. On this reading, questions of personal identity become questions about what you do, or should, value as "yourself".

Clearly you-in-this-moment is yourself. Do you value you-in-ten-minutes the same as yourself-now? You in ten years? Simulations of you? On this reading, Open Individualism (based on my cursory googling) would say we should value everyone, at all times, identically with ourselves. That is clearly descriptively false, and, at least to me, it seems highly unlikely to be any sort of "true values", so it's false.
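To make that reading concrete, here is a toy formalization of my own; the weight functions and the decay rate are illustrative assumptions, not anything taken from the Open Individualism literature:

```python
# Treat "personal identity" as a weight saying how much you value a
# given person-moment as yourself. All numbers are illustrative.


def typical_weight(me, person, years_apart):
    """Roughly how people actually seem to weight person-moments:
    full weight on your own current moment, decaying with temporal
    distance, and ~zero weight on other people as 'yourself'."""
    if person != me:
        return 0.0
    return 0.9 ** years_apart  # arbitrary decay rate


def open_individualism_weight(me, person, years_apart):
    """Open Individualism, on this reading: every person-moment,
    anyone's, at any time, is weighted identically as 'yourself'."""
    return 1.0


print(typical_weight("me", "me", 0))          # 1.0
print(typical_weight("me", "me", 10))         # ~0.35
print(typical_weight("me", "a stranger", 0))  # 0.0
print(open_individualism_weight("me", "a stranger", 0))  # 1.0
# Actual behavior looks like typical_weight, not the constant function;
# that is the sense in which the view is descriptively false.
```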

Comment author: rocurley 08 July 2013 09:43:55PM 1 point

Well, several of the universal constants arguably define our units. For every base type of physical quantity (things like distance, time, temperature, and mass, but not, for example, speed, which can be constructed out of distance and time), you can set a physical constant to 1 if you're willing to change how you measure that property. For example, you can express distance in terms of time (measuring distance in light-seconds or light-years). By doing so, you can discard the speed of light: set it to 1. Speeds are now ratios of time to time: something moving at 30% the speed of light would move 0.3 (light) seconds per second: its speed would be the dimensionless quantity 0.3. You can drop many other physical constants in this fashion: offhand, the speed of light, the gravitational constant, Planck's constant, the Coulomb constant, and the Boltzmann constant can all be set to 1 without any trouble, and therefore don't count against your complexity budget.
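A quick numerical sketch of the c = 1 move; the 30%-of-c example is the one above, and the variable names are mine:

```python
# Natural-units sketch: measure distance in light-seconds so that the
# speed of light becomes the dimensionless number 1, and every speed
# becomes a dimensionless ratio.

C = 299_792_458.0  # speed of light in SI units, m/s


def meters_to_light_seconds(distance_m):
    """Convert a distance in meters to light-seconds."""
    return distance_m / C


# Something moving at 30% of the speed of light covers this many
# meters each second:
distance_per_second_m = 0.3 * C

speed_si = distance_per_second_m / 1.0  # m/s
speed_natural = meters_to_light_seconds(distance_per_second_m) / 1.0

print(speed_si)       # ~8.99e7 m/s
print(speed_natural)  # 0.3 -- dimensionless, because c == 1 here
```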

Comment author: Dre 10 July 2013 09:59:01PM 1 point

A note first: I'm not disagreeing with you so much as just giving more information.

This might buy you a few bits (and lots of high-energy physics is done this way, with powers of electronvolts as the only units). But there will still be free variables that need to be set. Wikipedia claims (with a citation to this John Baez post) that there are 26 fundamental dimensionless physical constants. These, as far as we know right now, have to be hard-coded in somewhere, maybe in units, maybe in equations, but somewhere.
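As a back-of-the-envelope illustration of "hard-coded somewhere" (only the count of 26 comes from the cited post; the 32-bits-per-constant precision is an arbitrary assumption of mine):

```python
# Rough complexity-budget arithmetic: even with c, G, hbar, etc. set
# to 1, the dimensionless constants still have to be specified to some
# precision. The bits-per-constant figure is an arbitrary assumption.

N_CONSTANTS = 26        # dimensionless constants, per the cited count
BITS_PER_CONSTANT = 32  # assumed precision; could be more or less

total_bits = N_CONSTANTS * BITS_PER_CONSTANT
print(total_bits)  # 832 bits that must be hard-coded somewhere
```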

Comment author: Locaha 08 May 2013 05:00:32PM 1 point

Read only or mainly the abstracts

Feh. Dilettantes read the abstracts. Professionals read the Methods section.

Comment author: Dre 09 May 2013 12:59:42AM 6 points

Professionals read the Methods section.

Ok, but I am not a professional in the vast majority of fields I want to find studies in. I would go so far as to say I'm a dilettante in many of them.

In response to comment by jooyous on Don't Get Offended
Comment author: Error 07 March 2013 02:52:16PM 0 points

Upvoted for procedure, but there's something this doesn't cover: How to deal with an offended mental state when the offender is malevolent and disengagement under 3.a is impossible. That would be useful to know. For bonus points, answer from an epistemic rather than instrumental rationality perspective.

I've dealt with sociopaths recently -- or, if not sociopaths, at least Babyeaters. My usual strategy for handling offense is very similar to yours, and it had no decision path for this situation. The gap is kind of on my mind.

In response to comment by Error on Don't Get Offended
Comment author: Dre 07 March 2013 05:24:27PM 1 point

My strategy in situations like that is to try to get rid of all respect for the person. If being offended requires caring, at least on some level, about what the person thinks, then demoting them from "agent" to "complicated part of the environment" should reduce your reaction to them. You don't get offended when your computer gives you weird error messages.

Now this itself would probably be offensive to the person (just about the ultimate in thinking of them as low status), so it might not work as well when you have to interact with them often enough for them to notice. But especially for infrequent and one-time interactions, I find this to be a good way to get through potentially offensive situations.

Comment author: handoflixue 22 February 2013 12:11:42AM 4 points

Your edit pretty much captures my point, yes :) If nothing else, a Weak Friendly AI should eliminate a ton of the trivial distractions like war and famine, and I'd expect humans to have a much more unified volition when we're not constantly worried about scarcity and violence. There aren't many current political problems I'd expect to stay relevant in a post-AI, post-scarcity, post-violence world.

Comment author: Dre 22 February 2013 05:21:43PM 1 point

The problem is that we have to guarantee that the AI doesn't do something really bad while trying to stop these problems; what if it suddenly decides it really needs more resources, or needs to spy on everyone, even briefly? And it seems (to me at least) that stopping it from having bad side effects is pretty close to, if not equivalent to, Strong Friendliness.

Comment author: Dre 19 January 2013 06:59:47AM 4 points

I worry that this would bias the kinds of policy responses we want. I obviously don't have a study or anything, but it seems that the framing of the War on Drugs and the War on Terrorism has encouraged too much violence. Which sounds like a better way to fight the War on Terror: negotiating in complicated local tribal politics, or going in and killing some terrorists? Which is actually the better policy?

I don't know exactly how this would play out in a case where no violence makes sense (like the Cardiovascular Vampire). Maybe increased research as part of a "war effort" would work. But it seems to me that this framing would encourage simple and immediate solutions, which would be a serious drawback.
