I'm coming to this party rather late, but I'd like to acknowledge that I appreciated this exchange with more than just an upvote. Seeing in-depth explanations of other people's emotions seems like the only way to counter the Typical Mind Fallacy, but such explanations are also really hard to come by. So thanks for a very level-headed discussion.

Going off what others have said, I'll add another reason people might satisfice when choosing teachers.

In my experience, people agree much more about which teachers are bad than about which are good. Many of my favorite teachers (in the sense that I learned a lot from them easily) were disliked by other people, but almost all of the teachers I thought were bad were widely thought of as bad. (If you're not as interested in serious learning, this may matter less.)

Avoiding bad teachers therefore requires relatively little information, while finding a teacher who is not just good but good for you requires much more. So people reasonably do only the first part.

I thought this was an interesting critical take. Portions are certainly mind-killing, e.g. you can completely ignore everything he says about rich entrepreneurs, but overall it seemed sound. Especially the proving-too-much argument: the projections involve doing multiple revolutionary things, each of which would be a significant breakthrough on its own. The fact that Musk isn't putting money into any of them suggests it would not be as easy or cheap as predicted (not just in an "add a factor of 5" way, but in a "the current predictions are meaningless" way).

Also, the fact that he's proposing it for California seems strange. There are places with cheaper, flatter land where you could do a proof of concept before moving into a politically complicated, expensive, earthquake-prone state like California. I've seen Texas (Houston-Dallas-San Antonio) and Alberta (Edmonton-Calgary) proposed, both of which sound like much better locations.

If there are generally decreasing returns to measuring a single variable, I think this is closer to what we would expect to see. If you've already put effort into measuring a given variable, further measurements of it will have lower information value on the margin. If you add in enough cost for switching measurements, then even the optimal strategy might spend a serious amount of time and effort pursuing lower-value measurements.
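Here's a toy sketch of that point (the base values, the halving rule, and the switching cost are all made up for illustration): with diminishing marginal value per variable and a one-time cost to switch what you measure, a reasonable step-by-step strategy keeps measuring the first variable even after its marginal value has dropped below the other variable's.

```python
# Toy model (hypothetical numbers): two variables, diminishing marginal
# information value per repeated measurement, and a cost to switch.

def marginal_value(variable, n_measured):
    """Each repeated measurement of the same variable is worth half as
    much as the previous one (an assumed diminishing-returns curve)."""
    base = [10.0, 6.0]          # variable 0 starts out more valuable
    return base[variable] * 0.5 ** n_measured

SWITCH_COST = 4.0               # assumed one-time cost to change measurements

def run(steps=6):
    counts = [0, 0]             # measurements taken of each variable so far
    current = 0                 # start on the initially best variable
    for _ in range(steps):
        # Net payoff of measuring each variable next, charging the switch
        # cost only if it differs from what we're currently measuring.
        payoffs = [marginal_value(i, counts[i]) - (SWITCH_COST if i != current else 0.0)
                   for i in range(2)]
        current = max(range(2), key=lambda i: payoffs[i])
        counts[current] += 1
        print(f"measure variable {current}, net value {payoffs[current]:.2f}")
    print(f"measurements per variable: {counts}")

run()
```

On these made-up numbers, the third step still measures variable 0 (net value 2.5) even though a first measurement of variable 1 would be worth 6 gross; the switching cost is what keeps the effort on the lower-value measurement.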

Further, if they hadn't even thought of some measurements, they couldn't have pursued them, so they wouldn't have suffered any declining returns on those.

I don't think this is the primary reason, but it may contribute, especially in conjunction with the reasons from sibling comments.

I don't know if this is exactly what you're looking for, but the only way I've found to make the philosophy of identity meaningful is to interpret it as being about values. On this reading, questions of personal identity are questions about what you do, or should, value as "yourself".

Clearly you-in-this-moment is yourself. Do you value you-in-ten-minutes the same as you-now? You in ten years? Simulations of you? And so on. Open Individualism (based on my cursory googling) would then say we should value everyone (at all times?) identically with ourselves. That is clearly descriptively false, and, at least to me, it seems highly unlikely to be any sort of "true values", so I'd call it false.

First, a note: I'm not disagreeing with you so much as just giving more information.

This might buy you a few bits (and a lot of high-energy physics is done this way, with powers of electronvolts as the only units). But there will still be free parameters that need to be set. Wikipedia claims (with a citation to this John Baez post) that there are 26 fundamental dimensionless physical constants. These, as far as we know right now, have to be hard-coded in somewhere, maybe in the units, maybe in the equations, but somewhere.
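As a concrete illustration (standard physics, not anything specific to the post being replied to): choosing natural units removes the freedom in the units, but the dimensionless constants survive any choice of units and still have to be put in by hand.

```latex
% Setting \hbar = c = 1 removes the unit freedom: every quantity becomes a
% power of energy, e.g. [length] = [time] = eV^{-1} and [mass] = eV.
% A dimensionless constant, by contrast, is the same in every system of
% units and still has to be specified separately, e.g. the fine-structure
% constant:
\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137.036}
```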

"Professionals read the Methods section."

Ok, but I am not a professional in the vast majority of fields I want to find studies in. I would go so far as to say I'm a dilettante in many of them.

My strategy in situations like that is to try to get rid of all respect for the person. If being offended requires caring, at least on some level, about what the person thinks, then demoting them from "agent" to "complicated part of the environment" should reduce your reaction to them. You don't get offended when your computer gives you weird error messages.

Now, this itself would probably be offensive to the person (just about the ultimate in treating them as low-status), so it might not work as well when you have to interact with them often enough for them to notice. But especially for infrequent or one-time interactions, I find this a good way to get through potentially offensive situations.

The problem is that we have to guarantee that the AI doesn't do something really bad while trying to solve these problems; what if it decides it suddenly needs more resources, or needs to spy on everyone, even briefly? And it seems (to me at least) that stopping it from having bad side effects is pretty close to, if not equivalent to, Strong Friendliness.

I worry that this would bias the kinds of policy responses we want. I obviously don't have a study or anything, but it seems that the framing of the War on Drugs and the War on Terrorism has encouraged too much violence. Which sounds like a better way to fight the War on Terror: negotiating complicated local tribal politics, or going in and killing some terrorists? Which is actually the better policy?

I don't know exactly how this would play out in a case where violence makes no sense (like the Cardiovascular Vampire). Maybe increased research as part of a "war effort" would work. But it seems to me that this framing would encourage simple and immediate solutions, which would be a serious drawback.
