Dre
Going off of what others have said, I'll add another reason people might satisfice with teachers.
In my experience, people agree much more about which teachers are bad than about which are good. Many of my favorite (in the sense that I learned a lot easily) teachers were disliked by other people, but almost all of those I thought were bad were widely thought of as bad. If you're not as interested in serious learning this might be less important.
So avoiding bad teachers requires a relatively small amount of information, but finding a teacher that is not just good, but good for you requires a much larger amount. So people reasonably only do the first part.
I thought this was an interesting critical take. Portions are certainly mind-killing, e.g. you can completely ignore everything he says about rich entrepreneurs, but overall it seemed sound. Especially the proving-too-much argument; the projections involve doing multiple revolutionary things, each of which would be a significant breakthrough on its own. The fact that Musk isn't putting money into doing any of those suggests it would not be as easy/cheap as predicted (not just in a "add a factor of 5" way, but in a "the current predictions are...
If there are generally decreasing returns to measurement of a single variable, I think this is more what we would expect to see. If you've already put effort into measuring a given variable, it will have lower information value on the margin. If you add in enough costs for switching measurements, then even the optimal strategy might spend a serious amount of time/effort pursuing lower-value measurements.
Further, if they hadn't even thought of some measurements they couldn't have pursued them, so they wouldn't have suffered any declining returns.
I don't think this is the primary reason, but may contribute, especially in conjunction with reasons from sibling comments.
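The dynamic above can be sketched with a toy greedy agent. Everything here is hypothetical: the geometric decay of marginal information value, the base values, and the switching cost are all made-up numbers chosen to illustrate the point, not a model of any real measurement process.

```python
# Toy model: marginal value of another measurement of a variable decays
# geometrically with how often it has already been measured. A switching
# cost makes a greedy agent keep measuring the stale variable even after
# a fresh variable would be worth more in the absence of that cost.

def marginal_value(n_prior, base=10.0, decay=0.5):
    """Value of one more measurement after n_prior measurements."""
    return base * decay ** n_prior

SWITCH_COST = 8.0  # hypothetical cost of changing what you measure
counts = {"A": 0, "B": 0}
current = "A"
history = []

for _ in range(6):
    other = "B" if current == "A" else "A"
    stay = marginal_value(counts[current])
    switch = marginal_value(counts[other]) - SWITCH_COST
    if switch > stay:
        current = other
    counts[current] += 1
    history.append(current)

# The agent measures A three times (values 10, 5, 2.5) even though a
# first measurement of B would have been worth 10 all along, because
# 10 - SWITCH_COST = 2 only beats A's marginal value on the fourth step.
```

Running this gives `history == ["A", "A", "A", "B", "B", "B"]`: a serious stretch of lower-value measurements before switching, exactly the pattern described above.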
I don't know if this is exactly what you're looking for, but the only way I've found to make philosophy of identity meaningful is to interpret it as about values. In this reading questions of personal identity are what you do/should value as "yourself".
Clearly you-in-this-moment is yourself. Do you value you-in-ten-minutes the same as yourself-now? Ten years? Simulations? Etc. Then Open Individualism (based on my cursory googling) would say we should value everyone (at all times?) identically as ourselves. On that reading it's clearly descriptively false, and, at least to me, seems highly unlikely to be any sort of "true values", so it's false.
First note: I'm not disagreeing with you so much as just giving more information.
This might buy you a few bits (and lots of high-energy physics is done this way, with powers of electronvolts as the only units). But there will still be free variables that need to be set. Wikipedia claims (with a citation to this John Baez post) that there are 26 fundamental dimensionless physical constants. These, as far as we know right now, have to be hard-coded in somewhere, maybe in units, maybe in equations, but somewhere.
Professionals read the Methods section.
Ok, but I am not a professional in the vast majority of fields I want to find studies in. I would go so far as to say I'm a dilettante in many of them.
My strategy in situations like that is to try to get rid of all respect for the person. If to be offended you have to care, at least on some level, about what the person thinks then demoting them from "agent" to "complicated part of the environment" should reduce your reaction to them. You don't get offended when your computer gives you weird error messages.
Now this itself would probably be offensive to the person (just about the ultimate in thinking of them as low status), so it might not work as well when you have to interact with the...
The problem is that we have to guarantee that the AI doesn't do something really bad while trying to stop these problems; what if it decides it really needs more resources suddenly, or needs to spy on everyone, even briefly? And it seems (to me at least) that stopping it from having bad side effects is pretty close, if not equivalent to, Strong Friendliness.
I worry that this would bias the kind of policy responses we want. I obviously don't have a study or anything, but it seems that the framing of the War on Drugs and the War on Terrorism have encouraged too much violence. Which sounds like a better way to fight the War on Terror, negotiating in complicated local tribal politics or going in and killing some terrorists? Which is actually a better policy?
I don't know exactly how this would play out in a case where no violence makes sense (like the Cardiovascular Vampire). Maybe increased research as part of a "war effort" would work. But it seems to me that this framing would encourage simple and immediate solutions, which would be a serious drawback.
This feels like reading too much into it, but is
and each time the inner light pulsated, the assembly made a vroop-vroop-vroop sound that sounded oddly distant, muffled like it was coming from behind four solid walls, even though the spinning-conical-section thingy was only a meter or two away.
supposed to be something about the fourth wall?
I think you need to start by cashing out "understand" better. Certainly no physical system can simulate itself with full resolution. But there are all sorts of things we can't simulate like this. Understanding (as the word is more commonly used) usually involves finding out which parts of the system are "important" to whatever function you're concerned with. For example, we don't have to simulate every particle in a gas because we have gas laws. And I think most people would say that gas laws show more understanding of thermodynamic...
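The gas-law point can be made concrete with a few lines. The ideal gas law PV = nRT gives a macroscopic property directly, with no particle-level simulation; the particular amounts (1 mol, 300 K, 25 L) are just example inputs.

```python
# The ideal gas law compresses ~10^23 particle trajectories into one
# equation: P = nRT / V. Computing a pressure needs no simulation at all.

R = 8.314  # molar gas constant, J/(mol*K)

def pressure(n_moles, temp_kelvin, volume_m3):
    """Pressure of an ideal gas via PV = nRT."""
    return n_moles * R * temp_kelvin / volume_m3

p = pressure(1.0, 300.0, 0.025)  # 1 mol at 300 K in 25 L
# p is roughly atmospheric pressure (~10^5 Pa)
```

That three-line function is, in the sense used above, a compact statement of which parts of the system matter for the "pressure" function, which is why it counts as understanding rather than brute-force simulation.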
Took most of it. I pressed enter accidentally after the charity questions. I would like to fill out the remainder. Is there a way I can do that without messing up the data?
Though I don't think it's that simple, because both sides are claiming that the other side is not reporting how they truly feel. One side claims that people are calling things creepy semi-arbitrarily to raise their own status, and the other claims that people are intentionally refusing to recognize creepy behavior as creepy so they don't have to stop it (or, being slightly more charitable, so they don't take a status hit for being creepy).
But all we want is an ordering of choices, and affine transformations (with a positive multiplicative constant) are order preserving.
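The order-preservation claim is easy to check numerically. The particular utilities and the coefficients a = 2.5, b = -10 below are arbitrary, just one instance of a positive affine transform:

```python
# A positive affine transform u -> a*u + b (a > 0) is strictly increasing,
# so it ranks any set of outcomes in exactly the same order as u does.

utilities = [3.0, -1.0, 7.0, 0.5]

def affine(u, a=2.5, b=-10.0):  # arbitrary a > 0 and b
    return a * u + b

ranked_before = sorted(range(len(utilities)), key=lambda i: utilities[i])
ranked_after = sorted(range(len(utilities)), key=lambda i: affine(utilities[i]))

assert ranked_before == ranked_after  # same preference ordering
```

This is why expected-utility representations are only unique up to positive affine transformation: any such transform describes the same choices.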
I don't think this is the right place to report this, but I don't know where the right place is, and this is closest. In the title of the page for comments for the deleted account (e.g.) the name of the poster has not been redacted.
Wouldn't this be a problem for tit for tat players going up against other tit for tat players (but not knowing the strategy of their opponent)?
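The worry above can be demonstrated directly: if one tit-for-tat player defects by mistake, the defection echoes back and forth forever, since each player keeps copying the other's last move. The round count and the round of the mistake are arbitrary choices for the sketch.

```python
# Two tit-for-tat players; player A makes one mistaken defection.
# "C" = cooperate, "D" = defect.

def play(rounds, mistake_round):
    a_moves, b_moves = ["C"], ["C"]  # both open with cooperation
    for r in range(1, rounds):
        a = b_moves[-1]  # tit for tat: copy opponent's previous move
        b = a_moves[-1]
        if r == mistake_round:
            a = "D"      # A defects by accident this round
        a_moves.append(a)
        b_moves.append(b)
    return a_moves, b_moves

a, b = play(8, mistake_round=3)
# a: C C C D C D C D
# b: C C C C D C D C  -- the single mistake alternates forever
```

Neither player ever forgives, so the pair locks into alternating defections, which is exactly the problem for tit for tat against itself under noise (and why variants like tit-for-two-tats or occasional forgiveness are studied).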
In the sense that there are multiple equilibria, or that there is no equilibrium for reflection?
Not necessarily. See Chalmers's reply to Hilary Putnam, who asserted something similar, especially section 6. Basically, if we require that all of the "internal" structure of the computation be the same in the isomorphism, and make a reasonable assumption about the nature of consciousness, all of the matter in the Hubble volume wouldn't be close to large enough to simulate a (human) consciousness.
I'm coming to this party rather late, but I'd like to acknowledge that I appreciated this exchange more than just by upvoting it. Seeing in-depth explanations of other people's emotions seems like the only way to counter the Typical Mind Fallacy, but such explanations are also really hard to come by. So thanks for a very levelheaded discussion.