I don't get paid on the basis of Omega's prediction given my action. I get paid on the basis of my action given Omega's prediction. I at least need to know the base-rate probability with which I actually one-box (or two-box), although with only two minutes, I would probably need to know the base rate at which Omega predicts that I will one-box. Actually, just getting P(Ix|Ox) and P(Ix|O~x) would be great.
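For concreteness, a toy expected-value sketch, assuming the standard $1,000,000 / $1,000 payoffs; the conditional accuracies are invented, and note it plugs in the prediction-given-action direction:

```python
# Toy Newcomb expected-value calculation. Assumes the standard payoffs:
# $1M in the opaque box iff Omega predicted one-boxing; $1K always in
# the transparent box. The accuracy numbers below are made up.

def expected_payoff(action: str, p_predicted_onebox: float) -> float:
    """Expected dollars for `action`, where p_predicted_onebox is the
    probability that Omega predicted one-boxing, conditional on you
    actually taking `action`."""
    opaque = p_predicted_onebox * 1_000_000
    return opaque if action == "one-box" else opaque + 1_000

print(expected_payoff("one-box", 0.99))  # 990000.0
print(expected_payoff("two-box", 0.01))  # 11000.0
```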
I also don't have a mechanism to determine if 1033 is prime that is readily available to me without getting hit by a trolley (...
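(For reference, a quick trial-division check, trolley not required:)

```python
def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n); plenty fast for numbers this size."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(1033))  # True: no divisor up to sqrt(1033) ~ 32.1
```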
Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia.
Qualia can perhaps best be described, briefly, as "subjective experience." So what do we mean by 'subjective' and 'experience'?
If by 'subjective' we mean 'unique to the individual position' and by 'experience' we mean 'alters its internal state on the basis of some perception' then qualia aren't that mysterious: a video camera can be described as having ...
- You've chosen one of the easier aspects of consciousness: self-awareness rather than, e.g., qualia.
I cover this a bit when I talk about awareness, but I find qualia to often be used in such a way as to obscure what consciousness is rather than explicate it. (If I tell you that consciousness requires qualia, but can't tell you how to distinguish things which have qualia from things which do not, along with good reason to believe that this way of distinguishing is legitimate, then rocks could have qualia.)
...
- The "necessarily biological" could be
I find it helps to break down the category of 'consciousness.' What is it that one is saying when one says that "Consciousness is essentially biological"? Here it's important to be careful: there are philosophers who gerrymander categories. We can start by pointing to human beings, as we take human beings to be conscious, but obviously we aren't pointing at every human attribute. (For instance, having 23 pairs of chromosomes isn't a characteristic we are pointing at.) We have to be careful that when we point at an attribute, we are actu...
On those criteria, I would say Plato. Because Plato came up with a whole mess of ideas that were... well, compelling but obviously mistaken. Much of Western Philosophy can be put in terms of people wrestling with Plato and trying to show just why he is wrong. (Much of the rest is wrestling with Aristotle and trying to show why HE is wrong... but then, one can put Aristotle into the camp of "people trying to show why Plato is wrong.")
There's a certain sort of person who is most easily aroused from inertia when someone else says something so blatantly, utterly false that they want to pull their hair out. Plato helped motivate these people a lot.
The New Organon, particularly Aphorisms 31-46, shows not only an early attempt to diagnose human biases (what Bacon referred to as "The Idols of the Mind") but also some of the reasons why he rejected Aristotelian thought, common at the time, in favor of experimental practice.
Maybe there are better ways to expand than through spacetime, better ways to make yourself into this sort of maximizing agent, and we are just completely unaware of them because we are comparatively dull next to the sort of AGI that has a brain the size of a planet? Some way to beat out entropy, perhaps. That'd mean we would never expect to see any sort of sky with UFAI or FAI in it.
I can somewhat imagine what these sorts of ways would be, but I have no idea if those things are likely or even feasible, since I am not a world-devouring AGI and can only do wild ma...
I get that feeling whenever I hit a milestone in something: if I run a couple miles further than I had previously, if I understand something that was opaque before, if I am able to do something that I couldn't before, I get this "woo hoo!" feeling that I associate with levelling up.
Even if they are sapient, it might not have the same psychological effect.
The effect of killing a large, snarling, distinctly-not-human-thing on one's mental faculties and the effect of killing a human being are going to be very different, even if one recognizes that thing to be sapient.
If they are, Harry would assign moral weight to the act after the fact: but the natural sympathy that is described as eroding in the above quote doesn't seem as likely to be affected given a human being's psychology.
since I don't know what "philosophy" really is (and I'm not even sure it really is a thing).
I find it's best to treat philosophy as simply a field of study, albeit one that is odd in that most of the questions asked within the field are loosely tied together at best. (There could be a connection between normative bioethics and ontological questions regarding the nature of nothingness, I suppose, but you wouldn't expect a strong connection from the outset.) To do otherwise invites counter-example too easily, and I don't think there is much (if anything) to be gained in asking what philosophy really is.
Technical note: some of these are Torts, not Crimes. (Singing Happy Birthday, Watching a Movie, or making an Off-Color Joke are not crimes, barring special circumstances, but they may well be Torts.)
Is there anyone going to the April CFAR Workshop that could pick me up from the airport? I'll be arriving at San Francisco International at 5 PM if anyone can help me get out there. (I think I have a ride back to the airport after the workshop covered, but if I don't I'll ask that separately.)
Hey, we (CFAR) are actually going to be running shuttles from SFO Thursday evening, since the public transit time / drive time ratio is so high for the April venue. So we'll be happy to come pick you up, assuming you're willing to hang out at the airport for up to ~45 min after you get in. Feel free to ping me over email if you want to confirm details.
I tend to think this is the wrong question.
Here's roughly what happens: there are various signals (light, air waves, particulates in the air) that humans have the capacity to detect and translate into neural states which can then be acted on. This is useful because the generation, presence, and redirection of these signals is affected by other objects in the world. So a human can not only detect objects that generate these signals, it can also detect how other objects around it are affected by these signals, granting information that the human brain can th...
I just hope that the newly-dubbed Machine Intelligence Research Institute doesn't put too much focus on advertising for donations.
That would create a MIRI-ad of issues.
Sorry, if I don't let the pun out it has to live inside my head.
Yes, instead of putting too much focus on advertising, they should put the correct amount of focus on it. The argument also applies to the number of pens.
I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.
Qualitatively, I'd say it has something to do with the ratio of expected harm of immediate discovery vs. the current investment and research in the field. If the expected risks are low, by all means publish so that any risks that are there will be found. If the risks are high, consider the amount of investment/research in the field. If the investment is high, it is probably better to reveal your research (or pa...
I usually turn to the Principle of Explosion to explain why one should have core axioms in one's ethics (specifically, non-contradictory axioms). If some principle you use in deciding what is or is not ethical creates a contradiction, you can justify any action on the basis of that contradiction. If the axioms aren't explicit, the chance of a hidden contradiction is higher. The idea that every action could be ethically justified is something that very few people will accept, so explaining this usually helps.
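For the curious, the principle is short enough to state formally; a minimal sketch in Lean:

```lean
-- Ex falso quodlibet: from P and ¬P together, any proposition Q follows,
-- which is why a contradiction among your axioms licenses everything.
example (P Q : Prop) (h : P) (hn : ¬P) : Q :=
  absurd h hn
```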
I try to understand that thinking this way is odd...
"[10065] No route to host Error"
I figure the easiest way to delay a human on the other end of a computer is to simulate an error as best I can. For a GAI, this time is probably invaluable.
That depends on the level of explanation the teacher requires and the level of the material. I'd say that at least until you get into calculus, you can work off of memorizing answers. I'd even go so far as to say that most students do, and succeed to greater or lesser degrees, based on my tutoring experiences. I am not sure to what degree you can "force" understanding: you can provide answers that require understanding, but it helps to guide that process.
I went to a lot of schools, so I can contrast here.
I had more than one teacher that taught me...
I think the worry is that they are only concerned about getting the answer that gets them the good grade, rather than understanding why the answer they get is the right answer.
So you end up learning the symbols "5x5=25," but you don't know what that means. You may not even have an idea that corresponds to multiplication. You just know that when you see "5x5=" you write "25." If I ask you what multiplication is, you can't tell me: you don't actually know. You are disconnected from what the process you are learning is supposed to be tracking, because all you have learned is to put in symbols where you see other symbols.
I think you are discounting effects such as confirmation bias, which lead us to notice what we expect and can easily label while leading us to ignore information that contradicts our beliefs. If 99 out of 100 women don't nag and 95 out of 100 men don't nag, given a stereotype that women nag, I would expect people to think of the one woman they know who nags, rather than the 5 men they know who do the same.
Frankly, without data to support the claim that:
There is a lot of truth in stereotypes
I would find the claim highly suspect, given even a rudimentary understanding of human psychology.
I seriously doubt that most people who make up jokes or stereotypes truly have enough data on hand to reasonably support even a generalization of this nature.
Groupthink is as powerful as ever. Why is that? I'll tell you. It's because the world is run by extraverts.
The problem with extraverts... is a lack of imagination.
pretty much everything that is organized is organized by extraverts, which in turn is their justification for ruling the world.
This seems to be largely an article about how we Greens are so much better than those Blues rather than offering much that is useful.
I don't have the answer but would be extremely interested in knowing it.
(Sorry this comment isn't more helpful. I am trying to get better at publicly acknowledging when I don't know an answer to a useful question in the hopes that this will reduce the sting of it.)
A potential practical worry for this argument: it is unlikely that any such technology will grant just enough for one dose for each person and no more, ever. Most resources are better collected, refined, processed, and utilized when you have groups. Moreover, existential risks tend to increase as the population decreases: a species with only 10 members is more likely to die out than a species with 10 million, ceteris paribus. The pill might extend your life, but if you have an accident, you probably need other people around.
There might be some ideal number...
Where is the incentive for them to consider the public interest, save for insofar as it is the same as the company interest?
It sounds like you think there is a problem: that executives being ruthless is not necessarily beneficial for society as a whole. But I don't think that's the root problem. Even if you got rid of all of the ruthless executives and replaced them with competitive-yet-conscientious executives, the pressures that create and nurture ruthless executives would still be in place. There are ruthless executives because the environment favors them in many circumstances.
Edited. Thanks.
Your title asks a different question than your post: "useful" vs. being a "social virtue."
Consider two companies: A and B. Each has the option to pursue some plan X, or its alternative Y. X is more ruthless than Y (X may involve laying off a large portion of the workforce, running a misinformation campaign, or using aggressive and unethical sales tactics), but X also stands to be more profitable than Y.
If the decision of which plan to pursue falls to a ruthless individual in company A, company A will likely pursue X. If the decision falls to a...
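A toy selection model of that environmental pressure (all growth numbers invented): even if no individual sets out to be ruthless, firms whose decision-makers keep choosing X accumulate market share.

```python
# Toy model with invented growth rates: firms whose decision-makers are
# "ruthless" always pick the more profitable plan X; the rest pick Y.
# Profit compounds, so selection concentrates market share in X-choosers.
firms = [{"ruthless": r, "capital": 1.0} for r in [True] * 5 + [False] * 5]

for year in range(20):
    for f in firms:
        f["capital"] *= 1.10 if f["ruthless"] else 1.05  # X out-earns Y

total = sum(f["capital"] for f in firms)
share = sum(f["capital"] for f in firms if f["ruthless"]) / total
print(f"ruthless firms' market share after 20 years: {share:.0%}")  # ~72%
```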
You say this is why you are not worried about the singularity, because organizations are supra-human intelligences that seek to self-modify and become smarter.
So is your claim that you are not worried about unfriendly organizations? Because on the face of it, there is good reason to worry about organizations with values that are unfriendly toward human values.
Now, I don't think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well. For now they are stuck with (mostly) humans for hardwar...
Definitely. These are the sorts of things that would need to be evaluated if my very rough sketch were to be turned into an actual theory of values.
Well, effectiveness and desire are two different things.
That aside, you could be posting for desires that are non-status related and still desire status. Human beings are certainly capable of wanting more than one thing at a time. So even if this post was motivated by some non-status related desire, that fact would not, in and of itself, be evidence that you don't desire status.
I'm not actually suggesting you update for you: you have a great deal more access to the information present inside your head than I do. I don't even have an evidence-based argument...
Eh... but people like rock stars even though most people are NOT rock stars. People like people with really good looks even though most people don't have good looks. And most people do have some sort of halo effect on wealthy people they actually meet, if not "the 1%" as a class.
I am not sure that a person who has no desire for status will write a post about how they have no desire for status that much more often than someone who does desire status. Particularly if this "desire" can be stronger or weaker. So it could be:
A- The person re...
I upvoted it because the minimum we'd get without running a study would be anecdotal evidence.
I'm not sure that there is a close link between "status" and "behaving." Most of the kids I knew who I would call "status-seeking" were not particularly well behaved: often the opposite. Most of the things you are talking about seem to fall into "good behavior" rather than "status."
Additionally... well, we'd probably need to track a whole lot of factors to figure out which ones, based on your environment, would be selected for. And currently, I have no theory as to which timeframes would be the most important to look at, which would make such a search more difficult.
I wouldn't say it has no bearing. If C. elegans could NOT be uploaded in a way that preserved behaviors/memories, you would assign a high probability to human brains not being able to be uploaded. So:
If P(C. elegans is not uploadable) goes up, then P(humans are not uploadable) goes WAY up.
Of course, this commits us to the converse. And since the converse is what happened, we would say that it does raise the Human & Uploadable probabilities. Maybe not by MUCH. You rightly point out the dissimilarities that would make it a relatively small increase. But it certainly has some bearing, and in the absence of better evidence it is at least encouraging.
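A sketch of why the update is so asymmetric, with invented likelihoods: a worm-upload failure would have been strong evidence against human uploadability, while the observed success is only weakly in its favor.

```python
# Bayes update with made-up numbers. H = "human brains are uploadable".
# Assume: if H is true, a C. elegans upload almost certainly works (0.99);
# even if H is false, the far simpler worm probably still works (0.90).
prior_h = 0.5
p_success_given_h = 0.99
p_success_given_not_h = 0.90

def posterior(prior: float, p_e_h: float, p_e_not_h: float) -> float:
    """P(H | evidence) by Bayes' theorem."""
    num = p_e_h * prior
    return num / (num + p_e_not_h * (1 - prior))

# Success observed: a small nudge upward.
print(posterior(prior_h, p_success_given_h, p_success_given_not_h))  # ~0.524
# Had it failed instead: a large shove downward.
print(posterior(prior_h, 1 - p_success_given_h, 1 - p_success_given_not_h))  # ~0.091
```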
Yeesh. Step out for a couple days to work on your bodyhacking and there's a trench war going on when you get back...
In all seriousness, there seems to be a lot of shouting here. Intelligent shouting, mind you, but I am not sure how much of it is actually informative.
This looks like a pretty simple situation to run a cost/benefit on: will censoring of the sort proposed help, hurt, or have little appreciable effect on the community?
Benefits: May help public image. (Sub-benefits: Make LW more friendly to new persons, advance SIAI-related PR); May reduce brain...
Hm. I know that the biological term may not be quite right here (although the brain is biological, scaling this idea up may be problematic) but I have wondered if certain psychological traits are not epigenetic: that is, it isn't that you are some strange mutant if you express terminal value X strongly and someone else expresses it weakly. Rather, that our brain structures lead to a certain common set of shared values but that different environmental conditions lead to those values being expressed in a stronger or weaker sense.
So, for instance, if "st...
The fact that I won't be able to care about it once I am dead doesn't mean that I don't value it now. And I can value future-states from present-states, even if those future-states do not include my person. I don't want future sapient life to be wiped out, and that is a statement about my current preferences, not my 'after death' preferences. (Which, as noted, do not exist.)
The difference is whether or not you care about sapience as instrumental or terminal values.
If I only instrumentally value other sapient beings existing, then of course, I don't care whether or not they exist after I die. (They will cease to add to my utility function, through no fault of their own.)
But if I value the existence of sapient beings as a terminal value, then why would it matter if I am dead or alive?
So, if I only value sapience because, say, other sapient beings existing makes life easier than it would be if I was the only one, then of course ...
I think I have a different introspection here.
When I have a feeling such as 'doing-whats-right' there is a positive emotional response associated with it. Immediately I attach semantic content to that emotion: I identify it as being produced by the 'doing-whats-right' emotion. How do I do this? I suspect that my brain has done the work to figure out that emotional response X is associated with behavior Y, and just does the work quickly.
But this is malleable. Over time, the emotional response associated with an act can change and this does not necessarily in...
I am not sure that all humans have the empathy toward humanity on the whole that is assumed by Adams here.
students might learn about the debate between Realism and Nominalism, and then be expected to write a paper about which one they think is correct (or neither). Sure, we could just tell them the entire debate was confused...
This would require a larger proportion of philosophy professors to admit that the debate is confused.
Working in philosophy, I see some move toward this, but it is slow and scattered. The problem is probably partially historical: philosophy PhDs trained in older methods train their students, who become philosophy PhDs trained in their professor's methods, plus anything they could weasel into the system that they thought important. (Which may not always be good modifications, of course.)
It probably doesn't help that your average philosophy grad student starts off by TAing a bunch of courses with a professor who sets up the lecture and the material and the gr...
Well, you can get up to 99 points for being 99 percent confident and getting the right answer, or minus several hundred (I have yet to fail at a 99 so I don't know how many) for failing at that same interval.
Wrong answers are, for the same confidence interval, more effective at bringing down your score than right answers are at bringing it up, so in some sense, as long as you are staying positive, you're doing well.
But if you want to compare further, you'd have to take into account how many questions you've answered, as your lifetime total will be different ...
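That asymmetry is what you'd get from a logarithmic scoring rule; a sketch assuming the score is 100·log2(2p), where p is the credence you put on the answer that turned out correct (the exact rule the game uses is an assumption on my part):

```python
import math

# Sketch of a logarithmic scoring rule matching the described asymmetry.
# p is the credence you assigned to the answer that turned out correct.
def score(p: float) -> float:
    return 100 * math.log2(2 * p)

print(round(score(0.99)))  # +99  (right at 99%)
print(round(score(0.01)))  # -564 (wrong at 99%: "minus several hundred")
print(round(score(0.50)))  # 0    (a coin flip earns nothing)
```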
High score seems to be good in terms of "My confident beliefs tend to be right."
Having your bars on the graph line up with the diagonal line would be an "ideal" graph (neither over- nor under-confident).
To clarify, wrong in any of my answers at the 99% level. I have been wrong at other levels (including, surprisingly, hovering within around 1% of 90% at the 90% level).
Well, in 11 out of 145 answers (7.5%) I so far have answered 99%, and I have yet to be wrong in any of my answers.
If I continue at this rate, in approximately 1,174 more answers, I'll be able to tell you if I am well calibrated (less, if I fail at more than one answer in the intervening time).
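(Presumably the arithmetic behind that figure: at 99% confidence you expect one miss per 100 answers, so a clean record starts to mean something once you have ~100 answers at that level. Getting from 11 to 100 means ~89 more 99% answers, and at the current rate of 11 per 145 that takes about 89 × 145 / 11 ≈ 1,173 more answers overall.)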
Question 1: This depends on the technical details of what has been lost. If it is merely an access problem: if there are good reasons to believe that current/future technologies of this resurrection society will be able to restore my faculties post-resurrection, I would be willing to go for as low as .5 for the sake of advancing the technology. If we are talking about permanent loss, but with potential repair (so, memories are just gone, but I could repair my ability to remember in the future) probably .9. If the difficulties would literally be permanent, 1....
I had an interesting experience with this, and I am wondering if others on the male side had the same.
I tried to imagine myself in these situations. When a situation did not seem to have any personal impact from the first person, or at best a very mild discomfort, I tried to rearrange the scenario with social penalties that I would find distressing. (Social penalties do differ based on gender roles.)
I found this provoked a fear response. If I give it voice, it sounds like "This isn't relevant/I won't be in this scenario/You would just.../Why are you doi...
Another thought: once you have a large bank of questions, consider "theme questions" as something people can buy with coins. Yes, that becomes a matter of showing off rather than the main point, but people LIKE to show off.
Suggestions (for general audience outside of LW/Rationalist circles)
I like the name "Confidence Game": it reminds people of a con game while informing them of the point of the game.
See if you can focus on a positive-point scale. Try to make it so that winning nets you a lot of points but "losing" costs only a couple. (Same effect on scores either way.) This won't seem as odd if you set it up as one long scale rather than two shorter ones: so 99-90-80-60-50-60-80-90-99.
Setting it to a timer will make it ADDICTIVE. Set it up ...
One could judge the strength of these with a few empirical tests: such as for (2), comparing industries where it is clear that the skills learned in college (or in a particular major) are particularly relevant vs. industries where it is not as clear, and comparing the number of college grads w/ the relevant skill-signals vs. college grads w/o the relevant skill-signals vs. non-college grads; and for (3), looking to industries where signals of pre-existing ability in that industry do not conform to being in college and comparing their rate of hiring grads v...