It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is building relationships with people more powerful than oneself. So much so that this easily trumps the vast majority of the other failings (epistemic-rationality-wise) discussed on LW. So why aren't we discussing how to do better at this regularly? A couple of explanations immediately leap to mind:
Not a core competency of the sort of people LW attracts.
Rewards not as immediate as the sort of epiphany porn that some of LW generates.
Ugh fields. Especially in regard to things that are considered manipulative when reasoned about explicitly, even though we all do them all the time anyway.
LW's foundational posts are all very strongly biased towards epistemic rationality, and I think that strong bias still affects our attempts to talk about instrumental rationality. There are probably all sorts of instrumentally rational things we could be doing that we don't talk about enough.
Do you have any experience doing this successfully? I'd assume that powerful people already have lots of folks trying to make friends with them.
Specifically for business, I do.
The general angle is asking intelligent, forward-pointing questions, specifically because deep processing of thoughts (as described in Thinking, Fast and Slow) is rare, even within the business community; so demonstrating understanding and curiosity (both of which are strengths of people on LW) is an almost instant win.
Two of the better guides on how to approach this intelligently are:
The other aspect of this is Speaking the Lingo. The problem with LW is:
1, people developing gravity wells around specific topics, and having a very hard time talking about stuff others are interested in without bringing up pet topics of their own; and
2, the inference distance between the kind of stuff that puts people into powerful position, and the kind of stuff LW develops a gravity well around is, indeed, vast.
The operational hack here is 1, listening; 2, building up the scaffolds on which these people hang their power; 3, recognizing whether you have an understanding of how those pieces fit together.
General algorithm for the network...
I'm not sure that being a rationalist gives you a significant advantage in interpersonal relationships. A lot of our brain seems to be specifically designed for social interactions; trying to use the rational part of your brain to do social interactions is like using a CPU chip to do graphics instead of a GPU; you can do it, but it'll be slower and less efficient and effective than using the hardware that's designed for that.
I like this idea! I feel like the current questions are insufficiently "stupid," so here's one: how do you talk to strangers?
The downsides of talking to strangers are really, really low. Your feelings of anxiety are just lies from your brain.
I've found it helps to write a script ahead of time for particular situations, with some thought about different possible variations in how the conversation could go.
Honestly, not sure I understand the question.
"hi, can I buy you a drink?" is also bad for other reasons, because this often opens a kind of transactional model of things where there's kind of an idea that you're buying her time, either for conversation or for other more intimate activities later. Now, this isn't explicitly the case, but it can get really awkward, so I'd seriously caution against opening with it.
I feel like I read something interesting about this on Mark Manson's blog but it's horribly organized so I can't find it now.
I've been reading PUA-esque stuff lately, and something they stress is that "the opener doesn't matter", "you can open with anything". This is in contrast to the older, cheesier, tactic-based PUAs who used to focus obsessively over finding the right line to open with. This advice is meant for approaching women in bars, but I imagine it holds true for most occasions you would want to talk to a stranger.
In general if you're in a social situation where strangers are approaching each other, then people are generally receptive to people approaching them and will be grateful that you are putting in the work of initiating contact and not them. People also understand that it's sometimes awkward to initiate with strangers, and will usually try to help you smooth things over if you initially make a rough landing. If you come in awkwardly, then you can gauge their reaction, calibrate to find a more appropriate tone, continue without drawing attention to the initial awkwardness, and things will be fine.
Personally, I think the best way to open a conversation with a stranger would just be to go up to them and say "Hey, I'm __" and offer a handshake. It's straightfo...
If you're in a situation where it's not necessarily common to approach strangers, you'll probably have to come up with some "excuse" for talking to them, like "that's a cool shirt" or "do you know where the library is?". Then you have to transition that into a conversation somehow. I'm not really sure how to do that part.
Here's a recent example (with a lady sitting beside me in the aeroplane; translated):
from which it was trivially easy to start a conversation.
She had been born in Brazil to Italian parents, had gone to school in Italy, and was working in the French-speaking part of Switzerland.
A "meeting" mindset implies equal footing and good intentions, which should be the foundation for any kind of positive human interaction.
Beware of 'should'. Subscribing to this ideal of equality rules out all sorts of positive human interactions that are not equal yet still beneficial. In fact, limiting oneself to human interactions on an equal footing would be outright socially crippling.
A good way to start is to say something about your situation (time, place, etc.). After that, I guess you could ask their names or something. I consider myself decent at talking to strangers, but I think it's less about what you say and more about the emotions you train yourself to have. If you see strangers as friends waiting to be made on an emotional level, you can just talk to them the way you'd talk to a friend. Standing somewhere with lots of foot traffic holding a "free hugs" sign under the influence of something disinhibiting might be helpful for building this attitude. If you currently are uncomfortable talking to strangers then whenever you do it, afterwards comfort yourself internally the same way you might comfort an animal (after all, you are an animal) and say stuff like "see? that wasn't so bad. you did great." etc. and try to build comfort through repeated small exposure (more).
I was climbing a tree yesterday and realized that I hadn't even thought that the people watching were going to judge me, and that I would have thought of it previously, and that it would have made it harder to just climb the tree. Then I thought that if I could use the same trick on social interaction, it would become much easier. Then I wondered how you might learn to use that trick.
In other words, I don't know, but the question I don't know the answer to is a little bit closer to success.
3) If you want to approach a person at a bar you might feel anxiety.
I recently found a nice mind hack for that: “What would my drunken self do?”
Data point counter to the other two replies you've gotten: I -- and, I perceive, most people, both introverted and extraverted -- am neither overjoyed nor horrified to have someone attempt to start a conversation with me on an airplane. I would say that as long as you can successfully read negative feedback, and disengage from the conversation, it is absolutely reasonable to attempt to start a conversation with a stranger next to you on an airplane.
Now, I can't tell if the objection is to 1) the mere act of attempting to talk to someone on an airplane at all, which I can't really understand, or 2) to the particular manner of your attempt, which does seem a bit talkative / familiar, and could perhaps be toned down.
Someone introducing themselves to you produces "seething, ulcerating rage"? Have you ever considered counseling or therapy?
In comment threads to feminist blog posts in reaction to a particular xkcd comic, I've seen good reasons why certain people might be very pissed off when other people try to talk to them somewhere they cannot get away from, though they mostly apply to women being talked to by men.
Sucks to be that person. Solution! Don't be that person!
Or, more precisely, if you are that person then do the personality development needed to remove the undesirable aspects of that social conditioning.
(You cannot control others' behaviour in the past. Unless they are extraordinarily good predictors, in which case by all means wreak acausal havoc upon them to prevent their to-be-counterfactual toxic training.)
Yikes. Duly noted. That is a useful data point, and it's the sort of the thing I need to keep in mind. I'm an extrovert temperamentally, and I grew up in a culture that encourages extroversion. This has mostly been an apparent advantage in social situations, because the people from whom you get an overt response are usually people who either share or appreciate that personality trait. But I've begun to realize there is a silent minority (perhaps a majority?) of people who find behavior like mine excessively familiar, annoying, perhaps even anxiety-inducing. And for various reasons, these people are discouraged from openly expressing their preferences in this regard in person, so I only hear about their objections in impersonal contexts like this.
I usually try to gauge whether people are receptive to spontaneous socializing before engaging in it, but I should keep in mind that I'm not a perfect judge of this kind of thing, and I probably still end up engaging unwilling participants. There is something selfish and entitled about recruiting a stranger into an activity I enjoy without having much of a sense of whether they enjoy it at all (especially if there are social pressures preventing them from saying that they don't enjoy it), and I should probably err on the side of not doing it.
I would guess that the part that caused such a strong reaction was this:
because I find it almost excruciatingly awkward to sit right next to somebody for hours without any communication except for quick glances. Why the hell do people do that?
You're not just introducing yourself: you are putting pressure on the other person to be social, both with the notion that you would find sitting in silence "excruciatingly" uncomfortable, and with the implication that a lack of communication is unusual and unacceptable.
Usually if somebody would introduce themselves and try to start a conversation, one could try to disengage, either with a polite "sorry, don't feel like talking" or with (more or less) subtle hints like giving short one-word responses, but that already feels somewhat impolite and is hard for many people. Your opening makes it even harder to try to avoid the conversation.
Hmm... good point. What I typed isn't exactly what I usually say, but I do tend to project my personal opinion that sitting quietly side by side is awkward and alien (to me) behavior. I can see how conveying that impression makes it difficult to disengage. And while I do find the silence pretty damn awkward, other people have no obligation to cater to my hang-ups, and it's kind of unfair to (unconsciously) manipulate them into that position. So on consideration, I'm retracting my initial post and reconsidering how I approach these conversations.
My suggestion: say “Hi” while looking at them; only introduce yourself to them if they say “Hi” back while looking back at you, and with an enthusiastic-sounding tone of voice.
(Myself, I go by Postel's Law here: I don't initiate conversations with strangers on a plane, but don't freak out when they initiate conversations with me either.)
As far as I'm concerned, although people like RolfAndreasson exist, they should in no way be included in the model of 'average person'. Seething rage at a mere unsolicited introduction is totally un-ordinary and arguably self-destructive behaviour, and I have no compunction about saying that RA definitely needs to recalibrate his own response, not you.
My impression of your introductory thing is that it's overly involved, maybe slightly overbearing. You don't need to justify yourself, just introduce yourself. A general rule that I've found reliable for social situations is "Don't explain things if explanations haven't been requested (unless you happen to really enjoy explaining this thing)"; it stops me from coming across as (or feeling) desperate and lets people take responsibility for their own potential discomfort.
Don't err on the side of not doing it. People are already encouraged to be way too self-involved, isolated, and "individualistic". Doing things together is good, especially if they challenge you both (whether that's by temporary discomfort, new concepts, or whatever). If they don't want to be involved let them take responsibility for communicating that, because it is their responsibility.
Do you not see the irony of forcing yourself on other people, despite their wishes, and justifying this by saying that they're too self-involved?
You are sitting so close to someone that parts of your bodies probably touch, you smell them, you feel them, you hear them. The one doing the forcing with all that is the evil aircraft company, and though it's customary to regard such forced close encounters as "non-spaces" by pretending that no, you're not crammed in with a stranger for hours and hours, the reality is that you are.
The question is how you react to that, and offering to acknowledge the presence of the other and to find out their wishes regarding the flight is the common sense thing to do. Like pinging a server, if you will. If you don't ask, you won't find out.
Well, if there are non-verbal hints (looking away etc), by all means, stay quiet. However, you probably clearly notice that a protocol which forbids offering to start a conversation would result in countless acquaintances and friends never meeting, even if both may have preferred conversation.
In the end, even to an introvert, simply stating "Oh hello, I'm so and so, unfortunately I have a lot on my mind, I'm sure you understand" isn't outside the bounds of the reasonable. Do you disagree?
As someone who has been "trapped" in dozens of conversations with someone seemingly nice but uninteresting it's surprisingly hard to straight up tell someone you don't want to talk to them. I
Exactly. I would be far more ok with a social norm that condoned introducing oneself to (and starting conversations with) people on planes if there was also a social norm that condoned saying "I don't want to talk to you. Kindly go away and leave me alone." Current social norms regard this as rude. (I take it our esteemed extrovert colleagues see the problem here.)
Only in a very isolated point of view is introducing yourself to someone nearby an invasion. The rest of the world regards it as an ordinary action. Saying that you've got a different temperament does NOT excuse you from being an ordinary human being who can handle other people doing socially normal things that you have not yet explicitly okayed.
As a stranger, If I walk up to you and randomly try to hug you, THAT'S an invasion. If I try to talk to you, that's just Tuesday (so to speak).
Please note that I'm not in any way suggesting anyone should force their company on another. I'm just saying, if you have ANY major reaction to something as ordinary as someone trying to introduce themselves to you, it is YOU that has the problem and you should be looking at yourself to see why you are having this extreme reaction to a non-extreme circumstance. On the other side of the equation, if you have introduced yourself and received a prompt and clear rejection, if you react majorly to that in any way (including forcing your continued company on them), you also have a problem of a similar nature.
If anyone is on either side of that equation, they have a problem with their emotional calibration...
The sense in which I'm promoting getting involved is not a 'do stuff! with people! cause it feels good!' sense -- that's just the how. I'm trying to point out that when you really get involved, you stop thinking you're so fucking right, stop being so short-sightedly involved in your immediate problems, and start looking at things in a more neutral, realistic way; And that's priceless, something that EVERYONE needs.
I really recommend not framing that sort of thing as a series of orders mixed with insults.
You are claiming to speak for all introverts, which turns this into an "introvert v extrovert" discussion. In other words, you are saying that half the population is forcing themselves onto the introverted half of the population. In reality, introverts are often the MOST happy that someone else initiated a conversation that they would be too shy to start themselves.
In reality, the situation is more like "NTs v non-NTs", and you are speaking for the non-NT part of the population. The same way you say half the population shouldn't force their preferences on the other half, I'm sure you can agree that 5% of the population shouldn't force their preferences (of non-interaction) onto the other 95%. Especially when the cost of nobody ever initiating conversations is significantly higher than the cost of being momentarily bothered by another person.
Actionable advice (for stopping an unwanted interaction): Answer in monosyllables or "hmm.." sounds. DON'T look at the person and smile. Maintain a neutral expression. Pull out your phone or a book, and direct your attention towards it, instead of the person.
Ways to end the conversation in a polite way: Say "...
I think it was NT as in NeuroTypical (not on the autism spectrum), not NT as in intuitive-thinking.
Social norms are definitely written mostly by extraverts.
I believe this is more true of America than a number of other cultures.
In general, if you suggest a course of action to others that includes the word "just", you may be doing it wrong.
Very much this. Here's an excellent essay on the subject of "lullaby words", of which "just" is one. (The author suggests mentally replacing "just" with "have a lot of trouble to" in such formulations.)
If I have lost a puppy,
I desire to believe that I have lost a puppy.
If I have not lost a puppy,
I desire to believe that I have not lost a puppy.
Let me not become attached to puppies I may not want.
I'm in favor of making this a monthly or more thread as a way of subtracting some bloat from open threads in the same way the media threads do.
I also think that we should encourage lots of posts to these threads. After all, if you don't at least occasionally have a stupid question to ask, you're probably poorly calibrated on how many questions you should be asking.
If no question you ask is ever considered stupid, you're not checking enough of your assumptions.
Why does anyone care about anthropics? It seems like a mess of tautologies and thought experiments that pays no rent in anticipated experiences.
An important thing to realize is that people working on anthropics are trying to come up with a precise inferential methodology. They're not trying to draw conclusions about the state of the world, they're trying to draw conclusions about how one should draw conclusions about the state of the world. Think of it as akin to Bayesianism. If someone read an introduction to Bayesian epistemology, and said "This is just a mess of tautologies (Bayes' theorem) and thought experiments (Dutch book arguments) that pays no rent in anticipated experience. Why should I care?", how would you respond? Presumably you'd tell them that they should care because understanding the Bayesian methodology helps people make sounder inferences about the world, even if it doesn't predict specific experiences. Understanding anthropics does the same thing (except perhaps not as ubiquitously).
So the point of understanding anthropics is not so much to directly predict experiences but to appreciate how exactly one should update on certain pieces of evidence. It's like understanding any other selection effect -- in order to properly interpret the significance of pieces of evidence you collect, you need to ...
If you taboo "anthropics" and replace by "observation selection effects" then there are all sorts of practical consequences. See the start of Nick Bostrom's book for some examples.
The other big reason for caring is the "Doomsday argument" and the fact that all attempts to refute it have so far failed. Almost everyone who's heard of the argument thinks there's something trivially wrong with it, but all the obvious objections can be dealt with (e.g. see later in Bostrom's book). Further, alternative approaches to anthropics (such as the "self indication assumption"), or attempts to completely bypass anthropics (such as "full non-indexical conditioning"), have been developed to avoid the Doomsday conclusion. But very surprisingly, they end up reproducing it. See Katja Grace's thesis.
Do you build willpower in the long-run by resisting temptation? Is willpower, in the short-term at least, a limited and depletable resource?
Is willpower, in the short-term at least, a limited and depletable resource?
I felt that Robert Kurzban presented a pretty good argument against the "willpower as a resource" model in Why Everyone (Else) Is a Hypocrite:
...[After criticizing studies trying to show that willpower is a resource that depends on glucose]
What about the more general notion that “willpower” is a “resource” that gets consumed or expended when one exerts self-control? First and foremost, let’s keep in mind that the idea is inconsistent with the most basic facts about how the mind works. The mind is an information-processing device. It’s not a hydraulic machine that runs out of water pressure or something like that. Of course it is a physical object, and of course it needs energy to operate. But mechanics is the wrong way to understand, or explain, its action, because changes in complex behavior are due to changes in information processing. The “willpower as resource” view abandons these intellectual gains of the cognitive revolution, and has no place in modern psychology. That leaves the question, of course, about what is going on in these studies.
Let’s back up for a moment and think about what th
Why is everyone so interested in decision theory? Especially the increasingly convoluted variants with strange acronyms that seem to be popping up.
As far as I can tell, LW was created explicitly with the goal of producing rationalists, one desirable side effect of which was the creation of friendly AI researchers. Decision theory plays a prominent role in Eliezer's conception of friendly AI, since a decision theory is how the AI is supposed to figure out the right thing to do. The obvious guesses don't work in the presence of things like other agents that can read the AI's source code, so we need to find some non-obvious guesses because that's something that could actually happen.
Hey, I think your tone here comes across as condescending, which goes against the spirit of a 'stupid questions' thread, by causing people to believe they will lose status by posting in here.
the sci-fi bit is only to make it easier to think about. The real world scenarios it corresponds to require the reader to have quite a bit more background material under their belt to reason carefully about.
When I'm in the presence of people who know more than me and I want to learn more, I never know how to ask questions that will inspire useful, specific answers. They just don't occur to me. How do you ask the right questions?
Lawyer's perspective:
People want to ask me about legal issues all the time. The best way to get a useful answer is to describe your current situation, the cause of your current situation, and what you want to change. Thus:
I have severe injuries, caused by that other person hitting me with their car. I want that person's driver's license taken away.
Then I can say something like: Your desired remedy is not available for REASONS, but instead, you could get REMEDY. Here are the facts and analysis that would affect whether REMEDY is available.
In short, try to define the problem. fubarobfusco has some good advice about how to refine your articulation of a problem. That said, if you have reason to believe a person knows something useful, you probably already know enough to articulate your question.
The point of my formulation is to avoid assumptions that distort the analysis. Suppose someone in the situation I described above said "I was maliciously and negligently injured by that person's driving. I want them in prison." At that point, my response needs to detangle a lot of confusions before I can say anything useful.
To what degree does everyone here literally calculate numerical outcomes and make decisions based on those outcomes for everyday decisions using Bayesian probability? Sometimes I can't tell if when people say they are 'updating priors' they are literally doing a calculation and literally have a new number stored somewhere in their head that they keep track of constantly.
If anyone does this could you elaborate more on how you do this? Do you have a book/spreadsheet full of different beliefs with different probabilities? Can you just keep track of it all in your mind? Or calculating probabilities like this only something people do for bigger life problems?
Can you give me a tip for how to start? Is there a set of core beliefs everyone should come up with priors for to start? I was going to apologize if this was a stupid question, but I suppose it should by definition be one if it is in this thread.
Nope, not for everyday decisions. For me "remember to update" is more of a mantra to remember to change your mind at all - especially based on several pieces of weak evidence, which normal procedure would be to individually disregard and thus never change your mind.
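If you do want to see what a literal calculation would look like, here is a minimal sketch in odds form, combining several pieces of weak evidence via likelihood ratios. All the numbers are made up for illustration; nobody here (as far as I know) actually runs this day to day.

```python
def update_odds(prior_prob, likelihood_ratios):
    """Posterior probability after multiplying the prior odds by the
    likelihood ratio P(E|H)/P(E|~H) of each piece of evidence."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical numbers: a 10% prior and three weak pieces of evidence,
# each only twice as likely if the hypothesis is true.
print(update_odds(0.10, [2, 2, 2]))  # ~0.47: several weak updates add up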
How do I get people to like me? It seems to me that this is a worthwhile goal; being likable increases the fun that both I and others have.
My issue is that likability usually means, "not being horribly self-centered." But I usually find I want people to like me more for self-centered reasons. It feels like a conundrum that just shouldn't be there if I weren't bitter about my isolation in the first place. But that's the issue.
This was a big realization for me personally:
If you are trying to get someone to like you, you should strive to maintain a friendly, positive interaction with that person in which he or she feels comfortable and happy on a moment-by-moment basis. You should not try to directly alter that person's opinion of you, in the sense that if you are operating on a principle of "I will show this person that I am smart, and he will like me", "I will show this person I am cool, and she will like me," or even "I will show this person that I am nice, and he will like me", you are pursuing a strategy that can be ineffective and possibly lead people to see you as self-centered. This might be what people say when they mean "be yourself" or "don't worry about what other people think of you".
Also, Succeed Socially is a good resource.
Also, getting certain people to like you is way, way, way, way harder than getting certain other people to like you. And in many situations you get to choose whom to interact with.
Do what your comparative advantage is.
In actuality, a lot of people can like you a lot even if you are not selfless. It is not so much that you need to ignore what makes you happy as that you need to pay attention and energy to what makes other people happy. A trivial if sordid example: you don't get someone wanting to have sex with you by telling them how attractive you are; you will do better by telling them, and making it obvious, that you find them attractive. That you will take pleasure in their increased attention to you is not held against you at all, even though it means you are not selfless. Your need or desire for them is the attractor to them.
So don't abnegate, ignore, deny, your own needs. But run an internal model where other people's needs are primary to suggest actions you can take that will serve them and glue them to you.
Horribly self-centered isn't a statement that you elevate your own needs too high. It is that you are too ignorant and unreactive to other people's needs.
Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? doing something like just telling everyone you meet "hey you're cute want to make out?" seems like it would go badly.
Slightly increase eye contact. Orient towards. Mirror posture. Use touch during interaction (in whatever ways are locally considered non-creepy).
What's with the ems? People who are into ems seem to make a lot of assumptions about what ems are like and seem completely unattached to present-day culture or even structure of life, seem willing to spam duplicates of people around, etc. I know that Hanson thinks that 1. ems will not be robbed of their humanity and 2. that lots of things we currently consider horrible will come to pass and be accepted, but it's rather strange just how as soon as people say 'em' (as opposed to any other form of uploading) everything gets weird. Does anthropics come into it?
Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI? It seems like a much lower-risk project. (and yes, I'm aware of the need for Friendliness in Obedient AI.)
Basically it's a matter of natural selection. Given a starting population of ems, if some are unwilling to be copied, the ones that are willing to be copied will dominate the population in short order. If ems are useful for work, i.e. valuable, then the more valuable ones will be copied more often. At that point, ems that are willing to be copied and do slave labor effectively with no complaints will become the most copied, and the population of ems will end up being composed largely of copies of the person/people who are 1) ok with being copied, 2) ok with being modified to work more effectively.
Well, no offense, but I'm not sure you are aware of the need for Friendliness in Obedient AI, or rather, just how much F you need in a genie.
If you were to actually figure out how to build a genie you would have figured it out by trying to build a CEV-class AI, intending to tackle all those challenges, tackling all those challenges, having pretty good solutions to all of those challenges, not trusting those solutions quite enough, and temporarily retreating to a mere genie which had ALL of the safety measures one would intuitively imagine necessary for a CEV-class independently-acting unchecked AI, to the best grade you could currently implement them. Anyone who thought they could skip the hard parts of CEV-class FAI by just building a genie instead, would die like a squirrel under a lawnmower. For reasons they didn't even understand because they hadn't become engaged with that part of the problem.
I'm not certain that this must happen in reality. The problem might have much kinder qualities than I anticipate in the sense of mistakes naturally showing up early enough and blatantly enough for corner-cutters to spot them. But it's how things are looking as a default after becoming engaged with the problems of CEV-class AI. The same problems show up in proposed 'genies' too, it's just that the genie-proposers don't realize it.
It seems to me that there are basically two approaches to preventing an UFAI intelligence explosion: a) making sure that the first intelligence explosion is an FAI instead; b) making sure that an intelligence explosion never occurs. The first one involves solving (with no margin for error) the philosophical/ethical/logical/mathematical problem of defining FAI, and in addition the sociological/political problem of doing it "in time", convincing everyone else, and ensuring that the first intelligence explosion occurs according to this resolution. The second one involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.
Naively, it seems to me that the second approach is more viable--it seems comparable in scale to something between stopping use of CFCs (fairly easy) and stopping global warming (very difficult, but it is premature to say impossible). At any rate, sounds easier than solving (over a few year/decades) so many hard philosophical and mathematical problems, with no margin for error and under time pressure to do it ahead of UFAI developing.
However, ...
I think it's easier to get a tiny fraction of the planet to do a complex right thing than to get 99.9% of a planet to do a simpler right thing, especially if 99.9% compliance may not be enough and 99.999% compliance may be required instead.
When I see proposals that involve convincing everyone on the planet to do something, I write them off as loony-eyed idealism and move on. So, creating FAI would have to be hard enough that I considered it too "impossible" to be attempted (with this fact putatively being known to me given already-achieved knowledge), and then I would swap to human intelligence enhancement or something because, obviously, you're not going to persuade everyone on the planet to agree with you.
Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.
Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily friendly singleton).
So FAI is actually the easiest way to prevent UFAI.
The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.
Hmmm. What is it going to do that is bad, given that it has the power to do the right thing, and is Friendly?
We have inherited some anti-authoritarian propaganda memes from a cultural war that is no longer relevant, and those taint the evaluation of a Singleton, even though they really don't apply. At least that's how it felt to me when I thought through it.
We discuss this proposal in Responses to Catastrophic AGI Risk, under the sections "Regulate research" and "Relinquish technology". I recommend reading both of those sections if you're interested, but a few relevant excerpts:
...Large-scale surveillance efforts are ethically problematic and face major political resistance, and it seems unlikely that current political opinion would support the creation of a far-reaching surveillance network for the sake of AGI risk alone. The extent to which such extremes would be necessary depends on exactly how easy it would be to develop AGI in secret. Although several authors make the point that AGI is much easier to develop unnoticed than something like nuclear weapons (McGinnis 2010; Miller 2012), cutting edge high-tech research does tend to require major investments which might plausibly be detected even by less elaborate surveillance efforts. [...]
Even under such conditions, there is no clear way to define what counts as dangerous AGI. Goertzel and Pitt (2012) point out that there is no clear division between narrow AI and AGI, and attempts to establish such criteria have failed. They argue that since AGI has a nebulous def
Butler (1863) writing that “war to the death should be instantly proclaimed”
I had no idea that Herbert's Butlerian Jihad might be a historical reference.
How close are we to illicit chip manufacturing?
Cutting-edge chip manufacturing of the necessary sort? I believe we are lightyears away and things like 3D printing are irrelevant, and that it's a little like asking how close we are to people running Manhattan Projects in their garage*; see my essay for details.
* Literally. The estimated budget for an upcoming Taiwanese chip fab is equal to some inflation-adjusted estimates of the Manhattan Project.
I sometimes contemplate undertaking a major project. When I do so, I tend to end up reasoning like this:
It would be very good if I could finish this project. However, almost all the benefits of attempting the project will accrue when it's finished. (For example, a half-written computer game doesn't run at all, one semester's study of a foreign language won't let me read untranslated literature, an almost-graduated student doesn't have a degree, and so on.) Undertaking this project will require a lot of time and effort spent on activities that aren't enjoyable for their own sake, and there's a good chance I'll get frustrated and give up before actually completing the project. So it would be better not to bother; the benefits of successfully completing the project seem unlikely to be large enough to justify the delay and risk involved.
As a result, I find myself almost never attempting a project of any kind that involves effort and will take longer than a few days. But I don't want to live my life having done nothing. Advice?
a half-written computer game doesn't run at all
I realize this does not really address your main point, but you can have half-written games that do run. I've been writing a game on and off for the last couple of years, and it's been playable the whole time. Make the simplest possible underlying engine first, so it's playable (and testable) as soon as possible.
Earth is currently the only known biosphere. More biospheres means that disasters that muck up one are less likely to muck up everything.
Less seriously, people like things that are cool.
EDIT: Seriously? My most-upvoted comment of all time? Really? This is as good as it gets?
1: It's awesome. It's desirable for the same reason fast cars, fun computer games, giant pyramids, and sex are.
2: It's an insurance policy against things that might wreck the earth but not other planets/solar systems.
3: Insofar as we can imagine there to be other alien races, understanding space colonization is extremely important either for trade or self defense.
4: It's possible different subsets of humanity can never happily coexist, in which case having arbitrarily large amounts of space to live in ensures more peace and stability.
Would you rather have one person living a happy, fulfilled life, or two? Would you rather have seven billion people living with happy, fulfilled lives, or seven billion planets full of people living happy, fulfilled lives?
How do you get someone to understand your words as they are, denotatively -- so that they do not overly-emphasize (non-existent) hidden connotations?
Of course, you should choose your words carefully, taking into account how they may be (mis)interpreted, but you can't always tie yourself into knots forestalling every possible guess about what your intentions "really" are.
Establish a strong social script regarding instances where words should be taken denotatively, e.g. Crocker's rules. I don't think any other obvious strategies work. Hidden connotations exist whether you want them to or not.
(non-existent)
This is the wrong attitude about how communication works. What matters is not what you intended to communicate but what actually gets communicated. The person you're communicating with is performing a Bayesian update on the words that are coming out of your mouth to figure out what's actually going on, and it's your job to provide the Bayesian evidence that actually corresponds to the update you want.
Become more status conscious. You are most likely inadvertently saying things that sound like status moves, which prompts others to not take what you say at face value. I haven't figured out how to fix this completely, but I have gotten better at noticing it and sometimes preempting it.
Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling is not especially helping. Thanks in advance for your assistance.
I have decided to take small risks on a daily basis (for the danger/action feeling), but I have trouble finding specific examples. What are interesting small-scale risks to take? (give as many examples as possible)
Apparently some study found that the difference between people with bad luck and those with good luck is that people with good luck take lots of low-downside risks.
Can't help with specific suggestions, but thinking about it in terms of the decision-theory of why it's a good idea can help to guide your search. But you're doing it for the action-feeling...
Climb a tree.
Use a randomizer to choose someone in your address book and call them immediately (don't give yourself enough time to talk yourself out of it). It is a rush thinking about what to say as the phone is ringing. You are risking your social status (by coming off weird or awkward, in case you don't have anything sensible to say) without really harming anyone. On the plus side, you may make a new ally or rekindle an old relationship.
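If it helps, here's a minimal sketch of the randomizer part, assuming your contacts can be exported as simple name/number pairs (the names and numbers below are placeholders):

```python
import random

# Pick one contact at random and commit to calling them immediately.
contacts = [("Alice", "555-0101"), ("Bob", "555-0102"), ("Carol", "555-0103")]

name, number = random.choice(contacts)
print(f"Call {name} at {number} right now, no second-guessing.")
```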
Hi, have been reading this site only for a few months, glad that this thread came up. My stupid question : can a person simply be just lazy, and how does all the motivation/fighting akrasia techniques help such a person?
My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "react to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?
"We" (humans of this epoch) might work to thwart the appearance of UFAI. Is this actually a "good" thing from a utilitarian point of view?
Or put another way, would our CEV, our Coherent Extrapolated Volition, not expand to consider the utilities of vastly intelligent AIs and weight that in importance with their intelligence? In such a way that CEV winds up producing no distinction between UFAI and FAI, because the utility of such vast intelligences moves the utility of unmodified 21st century biological humans to fairly low significan...
Good news! Omega has offered you the chance to become a truly unconstrained User:mwengler, able to develop in directions you were previously cruelly denied!
Like - let's see - ooh, how about the freedom to betray all the friends you were previously constrained to care about? Or maybe the liberty to waste and destroy all those possessions and property you were viciously forced to value? Or how about you just sit there inertly forever, finally free from the evil colonialism of wanting to do things. Your pick!
Hah. Now I'm reminded of the first episode of Nisemonogatari where they discuss how the phrase "the courage to X" makes everything sound cooler and nobler:
"The courage to keep your secret to yourself!"
"The courage to lie to your lover!"
"The courage to betray your comrades!"
"The courage to be a lazy bum!"
"The courage to admit defeat!"
In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility.
Nope. For me, it's the fact that they're human. Intelligence is a fake utility function.
With the recent update on HPMOR, I've been reading a few HP fanfictions: HPMOR, HP and the Natural 20, the recursive fanfiction HG and the Burden of Responsibility, and a few others. And it seems my brain has trouble coping with that. I didn't have the problem with just canon and HPMOR (even when (re-)reading both in parallel), but now that I've added more fanfictions to the mix, I'm starting to confuse what happened in which universe, and my brain can't stop trying to find ways to ensure all the fanfictions are just facets of a single coherent universe, which of...
The usual advice on how to fold a t-shirt starts with the assumption that your t-shirt is flat, but I'm pretty sure that getting the shirt flat takes me longer than folding it. My current flattening method is to grab the shirt by the insides of the sleeves to turn it right-side out, then grab the shoulder seams to shake it flat. Is there anything better?
In transparent box Newcomb's problem, in order to get the $1M, do you have to (precommit to) one box even if you see that there is nothing in box A?
If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity, and that at least personally I have radically changed my worldview a whole bunch of times, then it seems like I should assign at least a 5% or so probability to Christianity being true. How, therefore, does Pascal's Wager not apply to me? Even if we make it simpler by taking away the infinite utilities and merely treating Heaven as ten thousand years or so of the same level of happiness as the happiest day in my life, and treating Hell as ten thousand years or so of the same level of unhappiness as the unhappiest day in my life, the argument seems like it should still apply.
My admittedly very cynical point of view is to assume that, to a first-order approximation, most people don't have beliefs in the sense that LW uses the word. People just say words, mostly words that they've heard people they like say. You should be careful not to ascribe too much meaning to the words most people say.
In general, I think it's a mistake to view other people through an epistemic filter. View them through an instrumental filter instead: don't ask "what do these people believe?" but "what do these people do?" The first question might lead you to conclude that religious people are dumb. The second question might lead you to explore the various instrumental ways in which religious communities are winning relative to atheist communities, e.g. strong communal support networks, a large cached database of convenient heuristics for dealing with life situations, etc.
In the form of religious stories or perhaps advice from a religious leader. I should've been more specific than "life situations": my guess is that religious people acquire from their religion ways of dealing with, for example, grief and that atheists may not have cached any such procedures, so they have to figure out how to deal with things like grief.
If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity,
Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity. If an alarm goes off whenever there's an earthquake, but also whenever a car drives by outside, then the alarm going off is very weak (practically negligible) evidence for an earthquake. More technically, when you are trying to evaluate the extent to which E is good evidence for H (and consequently, how much you should update your belief in H based on E), you want to look not at the likelihood Pr(E|H), but at the likelihood ratio Pr(E|H)/Pr(E|~H). And the likelihood ratio in this case, I submit, is not much more than 1, which means that updating on the evidence shouldn't move your prior odds all that much.
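To make that concrete, here is a small illustration with made-up numbers showing how little a likelihood ratio near 1 moves a prior, compared with what genuinely strong evidence would do:

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability from prior odds times the likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

print(posterior(0.01, 1.2))  # ~0.012: a ratio near 1 barely moves a 1% prior
print(posterior(0.01, 100))  # ~0.50: what genuinely strong evidence would do
```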
and that at least personally I have radically changed my worldview a whole bunch of times,
This seems irrelevant to the truth of Christianity.
then it seems like I should assign at least a 5% or so probability to Christianity being true.
That probability is way too high.
http://en.wikipedia.org/wiki/List_of_religious_populations
How do you account for the other two thirds of people who don't believe in Christianity and commonly believe things directly contradictory to it? Insofar as every religion was once (when it started) vastly outnumbered by the others, you can't use population at any given point in history as evidence that a particular religion is likely to be true, since the same exact metric would condemn you to hell at many points in the past. There are several problems with Pascal's wager, but the biggest to me is that it's impossible to choose WHICH Pascal's wager to make. You can attempt to conform to all non-contradictory religious rules extant, but that still leaves the problem of choosing which contradictory commandments to obey, as well as the problem of what exactly god even wants from you, whether it's belief or simple ritual. The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true, putting the odds of "Christianity" being true at lower than even 1 percent, and the odds of any specific sect of Christianity being true even lower.
How do you account for the other two thirds of people who don't believe in Christianity and commonly believe things directly contradictory to it?
There are also various Christians who believe that other Christians who follow Christianity the wrong way will go to hell.
That is eerily similar to an Omega who deliberately favours specific decision theories instead of their results.
I don't think anyone thinks a Catholic priest can turn wine into blood on command.
Neither do Catholics think their priests turn wine into actual blood. After all, they're able to see and taste it as wine afterwards! Instead they're dualists: they believe the Platonic Form of the wine is replaced by that of blood, while the substance remains. And they think this makes testable predictions, because they think they have dualistic non-material souls which can then somehow experience the altered Form of the wine-blood.
Anyway, Catholicism makes lots of other predictions about the ordinary material world, which of course don't come true, and so it's more productive to focus on those. For instance, the efficacy of prayer, miraculous healing, and the power of sacred relics and places.
If literally the only evidence you had was that the overwhelming majority of people professed to believe in religion, then you should update in favor of religion being true.
Your belief that people are irrational relies on additional evidence of the type that I referenced. It is not contained in the fact of overwhelming belief.
Like how Knox's roommate's death by murder is evidence that Knox committed the murder. And that evidence is overwhelmed by other evidence that suggests Knox is not the murderer.
Can someone explain "reflective consistency" to me? I keep thinking I understand what it is and then finding out that no, I really don't. A rigorous-but-English definition would be ideal, but I would rather parse logic than get a less rigorous definition.
The people who think that nanobots will be able to manufacture arbitrary awesome things in arbitrary amounts at negligible costs... where do they think the nanobots will take the negentropy from?
Just now rushes onto Less Wrong to ask about taking advantage of 4chan's current offer of customized ad space to generate donations for MIRI
Sees thread title
Perfect.
So, would it be a good idea? The sheer volume of 4chan's traffic makes it a decent pool for donations, and given the attitude of its demographic, it might be possible to pitch the concept in an appealing way.
Linking to MIRI's donation page might be useful, but please please don't link to LessWrong on 4chan - it could have some horrible consequences.
How do people construct priors? Is it worth trying to figure out how to construct better priors?
Is it possible to train yourself in the Big Five personality traits? Specifically, conscientiousness seems to be correlated with a lot of positive outcomes, so a way of actively promoting it would seem a very useful trick to learn.
How does a rational consequentialist altruist think about moral luck and butterflies?
There's no point in worrying about the unpredictable consequences of your actions because you have no way of reliably affecting them by changing your actions.
In the process of trying to pin down my terminal values, I've discovered at least 3 subagents of myself with different desires, as well as my conscious one which doesn't have its own terminal values, and just listens to theirs and calculates the relevant instrumental values. Does LW have a way for the conscious me to weight those (sometimes contradictory) desires?
What I'm currently using is "the one who yells the loudest wins", but that doesn't seem entirely satisfactory.
Why is average utilitarianism popular among some folks here? The view doesn't seem to be at all popular among professional population ethicists.
What experiences would you anticipate in a world where utilitarianism is true that you wouldn't anticipate in a world where it is false?
In the former world, I anticipate that making decisions using utilitarianism would leave me satisfied upon sufficient reflection, and more reflection after that wouldn't change my opinion. In the latter world, I don't.
I'd like to use a prediction book to improve my calibration, but I think I'm failing at a more basic step: how do you find some nice simple things to predict, which will let you accumulate a lot of data points? I'm seeing predictions about sports games and political elections a lot, but I don't follow sports and political predictions both require a lot of research and are too few and far between to help me. The only other thing I can think of is highly personal predictions, like "There is a 90% chance I will get my homework done by X o'clock", but what are some good areas to test my prediction abilities on where I don't have the ability to change the outcome?
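For concreteness, here is roughly the kind of scoring I have in mind once I've logged some predictions (a minimal sketch; the confidence buckets and records below are made up):

```python
from collections import defaultdict

# Log predictions as (stated confidence, whether it came true) and compare
# each confidence bucket against its actual hit rate.
predictions = [(0.9, True), (0.9, True), (0.9, False), (0.7, True), (0.7, False)]

buckets = defaultdict(list)
for confidence, outcome in predictions:
    buckets[confidence].append(outcome)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Stated {confidence:.0%}: right {hit_rate:.0%} of the time ({len(outcomes)} predictions)")
```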
I keep hearing about all sorts of observations that seem to indicate Mars once had oceans (the latest was a geological structure that resembles Earth river deltas). But on first sight it seems like old dried up oceans should be easy to notice due to the salt flats they’d leave behind. I’m obviously making an assumption that isn’t true, but I can’t figure out which. Can anyone please point out what I’m missing?
As far as I can tell, my assumptions are:
1) Planets as similar to Earth as Mars is will have similarly high amounts of salt dissolved in their oceans...
Are there good reasons why when I do a google search on (Leary site:lesswrong.com) it comes up nearly empty? His ethos consisted of S.M.I**2.L.E, i.e. Space Migration + Intelligence Increase + Life Extension which seems like it should be right up your alley to me. His books are not well-organized; his live presentations and tapes had some wide appeal.
I am generally surprised when people say things like "I am surprised that topic X has not come up in forum / thread Y yet." The set of all possible things forum / thread Y could be talking about is extremely large. It is not in fact surprising that at least one such topic X exists.
If, as Michael Rose argues, our metabolisms revert to hunter-gatherer functioning past our reproductive years so that we would improve our health by eating approximations of paleolithic diets, does that also apply to adaptations to latitudes different from the ones our ancestors lived in?
In my case, I have Irish and British ancestry (my 23andMe results seem consistent with family traditions and names showing my origins), yet my immediate ancestors lived for several generations in the Southern states at latitudes far south from the British Isles. Would I improve my health in middle age by moving to a more northerly latitude, adopting a kind of "paleo-latitude" relocation analogous to adopting paleolithic nutrition?
What is more precious - the tigers of India, or lives of all the people eaten every year by the tigers of India?
A bit of quick Googling suggests that there are around 1500 tigers in India, and about 150 human deaths by tiger attack every year (that's the estimate for the Sundarbans region alone, but my impression is that tiger attack deaths outside the Sundarbans are negligible in comparison). Given those numbers, I would say that if the only way to prevent those deaths was to eliminate the tiger population and there wouldn't be any dire ecological consequences to the extinction, then I would support the elimination of the tiger population. But in actual fact, I am sure there are a number of ways to prevent most of those deaths without driving tigers to extinction, so the comparison of their relative values is a little bit pointless.
This will prevent tiger extinction in the same way it prevented cow extinction.
There are lots of reasons why farming cows is significantly easier than farming tigers.
Given Eliezer Yudkowsky's Peggy Sue parody, is there anything inherently inane about the Peggy Sue genre of fanfic? If so, I've missed it: what is it?
How do you deal with the Münchhausen trilemma? It used to not bother me much, and I think my (axiomatic-argument-based) reasoning was along the lines of "sure, the axioms might be wrong, but look at all the cool things that come out of them." The more time passes, though, the more concerned I become. So, how do you deal?
I have a question about the first logic puzzle here. The condition "Both sane and insane people are always perfectly honest, sane people have 100% true beliefs while insane people have 100% false beliefs" seems to be subtly different from Liar/Truth-teller. The Liar/Truth-teller thing is only activated when someone asks them a direct yes or no question, while in these puzzles the people are volunteering statements on their own.
My question is this: if every belief that an insane person holds is false, then does that also apply to beliefs about the...
Thank you for this thread - I have been reading a lot of the sequences here and I have a few stupid questions around FAI:
What research has been done on frameworks for managing an AI's information flow? For example, just before an AI "learns," it will likely be a piece of software rapidly processing information and trying to establish an understanding. What sorts of data structures and processes have been experimented with to handle this information?
Has there been an effort to build a dataset classifying (crowdsourcing?) what humans consider "good"/"bad", and specifically to explore how those labels could be used to influence the decisions of an AI?
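For what it's worth, here's a toy sketch of what such a crowd-sourced judgment dataset might look like and how an aggregate rating could be pulled out of it. The schema, field names, and example entries are all invented for illustration; as far as I know this isn't how any existing project structures its data.

```python
# Toy sketch of a crowd-sourced moral-judgment dataset. The schema and the
# example entries are invented for illustration only.
from dataclasses import dataclass

@dataclass
class JudgmentRecord:
    scenario: str   # short description of an action or outcome
    rating: float   # -1.0 (clearly bad) .. +1.0 (clearly good)
    n_raters: int   # how many people contributed to this rating

dataset = [
    JudgmentRecord("Returning a lost wallet with the cash inside", +0.95, 412),
    JudgmentRecord("Lying to a friend to avoid a minor inconvenience", -0.40, 389),
    JudgmentRecord("Diverting a trolley to kill one person instead of five", +0.15, 1050),
]

def aggregate_rating(records):
    """Weight each scenario's rating by how many raters stood behind it."""
    total_weight = sum(r.n_raters for r in records)
    return sum(r.rating * r.n_raters for r in records) / total_weight

# An AI consulting data like this would still face the hard parts: coverage of
# novel scenarios, disagreement between raters, and generalizing beyond the labels.
print(f"Weighted average rating: {aggregate_rating(dataset):+.2f}")
```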
If I am interested in self-testing different types of diets (paleo, vegan, soylent, etc.), how long is a reasonable time to try each out?
I'm specifically curious about how a diet would affect my energy level and sense of well-being, how much time and money I spend on a meal, whether strict adherence makes social situations difficult, etc. I'm not really interested in testing to a point that nutrient deficiencies show up or to see how long it takes me to get bored.
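In case it's useful, here's a minimal sketch of how a log like that could be compared across phases, assuming one subjective energy rating per day. All the numbers, phase names, and phase lengths are made up for illustration.

```python
# Toy self-experiment log: one subjective energy rating (1-10) per day, grouped
# by diet phase. All numbers here are invented for illustration.
from statistics import mean, stdev

log = {
    "baseline": [6, 5, 6, 7, 6, 5, 6, 6, 7, 6, 5, 6, 6, 6],
    "paleo":    [6, 6, 7, 7, 6, 8, 7, 7, 6, 7, 8, 7, 7, 7],
    "soylent":  [5, 6, 5, 6, 6, 5, 6, 5, 6, 6, 5, 6, 5, 6],
}

for phase, ratings in log.items():
    print(f"{phase:>8}: mean {mean(ratings):.1f}, sd {stdev(ratings):.1f}, "
          f"n = {len(ratings)} days")

# Whatever phase length you pick, day-to-day noise of this size makes small
# differences between phases easy to over-interpret; longer phases or repeated
# crossovers give a cleaner signal.
```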
I'd like to work on a hardware project. It seems rather simple (I'd basically start out trying to build this ( pdf / txt )); however, my lack of vision makes it difficult to just go check the nearest RadioShack for parts, and I'm also a bit concerned about safety issues (how easy would it be for someone without an electrical engineering background to screw up the current? Could I cause my headphone jack to explode? etc.). I'm mostly wondering how one should go about acquiring parts for DIY electronics, provided that travel options are limited. (I've done so...
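On the "could I fry something" worry, a rough sanity check is just Ohm's law. The battery voltage and headphone impedance below are ballpark assumptions I'm plugging in for illustration, not specs from the linked project.

```python
# Rough safety sanity check using Ohm's law (I = V / R). The numbers are
# typical ballpark assumptions, not specs from the linked project.
battery_voltage = 9.0        # volts, e.g. a 9V battery
headphone_impedance = 32.0   # ohms, a common headphone impedance

current_amps = battery_voltage / headphone_impedance
power_watts = battery_voltage * current_amps

print(f"Worst-case current: {current_amps * 1000:.0f} mA")
print(f"Worst-case power:   {power_watts:.1f} W")

# ~280 mA / 2.5 W into 32-ohm headphones is far above what small headphones are
# usually rated for, so the realistic failure mode is ruined headphones or a hot
# component rather than an explosion -- and a series resistor keeps the current
# in check.
```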
I debated over whether to include this in the HPMOR thread, but it's not specific to that story, and, well, it is kind of a stupid question.
How does backwards-only time travel work? Specifically, wouldn't a time traveler end up with dozens of slightly older or younger versions of herself all living at the same time? I guess "Yes" is a perfectly acceptable answer, but I've just never really seen the consequences addressed. I mean, given how many times Harry has used the Time Turner in HPMOR (just a convenient example), I'm wondering if there are ...
What are concrete ways that an unboxed AI could take over the world? People seem to skip from "UFAI created" to "UFAI rules the world" without explaining how the one must cause the other. It's not obvious to me that superhuman intelligence necessarily leads to superhuman power when constrained in material resources and allies.
Could someone sketch out a few example timelines of events for how a UFAI could take over the world?
Is true precommitment possible at all?
For humans this is an easy question: human will isn't perfect. But what about an AI? It seems to me that "true precommitment" would require the AI to assign a probability of 100% when it arrives at the decision to precommit, which means at least one prior was 100%, and that in turn means no update is possible for that prior.
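One way to see the "probability 1 means no update" point concretely is to run a Bayes update starting from a prior of exactly 1; the likelihood numbers below are arbitrary, chosen only to show that no evidence can move such a posterior.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / (P(E|H) * P(H) + P(E|~H) * P(~H)).
# With a prior of exactly 1, the posterior is 1 no matter what evidence arrives.
def posterior(prior, likelihood_given_h, likelihood_given_not_h):
    numerator = likelihood_given_h * prior
    denominator = numerator + likelihood_given_not_h * (1 - prior)
    return numerator / denominator

# Arbitrary likelihoods for illustration: evidence strongly favoring ~H.
print(posterior(1.0, 0.01, 0.99))   # -> 1.0: a probability-1 prior never moves
print(posterior(0.99, 0.01, 0.99))  # -> 0.5: even a 99% prior does update
```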
Why doesn't the Copernican Principle apply to inferences about the age and origins of the universe? Some cosmologists argue that we live in a privileged era of the universe, when we can still infer its origins because we can still observe the redshift of distant galaxies. After those galaxies pass beyond the event horizon, observers existing X billion years from now in our galaxy wouldn't have the data to deduce the universe's expansion, its apparent age, and therefore the Big Bang.
Yet the Copernican Principle denies the assumption that any privileged observers of ...
The Copernican principle is a statement of ignorance - it's a caution against making up the claim that we're at the center of the universe. This is to be distinguished from the positive knowledge that the universe is a uniform blob.
Is there any chance I might be sleep deprived if I wake up before my alarm goes off more than 95% of the time?
I've been working pretty much every day for the past year but I had two longish breaks. After each of them there was a long period of feeling pretty awful all the time. I figured out eventually that this was probably how long it took me to forget what ok feels like. Is this plausible or am I probably ok given sufficient sleep and adequate diet?
Also, does mixing modafinil and Starting Strength sound like a bad idea? I know sleep is really important for recovery and gainz, but SS doesn't top out at anything seriously strenuous for someone who isn't ill, and it demands less than 4 hours of gym time a week.
Why are we throwing the word "Intelligence" around like it actually means anything? The concept is so ill-defined that it should be in the same set as "Love."
I can't tell whether you're complaining about the word as it applies to humans or as it applies to abstract agents. If the former, to a first-order approximation it cashes out to g factor and this is a perfectly well-defined concept in psychometrics. You can measure it, and it makes decent predictions. If the latter, I think it's an interesting and nontrivial question how to define the intelligence of an abstract agent; Eliezer's working definition, at least in 2008, was in terms of efficient cross-domain optimization, and I think other authors use this definition as well.
When you ask someone to unpack a concept for you, it is counterproductive to repack it as you go. Fully unpacking the concept of "good" is basically the ultimate goal of MIRI.
I feel that perhaps you are operating on a different definition of unpack than I am. For me, "can be good at everything" is less evocative than "achieves its value when presented with a wide array of environments", in that the latter immediately suggests quantification whereas the former uses qualitative language, which was the point of the original question as far as I could see. To be specific: imagine a set of many different non-trivial agents, all of whom are paperclip maximizers. You create copies of each and place them in a variety of non-trivial simulated environments. The ones that average more paperclips across all environments could be said to be more intelligent.
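A toy version of that measurement, with the agents and environments stubbed out as random placeholders purely to show the shape of the quantification:

```python
# Toy sketch of "intelligence as average performance across environments".
# Agents and environments are stand-ins; only the shape of the measurement
# is meant to be illustrative.
import random

random.seed(0)

def run_agent_in_environment(agent_skill, environment_difficulty):
    """Stand-in for a real simulation: returns a paperclip count."""
    base = max(0.0, agent_skill - environment_difficulty)
    return base * random.uniform(0.8, 1.2)

agents = {"clippy_a": 0.9, "clippy_b": 0.6, "clippy_c": 0.3}   # hidden "skill"
environments = [random.uniform(0.0, 0.8) for _ in range(50)]   # varied difficulty

scores = {
    name: sum(run_agent_in_environment(skill, env) for env in environments) / len(environments)
    for name, skill in agents.items()
}

# Higher average paperclips across many varied environments = "more intelligent"
# in the sense described above.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f} average paperclips per environment")
```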
This seems like matching "chemistry" to "sexual" in order to maintain the sacredness of love, rather than to actually get to beliefs that cash out in valid predictions. People can reliably be made to fall in love with each other given the ability to manipulate some key variables. This should not make you retch with horror any more than the Stanford Prison Experiment already did. Alternatively, update toward being more horrified by the SPE than you were previously.
r/Fitness does a weekly "Moronic Monday", a judgment-free thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. I thought this seemed like a useful thing to have here - after all, the concepts discussed on LessWrong are probably at least a little harder to grasp than those of weightlifting. Plus, I have a few stupid questions of my own, so it doesn't seem unreasonable that other people might as well.