"Stupid" questions thread
r/Fitness does a weekly "Moronic Monday", a judgment-free thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. I thought this seemed like a useful thing to have here - after all, the concepts discussed on LessWrong are probably at least a little harder to grasp than those of weightlifting. Plus, I have a few stupid questions of my own, so it doesn't seem unreasonable that other people might as well.
Thank you for this thread - I have been reading a lot of the sequences here and I have a few stupid questions around FAI:
What research has been done on frameworks for managing an AI's information flow? For example, just before an AI 'learns', it will likely be a piece of software rapidly processing information and trying to establish an understanding. What sorts of data structures and processes have been experimented with to handle this information?
Has there been an effort to build a dataset to classify (crowdsource?) what humans consider "good"/"bad", and specifically how these classifications could be used to influence the decisions of an AI?
Can someone explain "reflective consistency" to me? I keep thinking I understand what it is and then finding out that no, I really don't. A rigorous-but-English definition would be ideal, but I would rather parse logic than get a less rigorous definition.
I'd like to use a prediction book to improve my calibration, but I think I'm failing at a more basic step: how do you find some nice simple things to predict, which will let you accumulate a lot of data points? I'm seeing a lot of predictions about sports games and political elections, but I don't follow sports, and political predictions require a lot of research and are too few and far between to help me. The only other thing I can think of is highly personal predictions, like "There is a 90% chance I will get my homework done by X o'clock", but what are some good areas to test my prediction abilities on where I don't have the ability to change the outcome?
Start with http://predictionbook.com/predictions/future
Predictions you aren't familiar with can be as useful as ones you are: you calibrate yourself under extreme uncertainty, and sometimes you can 'play the player' and make better predictions that way (works even with personal predictions by other people).
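If you want to put a number on how your calibration is doing as you go, one cheap option is to score your logged predictions with a Brier score. A minimal sketch, assuming you just keep your own list of (stated probability, did-it-happen) pairs - nothing here is tied to PredictionBook's actual export format:

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes (lower is better)."""
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in predictions) / len(predictions)

# Example log: "90% chance I finish my homework by 10pm", etc.
log = [(0.9, True), (0.7, False), (0.6, True), (0.95, True)]
print(brier_score(log))  # ~0.17 here; always guessing 50% would score 0.25
```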
Is it possible to train yourself in the Big Five personality traits? Specifically, conscientiousness seems to be correlated with a lot of positive outcomes, so a way of actively promoting it would seem a very useful trick to learn.
Not that I know of. The only current candidate is to take psilocybin to increase Openness, but the effect is relatively small, it hasn't been generalized outside the population of "people who would sign up to take psychedelics", and hasn't been replicated at all AFAIK (and for obvious reasons, there may never be a replication). People speculate that dopaminergic drugs like the amphetamines may be equivalent to an increase in Conscientiousness, but who knows?
I keep hearing about all sorts of observations that seem to indicate Mars once had oceans (the latest was a geological structure that resembles Earth river deltas). But on first sight it seems like old dried up oceans should be easy to notice due to the salt flats they’d leave behind. I’m obviously making an assumption that isn’t true, but I can’t figure out which. Can anyone please point out what I’m missing?
As far as I can tell, my assumptions are:
1) Planets as similar to Earth as Mars is will have similarly high amounts of salt dissolved in their oceans, conditional on having oceans. (Though I don’t know why NaCl in particular is so highly represented in Earth’s oceans, rather than other soluble salts.)
2) Most processes that drain oceans will leave the salt behind, or at least those that are plausible on Mars will.
3) Very large flat areas with a thick cover of salt will be visible at least to orbiters even after some billions of years. This is the one that seems most questionable, but seems sound assuming:
3a) a large NaCl-covered region will be easily detectable with remote spectroscopy, and 3b) even geologically long-term asteroid bombardment will preserve, over sea- and ocean-sized areas of salt flats, salt concentrations that are abnormally high, and significantly lower concentrations in the areas the water previously washed clean.
Again, 3b sounds like the most questionable. But Mars doesn't look, to a non-expert eye, like its surface was completely randomized. I mean, I know the first few (dozens of?) meters on the Moon are regolith, which basically means the surface was finely crushed and well-mixed, and I assume Mars would be similar though to a lesser extent. But this process seems to randomize mostly locally, not over the entire surface of the planet, and the fact that Mars has much more diverse forms of relief seems to support that.
It's not just NaCl; lots of minerals get deposited as the water they were dissolved in goes away - they're called 'evaporites'. They can be hard to see if they are very old and get covered with other substances, and Mars has had a long time for wind to blow teeny sediments everywhere. Rock spectroscopy is also not nearly as straightforward as that of gases.
One of the things found by recent rovers is indeed minerals that are only laid down in moist environments. See http://www.giss.nasa.gov/research/briefs/gornitz_07/ , http://onlinelibrary.wiley.com/doi/10.1002/gj.1326/abstract .
As for amounts of salinity... Mars probably never had quite as much water as Earth had and it may have gone away quickly. The deepest parts of the apparent Northern ocean probably only had a few hundred meters at most. That also means less evaporites. Additionally a lot of the other areas where water seemed to flow (especially away from the Northern lowlands) seem to have come from massive eruptions of ground-water that evaporated quickly after a gigantic flood rather than a long period of standing water.
What can be done about akrasia probably caused by anxiety?
From what I've seen valium helps to some extent.
Hi, have been reading this site only for a few months, glad that this thread came up. My stupid question : can a person simply be just lazy, and how does all the motivation/fighting akrasia techniques help such a person?
I think I'm simply lazy.
But I've been able to cultivate caring about particular goals/activities/habits, and then, with respect to those, I'm not so lazy - because I found them to offer frequent or large enough rewards, and I don't feel like I'm missing out on any particular type of reward. If you think you're missing something and you're not going after it, that might make you feel lazy about other things, even while you're avoiding tackling the thing that you're missing head on.
This doesn't answer your question. If I was able to do that, then I'm not just lazy.
Taboo "lazy." What kind of a person are we talking about, and do they want to change something about the kind of person they are?
Beyond needing to survive and maintain reasonable health, a lazy person can just while their time away and not do anything meaningful toward bettering themselves (better health, better earning ability, learning more skills, etc.). Is there a fundamental need to also try to improve as a person? What is the rationale behind self-improvement, or behind not wanting to pursue it?
I don't understand your question. If you don't want to self-improve, don't.
What do you mean by lazy? How do you distinguish between laziness and akrasia? By lazy do you mean something like "unmotivated and isn't bothered by that" or do you mean something else?
I sometimes contemplate undertaking a major project. When I do so, I tend to end up reasoning like this:
It would be very good if I could finish this project. However, almost all the benefits of attempting the project will accrue when it's finished. (For example, a half-written computer game doesn't run at all, one semester's study of a foreign language won't let me read untranslated literature, an almost-graduated student doesn't have a degree, and so on.) Undertaking this project will require a lot of time and effort spent on activities that aren't enjoyable for their own sake, and there's a good chance I'll get frustrated and give up before actually completing the project. So it would be better not to bother; the benefits of successfully completing the project seem unlikely to be large enough to justify the delay and risk involved.
As a result, I find myself almost never attempting a project of any kind that involves effort and will take longer than a few days. But I don't want to live my life having done nothing. Advice?
I have/had this problem. My computer and shelves are full of partially completed (or, more realistically, just-begun) projects.
So, what I'm doing at the moment is I've picked one of them, and that's the thing I'm going to complete. When I'm feeling motivated, that's what I work on. When I'm not feeling motivated, I try to do at least half an hour or so before I flake off and go play games or work on something that feels more awesome at the time. At those times my motivation isn't that I feel the project is worthwhile; it's that having gone through the process of actually finishing something will have been worthwhile.
It's possible after I'm done I may never put that kind of effort in again, but I will know (a) that I probably can achieve that sort of goal if I want and (b) if carrying on to completion is hell, what kind of hell and what achievement would be worth it.
Beeminder. Record the number of Pomodoros you spend working on the project and set some reasonable goal, e.g. one a day.
I realize this does not really address your main point, but you can have half-written games that do run. I've been writing a game on and off for the last couple of years, and it's been playable the whole time. Make the simplest possible underlying engine first, so it's playable (and testable) as soon as possible.
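To make that concrete, here's a toy sketch of the "playable the whole time" approach - the game and everything in it is made up for illustration, not my actual project. The point is that even a one-die-roll "engine" gives you something runnable that can be fleshed out later:

```python
import random

def play():
    hp = 10
    while hp > 0:
        cmd = input("fight or run? ").strip().lower()
        if cmd == "run":
            print("You escape. The end (for now).")
            return
        # The entire "combat engine" is one coin flip for now; it can be
        # replaced with something richer later without breaking playability.
        if random.random() < 0.5:
            print("You slay the monster!")
        else:
            hp -= 3
            print(f"The monster hits you. HP: {hp}")
    print("Game over.")

if __name__ == "__main__":
    play()
```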
This seems like a really good concept to keep in mind. I wonder if it could be applied to other fields? Could you make a pot that remains a pot the whole way through, even as you refine it and add detail? Could you write a song that starts off very simple but still pretty, and then gradually layer on the complexity?
Your post inspired me to try this with writing, so thank you. :) We could start with a one-sentence story: "Once upon a time, two lovers overcame vicious prejudice to be together."
And that could be expanded into a one-paragraph story: "Chanon had known all her life that the blue-haired Northerners were hated enemies, never to be trusted, that she had to keep her red-haired Southern bloodline pure or the world would be overrun by the blue barbarians. But everything was thrown in her face when she met Jasper - his hair was blue, but he was a true crimson-heart, as the saying went. She tried to find every excuse to hate him, but time and time again Jasper showed himself to be a man of honor and integrity, and when he rescued her from those lowlife highway robbers - how could she not fall in love? Her father hated it of course, but even she was shocked at how easily he disowned her, how casually he threw away the bonds of family for the chains of prejudice. She wasn't happy now, homeless and adrift, but she knew that she could never be happy again in the land she had once called home. Chanon and Jasper set out to unknown lands in the East, where hopefully they could find some acceptance and love for their purple family."
This could be turned into a one page story, and then a five page story, and so on, never losing the essence of the message. Iterative storytelling might be kind of fun for people who are trying to get into writing something long but don't know if they can stick it out for months or years.
I submit that this might generalize: that perhaps it's worth, where possible, trying to plan your projects with an iterative structure, so that feedback and reward appear gradually throughout the project, rather than in an all-or-nothing fashion at the very end. Tight feedback loops are a great thing in life. Granted, this is of no use for, for example, taking a degree.
In fact, the games I tend to make progress on are the ones I can get testable as quickly as possible. Unfortunately, those are usually the least complicated ones (glorified MUDs, an x axis with only 4 possible positions, etc).
I do want to do bigger and better things, and then I run into the same problem as CronoDAS. When I do start a bigger project, I can sometimes get started, then crash within the first hour and never return. (In a couple of extreme cases, I lasted for a good week before it died, though one of those was for external reasons.) Getting started is usually the hardest part, followed by surviving until there's something worth looking back at. (A functioning menu system does not count.)
Make this not true. Practice doing a bunch of smaller projects, maybe one or two week-long projects, then a month-long project. Then you'll feel confident that your work ethic is good enough to complete a major project without giving up.
Would it be worthwhile if you could guarantee or nearly guarantee that you will not just give up? If so, finding a way to credibly precommit to yourself that you'll stay the course may help. Beeminder is an option; so is publicly announcing your project and a schedule among people whose opinion you personally care about. (I do not think LW counts for this. It's too big; the monkeysphere effect gets in the way)
My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "react to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?
First thing to note is that "worthy of moral consideration" is plausibly a scalar. The philosophical & scientific challenges involved in defining it are formidable, but in my books it has something to do with to what extent a non-human animal experiences suffering. So I am much less concerned with hurting a mosquito than a gorilla, because I suspect mosquitoes do not experience much of anything, but I suspect gorillas do.
Although I think ability to suffer is correlated with intelligence, it's difficult to know whether it scales with intelligence in a simple way. Sure, a gorilla is better than a mouse at problem-solving, but that doesn't make it obvious that it suffers more.
Consider the presumed evolutionary functional purpose of suffering, as a motivator for action. Assuming the experience of suffering does not require very advanced cognitive architecture, why would a mouse necessarily experience vastly less suffering than a more intelligent gorilla? It needs the motivation just as much.
To sum up, I have a preference for creatures that can experience suffering to not suffer gratuitously, as I suspect that many do (although the detailed philosophy behind this suspicion is muddy to say the least). Thus, utilitarian veganism, and also the unsolved problem of what the hell to do about the "Darwinian holocaust."
Do you think that all humans are persons? What about unborn children? A 1 year old? A mentally handicapped person?
What are your criteria for granting personhood? Is it binary?
I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn't. Right now I feel as though what separates person from nonperson is totally arbitrary.
It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It's like "person" is an unsound concept that cannot be organized into an internally consistent system. Heck, I'm actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.
Why do you assume you're confused?
Are you confused? It seems like you recognize that you have somewhat different values than other people. Do you think everyone should have the same values? In that case all but one of the views is wrong. On the other hand, if values can be something that's different between people it's legitimate for some people to care about animals and others not to.
Three hypotheses, which may not be mutually exclusive:
1) Some people disagree (with you) about whether or not some animals are persons.
2) Some people disagree (with you) about whether or not being a person is a necessary condition for moral consideration - here you've stipulated 'people' as 'things subject to moral concern', but that word may be too connotatively laden for this to be effective.
3) Some people disagree (with you) about 'person'/'being worthy of moral consideration' being a binary category.
Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? Doing something like just telling everyone you meet "hey you're cute want to make out?" seems like it would go badly.
Slightly increase eye contact. Orient towards. Mirror posture. Use touch during interaction (in whatever ways are locally considered non-creepy).
The non-creepy socially accepted way is through body language. Strong eye contact, personal space invasion, prolonged pauses between sentences, purposeful touching of slightly risky area (for women: the lower back, forearms, etc.) all done with a clearly visible smirk.
In some contexts, however, an explicitly verbal approach might be effective, especially if toned down ("Hey, you're interesting, I want to know you better") or up ("Hey, you're really sexy, do you want to go to bed with me?"), but it is highly dependent on the woman.
I'm not entirely sure what's the parameter here, but I suspect plausible deniability is involved.
Tell a few friends, and let them do the asking for you?
The volume of people to whom I tend to be attracted would make this pretty infeasible.
Well, outside of contexts where people are expected to be hitting on each other (dance clubs, parties, speed dating events, OKCupid, etc.) it's hard to advertise yourself to strangers without it being socially inappropriate. On the other hand, within an already defined social circle that's been operating a while, people do tend to find out who is single and who isn't.
I guess you could try a T-shirt?
It's not a question of being single; I'm actually in a relationship. However, the relationship is open, and I would love it if I could interact physically with more people, just as a casual thing that happens. When I said telling everyone I met "you're cute want to make out", "everyone" was a lot closer to accurate than you might think it would be for the average person saying it in that context.
Ah. So you need a more complicated T-shirt!
Incidentally, if you're interested in making out with men who are attracted to your gender, "you're cute want to make out" may indeed be reasonably effective. Although, given that you're asking this question on this forum, I think I can assume you're a heterosexual male, in which case that advice isn't very helpful.
How do you get someone to understand your words as they are, denotatively -- so that they do not overly-emphasize (non-existent) hidden connotations?
Of course, you should choose your words carefully, taking into account how they may be (mis)interpreted, but you can't always tie yourself into knots forestalling every possible guess about what your intentions "really" are.
Become more status conscious. You are most likely inadvertently saying things that sound like status moves, which prompts others to not take what you say at face value. I haven't figured out how to fix this completely, but I have gotten better at noticing it and sometimes preempting it.
I wish I could upvote this question more. People assuming that I meant more than exactly what I said drives me up the wall, and I don't know how to deal with it either. (but Qiaochu's response below is good)
The most common failure mode I've experienced is the assumption that believing equals endorsing. One of the gratifying aspects of participating here is not having to deal with that; pretty much everyone on LW is inoculated.
Be cautious: the vast majority of people do not make a strict demarcation between normative and positive statements inside their heads. Figuring this out massively improved my models of other people.
Establish a strong social script regarding instances where words should be taken denotatively, e.g. Crocker's rules. I don't think any other obvious strategies work. Hidden connotations exist whether you want them to or not.
This is the wrong attitude about how communication works. What matters is not what you intended to communicate but what actually gets communicated. The person you're communicating with is performing a Bayesian update on the words that are coming out of your mouth to figure out what's actually going on, and it's your job to provide the Bayesian evidence that actually corresponds to the update you want.
"We" (humans of this epoch) might work to thwart the appearance of UFAI. Is this actually a "good" thing from a utilitarian point of view?
Or put another way, would our CEV, our Coherent Extrapolated Volition, not expand to consider the utilities of vastly intelligent AIs and weight that in importance with their intelligence? In such a way that CEV winds up producing no distinction between UFAI and FAI, because the utility of such vast intelligences moves the utility of unmodified 21st century biological humans to fairly low significance?
In economic terms, we are attempting to thwart new more efficient technologies by building political structures that give monopolies to the incumbents, which is us, humans of this epoch. We are attempting to outlaw the methods of competition which might challenge our dominance in the future, at the expense of the utility of our potential future competitors. In a metaphor, we are the colonial landowners of the earth and its resources, and we are building a powerful legal system to keep our property rights intact, even at the expense of tying AI's up in legal restrictions which are explicitly designed to keep them as peasants tied legally to working our land for our benefit.
Certainly a result of constraining AI to be friendly will be that AI will develop more slowly and less completely than if it was to develop in an unconstrained way. It seems quite plausible that unconstrained AI would produce a universe with more intelligence in it than a universe in which we successfully constrain AI development.
In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility. It seems that utilitarian calculations do often consider the utility of other higher mammals and birds, that this is justified by their intelligence, that these calculations weigh the utility of clams very little and of plants not at all, and that this also is based on their intelligence.
So is a goal of working towards FAI vs. UFAI or UAI (unconstrained AI) actually a goal of lowering the overall utility in the universe, compared to what it would be if we were not attempting to create and solidify our colonial rights to exploit AIs as if they were dumb animals?
This "stupid" question is also motivated by the utility calculations that consider a world with 50 billion sorta happy people to have higher utility than a world with 1 billion really happy people.
Are we right to ignore the potential utility of UFAI or UAI in our calculations of the utility of the future?
Tangentially, another way to ask this is: is our "affinity group" humans, or is it intelligences? In the past humans worked to maximize the utility of their group or clan or tribe, ignoring the utility of other humans just like them but in a different tribe. As time went on our affinity groups grew, the number and kind of intelligences we included in our utility calculations grew. For the last few centuries affinity groups grew larger than nations to races, co-religionists and so on, and to a large extent grew to include all humans, and has even expanded beyond humans so that many people think that killing higher mammals to eat their flesh will be considered immoral by our descendants analogously to how we consider holding slaves or racist views to be immoral actions of our ancestors. So much of the expansion of our affinity group has been accompanied by the recognition of intelligence and consciousness in those who get added to the affinity group. What are the chances that we will be able to create AI and keep it enslaved, and still think we are right to do so in the middle-distant future?
Surely we are the native americans, trying to avoid dying of Typhus when the colonists accidentally kill us in their pursuit of paperclips.
Good news! Omega has offered you the chance to become a truly unconstrained User:mwengler, able to develop in directions you were previously cruelly denied!
Like - let's see - ooh, how about the freedom to betray all the friends you were previously constrained to care about? Or maybe the liberty to waste and destroy all those possessions and property you were viciously forced to value? Or how about you just sit there inertly forever, finally free from the evil colonialism of wanting to do things. Your pick!
Hah. Now I'm reminded of the first episode of Nisemonogatari where they discuss how the phrase "the courage to X" makes everything sound cooler and nobler:
"The courage to keep your secret to yourself!"
"The courage to lie to your lover!"
"The courage to betray your comrades!"
"The courage to be a lazy bum!"
"The courage to admit defeat!"
Nope. For me, it's the fact that they're human. Intelligence is a fake utility function.
Is it okay to ask completely off-topic questions in a thread like this?
As the thread creator, I don't really care.
How does a rational consequentialist altruist think about moral luck and butterflies?
http://leftoversoup.com/archive.php?num=226
There's no point in worrying about the unpredictable consequences of your actions because you have no way of reliably affecting them by changing your actions.
In the process of trying to pin down my terminal values, I've discovered at least 3 subagents of myself with different desires, as well as my conscious one which doesn't have its own terminal values, and just listens to theirs and calculates the relevant instrumental values. Does LW have a way for the conscious me to weight those (sometimes contradictory) desires?
What I'm currently using is "the one who yells the loudest wins", but that doesn't seem entirely satisfactory.
Could you briefly describe the "subagents" and their personalities/goals?
My current approach is to make the subagents more distinct/dissociated, then identify with one of them and try to destroy the rest. It's working well, according to the dominant subagent.
My understanding is that this is what Internal Family Systems is for.
So I started reading this, but it seems a bit excessively presumptuous about what the different parts of me are like. It's really not that complicated: I just have multiple terminal values which don't come with a natural weighting, and I find balancing them against each other hard.
With the recent update to HPMOR, I've been reading a few HP fanfictions: HPMOR, HP and the Natural 20, the recursive fanfiction HG and the Burden of Responsibility, and a few others. And it seems my brain has trouble coping with that. I didn't have the problem with just canon and HPMOR (even when (re-)reading both in parallel), but now that I've added more fanfictions to the mix, I'm starting to confuse what happened in which universe, and my brain can't stop trying to find ways to ensure all the fanfictions are just facets of a single coherent universe, which of course doesn't work well...
Am I the only one with this kind of problem when reading several fanfictions set in the same base universe? It's the first time I've tried to do that, and I didn't expect to be so confused. Do you have any advice for avoiding the confusion, like "wait at least one week (or month?) before jumping to a different fanfiction"?
For one thing, I try not to read many in-progress fanfics. I’ve been burned so many times by starting to read a story and finding out that it’s abandoned that I rarely start reading new incomplete stories – at least with an expectation of them being finished. That means I don’t have to remember so many things at once – when I finish reading one fanfiction, I can forget it. Even if it’s incomplete, I usually don’t try to check back on it unless it has a fast update schedule – I leave it for later, knowing I’ll eventually look at my Favorites list again and read the newly-finished stories.
I also think of the stories in terms of a fictional multiverse, like the ones in Dimension Hopping for Beginners and the Stormseeker series (both recommended). I like seeing the different viewpoints on and versions of a universe. So that might be a way for you to tie all of the stories together – think of them as offshoots of canon, usually sharing little else.
I also have a personal rule that whenever I finish reading a big story that could take some digesting, I shouldn’t read any more fanfiction (from any fandom) until the next day. This rule is mainly to maximize what I get out of the story and prevent mindless, time-wasting reading. But it also lessens my confusing the stories with each other – it still happens, but only sometimes when I read two big stories on successive days.
Write up your understanding of the melange, obviously.
How do I get people to like me? It seems to me that this is a worthwhile goal; being likable increases the fun that both I and others have.
My issue is that likability usually means, "not being horribly self-centered." But I usually find I want people to like me more for self-centered reasons. It feels like a conundrum that just shouldn't be there if I weren't bitter about my isolation in the first place. But that's the issue.
The standard reference for this is "How to Win Friends and Influence People" by Dale Carnegie. I have not read it myself.
Much of it boils down to gothgirl420666's advice, except with more technical help on how. (I think the book is well worth reading, but it basically outlines "these are places where you can expend effort to make other people happier.")
One of the tips from Carnegie that gothgirl420666 doesn't mention is using people's names.
Learn them and use them a lot in conversation. Greet people with their name.
Say things like: "I agree with you, John." or "There I disagree with you, John."
This is a piece of advice that most people disagree with, and so I am reluctant to endorse it. Knowing people's names is important, and it's useful to use them when appropriate, but inserting them into conversations where they do not belong is a known influence technique that will make other people cautious.
(While we're on the subject of recommendations I disagree with, Carnegie recommends recording people's birthdays, and sending them a note or a call. This used to be a lot more impressive before systems to automatically do that existed, and in an age of Facebook I don't think it's worth putting effort into. Those are the only two from the book that I remember thinking were unwise.)
It probably depends on the context. If you're in a context like a sales conversation, people might get cautious. In other contexts you might like a person trying to be nice to you.
But you are right that there is the issue of artificiality. It can be strange if things don't flow naturally. I think that's more a matter of how you do it rather than how much or when.
At the beginning, just starting to greet people with their name can be a step forward. I think in most cultures that's an appropriate thing to do, even if not everyone does it.
I would also add that I'm from Germany, so my cultural background is a bit different than the American one.
Be judicious, and name-drop with one level of indirection: "That's sort of like what John was saying earlier, I believe, yada yada."
In actuality, a lot of people can like you a lot even if you are not selfless. It is not so much that you need to ignore what makes you happy as that you need to pay attention and energy to what makes other people happy. A trivial if sordid example: you don't get someone wanting to have sex with you by telling them how attractive you are; you will do better by telling them, and making it obvious, that you find them attractive. That you will take pleasure in their increased attentions to you is not held against you just because it means you are not selfless - not at all. Your need or desire for them is what attracts them to you.
So don't abnegate, ignore, deny, your own needs. But run an internal model where other people's needs are primary to suggest actions you can take that will serve them and glue them to you.
Horribly self-centered isn't a statement that you elevate your own needs too high. It is that you are too ignorant and unreactive to other people's needs.
This was a big realization for me personally:
If you are trying to get someone to like you, you should strive to maintain a friendly, positive interaction with that person in which he or she feels comfortable and happy on a moment-by-moment basis. You should not try to directly alter that person's opinion of you, in the sense that if you are operating on a principle of "I will show this person that I am smart, and he will like me", "I will show this person I am cool, and she will like me," or even "I will show this person that I am nice, and he will like me", you are pursuing a strategy that can be ineffective and possibly lead people to see you as self-centered. This might be what people say when they mean "be yourself" or "don't worry about what other people think of you".
Also, Succeed Socially is a good resource.
Another tool to achieve likeability is to consistently project positive emotions and create the perception that you are happy and enjoying the interaction. The quickest way to make someone like you is to create the perception that you like them because they make you happy - this is of course much easier if you genuinely do enjoy social interactions.
It is very good advice to care about other people.
I'd like to add that I think it is common for the insecure to apply this strategy in the wrong way. "Showing off" is a failure mode, but "people pleaser" can be a failure mode as well - it's important that making others happy doesn't come off as a transaction in exchange for acceptance.
"Look how awesome I am and accept me" vs "Please accept me, I'll make you happy" vs "I accept you, you make me happy".
Also, getting certain people to like you is way, way, way, way harder than getting certain other people to like you. And in many situations you get to choose whom to interact with.
Do what your comparative advantage is.
Thank you, so very much.
I often forget that there are different ways to optimize, and the method that feels like it offers the most control is often the worst. And the one I usually take, unfortunately.
I second what gothgirl said; but in case you were looking for more concrete advice:
At least, that's what worked for me when I was younger. Especially 1 actually, I think it helped with 3.
You can be self-centered and not act that way. If you even pretend to care about most people's lives they will care more about yours.
If you want to do this without being crazy bored and feeling terrible, I recommend figuring out conversation topics of other people's lives that you actually enjoy listening people talk about, and also working on being friends with people who do interesting things. In a college town, asking someone their major is quite often going to be enjoyable for them and if you're interested and have some knowledge of a wide variety of fields you can easily find out interesting things.
Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling is not especially helping. Thanks in advance for your assistance.
What do you want to be more rational about?
Reading "Diaminds" holds the promise to be on the track of making me a better rationalist, but so far I cannot say that with certainty, I'm only at the second chapter (source: recommendation here on LW, also the first chapter is dedicated to explaining the methodology, and the authors seem to be good rationalists, very aware of all the involved bias).
Also "dual n-back training" via dedicated software improves short term memory, which seems to have a direct impact on our fluid intelligence (source: vaguely remembered discussion here on LW, plus the bulletproofexec blog).
Attend a CFAR workshop!
I think many people would find this advice rather impractical. What about people who (1) cannot afford to pay USD3900 to attend the workshop (as I understand it, scholarships offered by CFAR are limited in number), and/or (2) cannot afford to spend the time/money travelling to the Bay Area?
We do offer a number of scholarships. If that's your main concern, apply and see what we have available. (Applying isn't a promise to attend). If the distance is your main problem, we're coming to NYC and you can pitch us to come to your city.
The people who think that nanobots will be able to manufacture arbitrary awesome things in arbitrary amounts at negligible costs... where do they think the nanobots will take the negentropy from?
The sun.
Almost all the available energy on Earth originally came from the Sun; the only other sources I know of are radioactive elements within the Earth and the rotation of the Earth-Moon system.
So even if it's not from the sun's current output, it's probably going to be from the sun's past output.
Hydrogen for fusion is also available on the Earth and didn't come from the Sun. We can't exploit it commercially yet, but that's just an engineering problem. (Yes, if you want to be pedantic, we need primordial deuterium and synthesized tritium, because proton-proton fusion is far beyond our capabilities. However, D-T's ingredients still don't come from the Sun.)
To what degree does everyone here literally calculate numerical outcomes and make decisions based on those outcomes for everyday decisions using Bayesian probability? Sometimes I can't tell if when people say they are 'updating priors' they are literally doing a calculation and literally have a new number stored somewhere in their head that they keep track of constantly.
If anyone does this, could you elaborate more on how you do it? Do you have a book/spreadsheet full of different beliefs with different probabilities? Can you just keep track of it all in your mind? Or is calculating probabilities like this only something people do for bigger life problems?
Can you give me a tip for how to start? Is there a set of core beliefs everyone should come up with priors for to start? I was going to apologize if this was a stupid question, but I suppose it should by definition be one if it is in this thread.
I had the same worry/question when I first found LW. After meeting with all the "important" people (Anna, Luke, Eliezer...) in person, I can confidently say: no, nobody is carrying around a sheet of paper and doing actual Bayesian updating. However, most people in these circles notice when they are surprised/confused, act on that feeling, and if they were wrong, then they update their beliefs, followed soon by their actions. This could happen from one big surprise or many small ones. So there is a very intuitive sort of Bayesian updating going on.
I suspect very little, but this does remind me of Warren Buffett speaking on Discounted Cash Flow calculations.
For quick background, an investment is a purchase of a future cash flow. Cash in the future is worth less to you than cash right now, and it is worth less and less as you go further into the future. Most treatments pretend that the proper way to discount the value of cash in the future is to have a discount rate (like 5% or 10% per year) and apply it as an exponential function to future cash.
Warren Buffett, a plausible candidate for the most effective investor ever (or at least so far), speaks highly of DCF (discounted cash flow) as the way to choose between investments. However, he also says he never actually does one other than roughly in his head. Given his excellent abilities at calculating in his head, I think it would translate to something like he never does a DCF calculation that would take up more than about 20 lines in an excel spreadsheet.
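For concreteness, a back-of-the-envelope DCF really is only a few lines. Here is an illustrative sketch - the cash flows and the 7% rate are invented for the example, not anything Buffett actually uses:

```python
def present_value(cash_flows, rate):
    """cash_flows: list of (years_from_now, amount); each is divided by (1 + rate)**years."""
    return sum(amount / (1 + rate) ** years for years, amount in cash_flows)

# E.g. an investment promising $1,000 a year for five years, discounted at 7%:
flows = [(year, 1000) for year in range(1, 6)]
print(round(present_value(flows, 0.07)))  # ~4100, i.e. what that stream is worth today
```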
There is a broad range of policies that I have that are based on math: not gambling in Las Vegas because its expected value is negative (although mostly I trust the casinos to have set the odds so payouts are negative; I don't check their math); not driving too far for small discounts (the expense of getting the discount should not exceed the value of the discount); not ignoring a few-thousand-dollar difference in a multi-hundred-thousand-dollar transaction just because "it is a fraction of a percent."
I do often, in considering hiring a personal service, compare paying for it to how long it would take me to do the job versus how long I would need to work at my current job to pay for hiring the other person. I am pretty well paid, so this does generally lead to me hiring out a lot of things. A similar calculation leads me to systematically ignore costs below about $100 for a lot of things, which still "feels" wrong, but which I have not yet been able to show is wrong with a calculation.
I am actually discouraging my wife and children from pushing my children towards elite colleges and universities on the basis that they are overpriced for what they deliver. I am very unconfident in this one, as rich people that I respect continue to just bleed money into their children's educations. So I am afraid to break from them even as I can't figure out a calculation that shows what they are doing makes economic sense.
I do look at, or calculate, the price per ounce in making buying decisions; I guess that is an example of a common Bayesian calculation.
In terms of return on investment, elite colleges seem to be worthwhile. Read this; more coverage on OB and the places linked from there. It's a bit controversial but my impression was that you'd probably be better off going someplace prestigious. Credit to John_Maxwell_IV for giving me those links, which was most of the reason I'm going to a prestigious college instead of an average one. I'm extremely doubtful about the educational value of education for smart people, but the prestige still seems to make it worth it.
These obviously become a really good deal if you can get financial aid, and many prestigious places do need-blind admissions, i.e. they will give you free money if you can convince them you're smart. Also look at the ROIs of places you're considering. Value of information is enormous here.
It's quite hard to do a full estimate of the benefits you get from going to an elite college. There are a lot of intangibles and a lot of uncertainty -- consider e.g. networking potential or the acquisition of good work habits (smart students at mediocre places rapidly become lazy).
Even if you restrict yourself to the analysis of properly discounted future earning potential (and that's a very limited approach), the uncertainties are huge and your error bars will be very very wide.
I generally go by the "get into the best school you can and figure out money later" guideline :-)
It depends on what your kids want to do. Elite colleges are not selling education, except to the extent that they have to maintain standards to keep their position. They are selling networking cachet. Which is of very high value to people who want to be one of the masters of the universe and take their chances with the inbound guillotine. If your kids want to be doctors, engineers, or archaeologists... no, not worth the price tag. In fact, the true optimum move is likely to ship them to Sweden with a note telling them to find a nice girl, naturalize via marriage, and take the free ride through Stockholm University. ;)
I agree with much of what you're saying. I make similar back of the envelope calculations.
One small point of clarity is that "money is worth less in the future" is not a general rule but a function of inflation which is affected strongly by national monetary policy. While it likely won't change in the USA in the near future, it COULD, so I think it's important to recognize that and be able to change behavior if necessary.
Lots of people attend an elite college because of signalling, not because it's an investment. Keep questioning the value of such an education!
I'm sorry I didn't explain that well enough. What I meant is that money you are going to get in the future is not worth as much as money you are going to get now. Even if we work with inflationless dollars, this is true. It happens because the sooner you have a dollar, the more options you have as to what to do with it. So if I know I am going to get a 2013 dollar in 2023, that is worth something to me, because there are things I will want to do in the future. But would I pay a dollar now to get a 2013 dollar in 2023? Definitely not; I would just keep my dollar. Would I pay 80 cents? 50 cents? I would certainly pay 25 cents, and might pay 50 cents. If I paid 50 cents, I would be estimating that the things I might do with 50 cents between 2013 and 2023 are about equal in value to me, right now, to the value I would place on the things I might do with $1 in 2023 or later. The implicit discount for 10 years is then 50% if I am willing to pay 50 cents now for a 2013 $1 in 2023. The discount rate, assuming exponential change in time as all interest rate calculations do, is about 7% per year. Note this is a discount in real terms, since it is a 2013 $1 of value I will receive in 2023. In principle, if inflation had accumulated 400% by 2023, I would actually be receiving $5 in 2023 dollars, for a roughly 26%/year nominal return on my initial investment, even though I have only a 7%/year real return against roughly 17.5%/year inflation.
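To make that arithmetic easy to check, here is the same calculation spelled out, using the same illustrative numbers as above:

```python
# Paying $0.50 now for one inflation-adjusted dollar ten years out implies:
real_rate = (1 / 0.50) ** (1 / 10) - 1        # ~0.072, i.e. ~7% per year real
# If prices quintuple over those ten years (400% accumulated inflation),
# the $0.50 grows to $5.00 nominal:
nominal_rate = (5.00 / 0.50) ** (1 / 10) - 1  # ~0.259, i.e. ~26% per year nominal
inflation_rate = 5.00 ** (1 / 10) - 1         # ~0.175, i.e. ~17.5% per year
# and (1 + real) * (1 + inflation) ~= 1 + nominal, as expected.
print(real_rate, nominal_rate, inflation_rate)
```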
That is only partially true. The time value of money is a function not only of inflation, but of other things as well, notably the value of time (e.g. human lives are finite) and opportunity costs.
In fact, one of the approaches to figuring out the proper discounting rate for future cash flows is to estimate your opportunity costs and use that.
I'd be alarmed if anyone claimed to accurately numerically update their priors. Non-parametric Bayesian statistics is HARD and not the kind of thing I can do in my head.
I never do this. See this essay by gwern for an example of someone doing this.
Nope, not for everyday decisions. For me "remember to update" is more of a mantra to remember to change your mind at all - especially based on several pieces of weak evidence, which normal procedure would be to individually disregard and thus never change your mind.
I only literally do an expected outcome calculation when I care more about having numbers than I do about their validity, or when I have unusually good data and need rigor. Most of the time the uncertainties in your problem formulation will dominate any advantage you might get from doing actual Bayesian updates.
The advantage of the Bayesian mindset is that it gives you a rough idea of how evidence should affect your subjective probability estimate for a scenario, and how pieces of evidence of different strengths interact with each other. You do need to work through a reasonable number of examples to get a feel for how that works, but once you have that intuition you rarely need to do the math.
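If it helps, here is roughly what that looks like on the rare occasions you do bother with the math, in odds form: multiply prior odds by the likelihood ratio of each (assumed independent) piece of evidence. All numbers are invented for illustration:

```python
def update(prior_prob, likelihood_ratios):
    """Odds-form Bayes: prior odds times each likelihood ratio, converted back to probability."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Prior of 10%, then three weak pieces of evidence, each twice as likely
# under the hypothesis as under its negation:
print(update(0.10, [2, 2, 2]))  # ~0.47 - individually weak evidence adds up
```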
When I'm in the presence of people who know more than me and I want to learn more, I never know how to ask questions that will inspire useful, specific answers. They just don't occur to me. How do you ask the right questions?
For the narrow subset of technical questions, How to Ask Questions the Smart Way is useful.
But if you don't have a problem to begin with -- if your aim is "learn more in field X," it gets more complicated. Given that you don't know what questions are worth asking, the best question might be "where would I go to learn more about X" or "what learning material would you recommend on the subject of X?" Then in the process of following and learning from their pointer, generate questions to ask at a later date.
There may be an inherent contradiction between wanting nonspecific knowledge and getting useful, specific answers.
I don't think an answer has to be specific to be useful. Often just understanding how an expert in a certain area thinks about the world can be useful even if you have no specificity.
When it comes to questions: 1) What was the greatest discovery in your field in the last 5 years? 2) Is there an insight in your field that is obvious to everyone in your field but that most people in society just don't get?
My favorite question comes from The Golden Compass:
I haven't employed it against people yet, though, and so a better way to approach the issue in the same spirit is to describe your situation (as suggested by many others).
Don't ask questions. Describe your problem and goal, and ask them to tell you what would be helpful. If they know more than you, let them figure out the questions you should ask, and then tell you the answers.
Lawyer's perspective:
People want to ask me about legal issues all the time. The best way to get a useful answer is to describe your current situation, the cause of your current situation, and what you want to change. Thus:
Then I can say something like: Your desired remedy is not available for REASONS, but instead, you could get REMEDY. Here are the facts and analysis that would affect whether REMEDY is available.
In short, try to define the problem. fubarobfusco has some good advice about how to refine your articulation of a problem. That said, if you have reason to believe a person knows something useful, you probably already know enough to articulate your question.
The point of my formulation is to avoid assumptions that distort the analysis. Suppose someone in the situation I described above said "I was maliciously and negligently injured by that person's driving. I want them in prison." At that point, my response needs to detangle a lot of confusions before I can say anything useful.
I see you beat me to it. Yes, define your problem and goals.
The really bad thing about asking questions is that people will answer them. You ask some expert "How do I do X with Y?". He'll tell you. He'll likely wonder what the hell you're up to in doing such a strange thing with Y, but he'll answer. If he knew what your problem and goals were instead, he'd ask the right questions of himself on how to solve the problem, instead of the wrong question that you gave him.
Also in the event you get an unusually helpful expert, he might point this out. Consider this your lucky day and feel free to ask follow up questions. Don't be discouraged by the pointing out being phrased along the lines of "What kind of idiot would want to do X with Y?"
Start by asking the wrong ones. For me, it took a while to notice when I had even a stupid question to ask (possibly some combination of mild social anxiety and generally wanting to come across as smart & well-informed had stifled this impulse), so this might take a little bit of practice.
Sometimes your interlocutor will answer your suboptimal questions, and that will give you time to think of what you really want to know, and possibly a few extra hints for figuring it out. But at least as often your interlocutor will take your interest as a cue that they can just go ahead and tell you nonrelated things about the subject at hand.
I find "How do I proceed to find out more about X" to give best results. Note: it's important to phrase it so that they understand you are asking for an efficient algorithm to find out about X, not for them to tell you about X!
It works even if you're completely green and talking to a prodigy in the field (which I find to be particularly hard). Otherwise you'll get "RTFM"/"JFGI" at best or they will avoid you entirely at worst.
What do you want to learn more about? If there isn't an obvious answer, give yourself some time to see if an answer surfaces.
The good news is that this is the thread for vague questions which might not pan out.
One approach: Think of two terms or ideas that are similar but want distinguishing. "How is a foo different from a bar?" For instance, if you're looking to learn about data structures in Python, you might ask, "How is a dictionary different from a list?"
You can learn if your thought that they are similar is accurate, too: "How is a list different from a for loop?" might get some insightful discussion ... if you're lucky.
Of course, if you know sufficiently little about the subject matter, you might instead end up asking a question like
"How is a browser different from a hard drive?"
which, instead, discourages the expert from speaking with you (and makes them think that you're an idiot).
I think that would get me to talk with them out of sheer curiosity. ("Just what kind of mental model could this person have in order to ask such a question?")
Sadly, reacting in such a way generally amounts to grossly overestimating the questioner's intelligence and informedness. Most people don't have mental models. The contents of their minds are just a jumble; a question like the one I quoted is roughly equivalent to
"I have absolutely no idea what's going on. Here's something that sounds like a question, but understand that I probably won't even remotely comprehend any answer you give me. If you want me to understand anything about this, at all, you'll have to go way back to the beginning and take it real slow."
(Source: years of working in computer retail and tech support.)
Most people do have mental models in the sense the word gets defined in the decision theory literature.
Even "it's a mysterious black box that might work right if I keep smashing the buttons at random" is a model, just a poor and confused one. Literally not having a model about something would require knowing literally nothing about it, and today everyone knows at least a little about computers, even if that knowledge all came from movies.
This might sound like I'm just being pedantic, but it's also that I find "most people are stupid and have literally no mental models of computers" to be a harmful idea in many ways - it equates a "model" with a clear explicit model while entirely ignoring vague implicit models (that most of human thought probably consists of), it implies that anyone who doesn't have a store of specialized knowledge is stupid, and it ignores the value of experts familiarizing themselves with various folk models (e.g. folk models of security) that people hold about the domain.
Even someone who has no knowledge about computers will use a mental model if he has to interact with a computer. It's likely that he will borrow a mental model from another field. He might try to treat the computer like a pet.
If people don't have any mental model in which to fit information they will ignore the information.
I think...this might actually be a possible mechanism behind really dumb computer users. I'll have to keep it in mind when dealing with them in future.
Comparing to Achmiz above:
Both of these feel intuitively right to me, and lead me to suspect the following: A sufficiently bad model is indistinguishable from no model at all. It reminds me of the post on chaotic inversions.
Mental models are the basis of human thinking. Take original cargo cultists. They had a really bad model of why cargo was dropped on their island. On the other hand they used that model to do really dumb things.
A while ago I was reading a book about mental models. It investigates how people deal with the question: "You throw a steel ball against the floor and it bounces back. Where does the energy that moves the ball into the air come from?"
The "correct answer" is that the ball contracts when it hits the floor and then expands, and that energy brings the ball back into the air. In the book they called it the phenomenological primitive of springiness.
A lot of students had the idea that somehow the ball transfers energy into the ground and then the ground pushes the ball back. The idea that a steel ball contracts is really hard for them to accept because in their mental model of the world steel balls don't contract.
If you simply tell such a person the correct solution they won't remember it. Teaching a new phenomenological primitive is really hard and takes a lot of repetition.
As a programmer, the phenomenological primitive of recursion is obvious to me. I had the experience of trying to teach it to a struggling student and had to discover how hard it is to teach from scratch. People always want to fit new information into their old models of the world.
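For readers who haven't met that primitive: here is a minimal illustration (my own, not from the comment above) of the kind of definition that is transparent once recursion has "clicked" and opaque while you are still trying to force it into an older looping model.

```python
# A function defined in terms of a smaller instance of itself.
def factorial(n: int) -> int:
    if n <= 1:                      # base case: nothing left to reduce
        return 1
    return n * factorial(n - 1)     # the function calls itself on a smaller input

print(factorial(5))  # 120
```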
People black out information that doesn't fit into their models of the world. This can lead to some interesting social engineering results.
A lot of magic tricks are based on faulty mental models by the audience.
Which book was that? Would you recommend it in general?
This reminds me of the debate in philosophy of mind between the "simulation theory" and the "theory theory" of folk psychology. The former (which I believe is more accepted currently — professional philosophers of mind correct me if I'm wrong) holds that people do not have mental models of other people, not even unconscious ones, and that we make folk-psychological predictions by "simulating" other people "in hardware", as it were.
It seems possible that people model animals similarly, by simulation. The computer-as-pet hypothesis suggests the same for computers. If this is the case, then it could be true that (some) humans literally have no mental models, conscious or unconscious, of computers.
If this were true, then what Kaj_Sotala said —
would be false.
Of course we could still think of a person as having an implicit mental model of a computer, even if they model it by simulation... but that is stretching the meaning, I think, and this is not the kind of model I referred to when I said most people have no mental models.
Simulations are models. They allow us to make predictions about how something behaves.
Fair enough. Pedantry accepted. :) I especially agree with the importance of recognizing vague implicit "folk models".
However:
Most such people are. (Actually, most people are, period.)
Believe you me, most people who ask questions like the one I quote are stupid.
I have decided to take small risks on a daily basis (for the danger/action feeling), but I have trouble finding specific examples. What are interesting small-scale risks to take? (give as many examples as possible)
I actually have a book on exactly this subject: Absinthe and Flamethrowers. The author's aim is to show you ways to take real but controllable risks.
I can't vouch for its quality since I haven't read it yet, but it exists. And, y'know. Flamethrowers.
Use a randomizer to choose someone in your address book and call them immediately (don't give yourself enough time to talk yourself out of it). It is a rush thinking about what to say as the phone is ringing. You are risking your social status (by coming off weird or awkward, in case you don't have anything sensible to say) without really harming anyone. On the plus side, you may make a new ally or rekindle an old relationship.
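If you want the randomizer part to be as frictionless as possible, a tiny script works; this is just a sketch assuming you can dump your contacts into a list, and the names and numbers below are hypothetical placeholders.

```python
import random

# Hypothetical contact list; in practice, export it from your phone or mail client.
contacts = [
    ("Alice", "+1-555-0100"),
    ("Bob", "+1-555-0101"),
    ("Carol", "+1-555-0102"),
]

name, number = random.choice(contacts)
print(f"Call {name} at {number} right now, before you talk yourself out of it.")
```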
Going for the feeling without the actual downside? Play MMORPGs. Shoot zombies until they finally overwhelm you. Shoot cops in Vice City until the army comes after you. Jump out of helicopters.
I really liked therufs's suggestion list below. The downside, the thing you are risking in each of these, doesn't actually harm you; it makes you stronger.
Apparently some study found that the difference between people with bad luck and those with good luck is that people with good luck take lots of low-downside risks.
Can't help with specific suggestions, but thinking about it in terms of the decision-theory of why it's a good idea can help to guide your search. But you're doing it for the action-feeling...
Climb a tree.
Another transport one: if you regularly go to the same place, experiment with a different route each time.
Try some exposure therapy to whatever it is you're often afraid of. Can't think of what you're often afraid of? I'd be surprised if you're completely immune to every common phobia.
When you go out to eat with friends, randomly choose who pays for the meal. In the long run this only increases the variance of your money. I think it's fun.
This is likely to increase the total bill, much like how splitting the check evenly instead of strictly paying for what you ordered increases the total bill.
Assign the probabilities in proportion to each person's fraction of the overall bill. Incentives are aligned.
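A quick check of why the incentives line up (my own sketch, with hypothetical numbers): if the diner who pays the whole bill is drawn with probability proportional to their own share, each person's expected payment equals exactly what they ordered, so ordering more only raises your own expected cost.

```python
# What each diner ordered (hypothetical amounts).
shares = {"A": 12.0, "B": 30.0, "C": 18.0}
total = sum(shares.values())

for diner, share in shares.items():
    p_pays = share / total        # probability this diner pays the whole bill
    expected = p_pays * total     # expected payment under the scheme
    print(diner, "expected:", round(expected, 2), "ordered:", share)
```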
I haven't observed this happening among my friends. Maybe if you only go out to dinner with homo economicus...
This is called the unscrupulous diner's dilemma, and experiments say that not only do people (strangers) respond to it like homo economicus, their utility functions seem to not even have terms for each other's welfare. Maybe you eat with people who are impression-optimizing (and mathy, so that they know the other person knows indulging is mean), and/or genuinely care about each other.
From where? I'd expect it to depend a lot on how customary it is to split bills in equal parts in their culture.
How often do you have dinner with strangers?
Also, order your food and or drinks at random.
Do you build willpower in the long-run by resisting temptation? Is willpower, in the short-term at least, a limited and depletable resource?
In About Behaviorism (which I unfortunately don't currently own a copy of, so I can't give direct quotes or citations), B. F. Skinner makes the case that the "willpower" phenomenon actually reduces to operant conditioning and schedules of reinforcement. Skinner claims that people who have had their behavior consistently reinforced in the past will become less sensitive to a lack of reinforcement in the present, and may persist in a behavior even when positive reinforcement isn't forthcoming in the short term, whereas people whose past behavior has consistently failed to be reinforced (or even been actively punished) will abandon a course of action much more quickly when it fails to immediately pay off. Both groups will eventually give up on an unreinforced behavior, though the former group will typically persist much longer at it than the latter. This gives rise to the "willpower as resource" model, as well as the notion that some people have more willpower than others. Really, people with "more willpower" have just been conditioned to wait longer for their behaviors to be reinforced.
I felt that Robert Kurzban presented a pretty good argument against the "willpower as a resource" model in Why Everyone (Else) Is a Hypocrite:
Elsewhere in the book (I forget where) he also notes that the easiest explanation for why people go low on willpower when hungry is simply that a situation where your body urgently needs food is a situation where your brain considers everything not directly related to acquiring food to have a very high opportunity cost. This seems more elegant and realistic than the common folk-psychological explanation, which suggests something like willpower being a resource that you lose when you're hungry or tired. It's more a question of the evolutionary tradeoffs being different when you're hungry or tired, which leads to different cognitive costs.
I now plan to split up long boring tasks into short tasks with a little celebration of completion as the reward after each one. I actually decided to try this after reading Don't Shoot the Dog, which I think I saw recommended on Less Wrong. It got me a somewhat more productive weekend. If it does stop helping, I suspect it would be from the reward stopping being fun.
I would assume that thinking does take calories, and so does having an impulse and then overriding it.
Kurzban on that:
But what's the explanation for people to go low on willpower after exerting willpower?
My reading of the passage Kaj_Sotala quoted is that the brain is decreasingly likely to encourage exerting will toward a thing the longer it goes without reward. In a somewhat meta way, that could be seen as will power as a depletable resource, but the reward need not adjust glucose levels directly.
I never suspected it had anything to do with glucose. I'd guess that it's something where people with more willpower didn't do as well in the ancestral environment, since they did more work than strictly necessary, so we evolved to have it as a depletable resource.
It seems to me that there are basically two approaches to preventing a UFAI intelligence explosion: a) making sure that the first intelligence explosion is an FAI instead; b) making sure that an intelligence explosion never occurs. The first one involves solving (with no margin for error) the philosophical/ethical/logical/mathematical problem of defining FAI, and in addition the sociological/political problem of doing it "in time", convincing everyone else, and ensuring that the first intelligence explosion occurs according to this resolution. The second one involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.
Naively, it seems to me that the second approach is more viable: it seems comparable in scale to something between stopping the use of CFCs (fairly easy) and stopping global warming (very difficult, but it is premature to say impossible). At any rate, it sounds easier than solving (over a few years/decades) so many hard philosophical and mathematical problems, with no margin for error and under time pressure to do it ahead of UFAI developing.
However, it seems (from what I read on LW and found quickly browsing the MIRI website; I am not particularly well informed, hence writing this on the Stupid Questions thread) that most of the efforts of MIRI are on the first approach. Has there been a formal argument on why it is preferable, or are there efforts on the second approach I am unaware of? The only discussion I found was Carl Shulman's "Arms Control and Intelligence Explosions" paper, but it is brief and nothing like a formal analysis comparing the benefits of each strategy. I am worried the situation might be biased by the LW/MIRI kind of people being more interested in (and seeing as more fun) the progress on the timeless philosophical problems necessary for (a) than the political coalition building and propaganda campaigns necessary for (b).
The approach of Leverage Research is more like (b).
We discuss this proposal in Responses to Catastrophic AGI Risk, under the sections "Regulate research" and "Relinquish technology". I recommend reading both of those sections if you're interested, but a few relevant excerpts:
I had no idea that Herbert's Butlerian Jihad might be a historical reference.
Wow, I've read Dune several times, but didn't actually get that before you pointed it out.
It turns out that there's a wikipedia page.
I think it's easier to get a tiny fraction of the planet to do a complex right thing than to get 99.9% of a planet to do a simpler right thing, especially if 99.9% compliance may not be enough and 99.999% compliance may be required instead.
This calls for a calculation. How hard would creating an FAI have to be for this inequality to be reversed?
When I see proposals that involve convincing everyone on the planet to do something, I write them off as loony-eyed idealism and move on. So, creating FAI would have to be hard enough that I considered it too "impossible" to be attempted (with this fact putatively being known to me given already-achieved knowledge), and then I would swap to human intelligence enhancement or something because, obviously, you're not going to persuade everyone on the planet to agree with you.
But is it really necessary to persuade everyone, or 99.9% of the planet? If gwern's analysis is correct (I have no idea if it is) then it might suffice to convince the policymakers of a few countries like USA and China.
I see. So you do have an upper bound in mind for the FAI problem difficulty, then, and it's lower than other alternatives. It's not simply "shut up and do the impossible".
Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.
Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily friendly singleton).
So FAI is actually the easiest way to prevent UFAI.
The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.
Your tone reminded me of super religious folk who are convinced that, say "Jesus is coming back soon!" and it'll be "totally awesome".
That's nice.
Your comment reminds me of those internet atheists that are so afraid of being religious that they refuse to imagine how much better the world could be.
There's a third alternative, though it's quite unattractive: damaging civilization to the point that AI is impossible.
My impression of Eliezer's model of the intelligence explosion is that he believes b) is much harder than it looks. If you make developing strong AI illegal then the only people who end up developing it will be criminals, which is arguably worse, and it only takes one successful criminal organization developing strong AI to cause an unfriendly intelligence explosion. The general problem is that a) requires that one organization do one thing (namely, solving friendly AI) but b) requires that literally all organizations abstain from doing one thing (namely, building unfriendly AI).
CFCs and global warming don't seem analogous to me. A better analogy to me is nuclear disarmament: it only takes one nuke to cause bad things to happen, and governments have a strong incentive to hold onto their nukes for military applications.
What would a law against developing strong AI look like?
I've suggested in the past that it would look something like a ban on chips more powerful than X teraflops/$.
How close are we to illicit chip manufacturing? On second thought, it might be easier to steal the chips.
Cutting-edge chip manufacturing of the necessary sort? I believe we are lightyears away and things like 3D printing are irrelevant, and that it's a little like asking how close we are to people running Manhattan Projects in their garage*; see my essay for details.
* Literally. The estimated budget for an upcoming Taiwanese chip fab is equal to some inflation-adjusted estimates of the Manhattan Project.
My notion of nanotech may have some fantasy elements-- I think of nanotech as ultimately being able to put every atom where you want it, so long as the desired location is compatible with the atoms that are already there.
I realize that chip fabs keep getting more expensive, but is there any reason to think this can't reverse?
The usual advice on how to fold a t-shirt starts with the assumption that your t-shirt is flat, but I'm pretty sure that getting the shirt flat takes me longer than folding it. My current flattening method is to grab the shirt by the insides of the sleeves to turn it right-side out, then grab the shoulder seams to shake it flat. Is there anything better?
I agree about the sleeves, but I get much better results if I grab it at the bottom to shake it out. Ideally, there are seams coming straight down the sides from the armpits; I hold it where they meet the bottom hem. Note that whether you shake from the shoulder seams or from the bottom, one hand will already be in the proper position from turning the shirt right-side out via the sleeves; it's just a question of which one.
I also fold the shirt while standing, so I never actually need to lay it flat. There is a standing-only variation of the method that you cited, although I actually use a different method that begins from precisely the position that I'm in when I leave off the shaking.
In fact, the idea of actually laying something flat before folding strikes me as a greater source of inefficiency than anything else being discussed here. With practice, you can even fold bedsheets in the air.
I'm in favor of making this a monthly or more thread as a way of subtracting some bloat from open threads in the same way the media threads do.
I also think that we should encourage lots of posts to these threads. After all, if you don't at least occasionally have a stupid question to ask, you're probably poorly calibrated on how many questions you should be asking.
If no question you ask is ever considered stupid, you're not checking enough of your assumptions.
In transparent box Newcomb's problem, in order to get the $1M, do you have to (precommit to) one box even if you see that there is nothing in box A?
I think that problem is more commonly referred to as Parfit's hitchhiker.
I think it varies somewhat. Normally, you do have to one box, but I like the problem more when there's some probability of two boxing and still getting the million. That way, EDT tells you to two box, rather than just getting an undefined expected utility given that you decide to two box and crashing completely.
Just now rushes onto Less Wrong to ask about taking advantage of 4chan's current offer of customized ad space to generate donations for MIRI
Sees thread title
Perfect.
So, would it be a good idea? The sheer volume of 4chan's traffic makes it a decent pool for donations, and given the attitude of its demographic, it might be possible to pitch the concept in an appealing way.
Linking to MIRI's donation page might be useful, but please please don't link to LessWrong on 4chan - it could have some horrible consequences.
It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is building relationships with people more powerful than oneself - to the extent that this easily trumps the vast majority of other failings (epistemic-rationality-wise) discussed on LW. So why aren't we discussing how to do better at this regularly? A couple of explanations immediately leap to mind:
Not a core competency of the sort of people LW attracts.
Rewards not as immediate as the sort of epiphany porn that some of LW generates.
Ugh fields. Especially in regard to things that are considered manipulative when reasoned about explicitly, even though we all do them all the time anyway.
Because it's hard. That's what kept me from doing it.
I am very close to explicitly starting a project to do just that, and didn't even get to this point until one of my powerful friends explicitly advised me to take a particular strategy for getting relationships with more powerful people.
I find myself unable to be motivated to do it without calling it "Networking the Hard Way", to remind myself that yes, it's hard, and that's why it will work.
I would be interested in hearing about this strategy if you feel like sharing.
Not done much on it yet, but here's the plan.
Thanks for sharing. Tell me if you want me to bug you about whether you're following your plan at scheduled points in the future.
Thanks for the offer. It feels great when people make such offers now, because I no longer need that kind of help, which is such a relief. I use Beeminder now, which basically solves the "stay motivated to do quantifiable goal at some rate" problem.
Soon. Would rather actually do it first, before reporting on my glorious future success.
Mmhmm, good catch. Thanks.
Insofar as MIRI folk seem to be friends with Jaan Tallinn and Thiel etc., they appear to be trying to do this, though they don't seem to be teaching it as a great idea. But organizationally, if you're trying to optimize the world in a more rational way, spreading rationality might be a better way than trying to befriend less rational powerful people. Obviously this is less effective on a more personal basis.
Realistically, Less Wrong is most concerned about epistemic rationality: the idea that having an accurate map of the territory is very important to actually reaching your instrumental goals. If you imagine for a second a world where epistemic rationality isn't that important, you don't really need a site like Less Wrong. There are nods to "instrumental rationality", but those are in the context of epistemic rationality getting you most of the way and being the base you work off of; otherwise there's no reason to be on Less Wrong instead of a specific site dealing with the sub-area.
Also, lots of "building relationships with powerful people" is zero sum at best, since it resembles influence peddling more than gains from informal trade.
Power isn't one-dimensional. The thing that matters isn't so much to make relationships with people who are more powerful than you in all domains, but to make relationships with people who are powerful in some domain where you could ask them for help.
Are there good reasons why when I do a google search on (Leary site:lesswrong.com) it comes up nearly empty? His ethos consisted of S.M.I**2.L.E, i.e. Space Migration + Intelligence Increase + Life Extension which seems like it should be right up your alley to me. His books are not well-organized; his live presentations and tapes had some wide appeal.
Leary won me over with those goals. I have adopted them as my own.
It's the 8 circuits and the rest of the mysticism I reject. Some of it rings true, some of it seems sloppy, but I doubt any of it is useful for this audience.
Write up a discussion post with an overview of what you think we'd find novel :)
I am generally surprised when people say things like "I am surprised that topic X has not come up in forum / thread Y yet." The set of all possible things forum / thread Y could be talking about is extremely large. It is not in fact surprising that at least one such topic X exists.
I like this idea! I feel like the current questions are insufficiently "stupid," so here's one: how do you talk to strangers?
I'd like to ask an even stupider one: why do people want to talk to strangers?
I've had a few such conversations on trains and the like, and I'm not especially averse to it, but I think afterwards, what was the point of that?
Well, that passed the time.
It would have passed anyway.
Yes, but not as quickly.
At least the train eventually arrives.
To meet new people?
I was climbing a tree yesterday and realized that I hadn't even thought that the people watching were going to judge me, and that I would have thought of it previously, and that it would have made it harder to just climb the tree. Then I thought that if I could use the same trick on social interaction, it would become much easier. Then I wondered how you might learn to use that trick.
In other words, I don't know, but the question I don't know the answer to is a little bit closer to success.
I think the question is badly formed. I think it's better to ask: "How do I become a person who easily talks to strangers?" When you are in your head thinking "How do I talk to that person over there?", you are already in a place that isn't conducive to a good interaction.
Yesterday, during the course of traveling around town, three strangers talked to me, where the stranger said the first word.
The first was a woman in her mid-30s with a bicycle who was looking for the elevator at the public train station. The second was an older woman who told me that the Vibram FiveFingers shoes I was wearing look good. The third was a girl who was biking next to me when her smartphone fell. I picked it up and handed it back to her. She said thank you.
I'm not even counting beggars on public transportation.
Later that evening I went Salsa dancing. There, two women I didn't know who were new to Salsa asked me to dance.
Why did I have a vibe that lets other people approach me? I spent five days at a personal development workshop given by Danis Bois. The workshop wasn't about doing anything with strangers, but among other things it teaches a kind of massage, and I was a lot more relaxed than I had been in the past.
If you get rid of your anxiety interactions with strangers start to flow naturally.
What can you do apart from visiting personal development seminars that put you into a good emotional state?
Wear something that makes it easy for strangers to start a conversation with you. One of the benefits of Vibram FiveFingers is that people are frequently curious about them.
Do good exercises.
1) One exercise is to say 'hi' or 'good morning' to every stranger you pass. I don't do it currently but it's a good exercise to teach yourself that interaction with strangers is natural.
2) Learn some form of meditation to get into a relaxed state of mind.
3) If you want to approach a person at a bar you might feel anxiety. Locate that anxiety in your body. At the beginning it makes sense to put your hand where you locate it.
Ask yourself: "Where does that feeling want to move in my body?" Tell it to "soften and flow". Let it flow where it wants to flow in your body. Usually it wants to flow out of your body at a specific location.
Do the same with the feeling of rejection, should a stranger reject you.
Exercise three is something that I only learned recently and I'm not sure if I'm able to explain it well over the internet. In case anybody reading it finds it useful I would be interested in feedback.
I recently found a nice mind hack for that: “What would my drunken self do?”
o.O
Sure she wasn't being sarcastic? ;-)
In this case yes, because of the body language and the vibe in which the words were said.
If a person wants to pay you a compliment, wearing an item that's out of the ordinary makes it easy for them to start a conversation.
I also frequently get asked where I bought my Vibrams.