You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

"Stupid" questions thread

40 Post author: gothgirl420666 13 July 2013 02:42AM

r/Fitness does a weekly "Moronic Monday", a judgment-free thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. I thought this seemed like a useful thing to have here - after all, the concepts discussed on LessWrong are probably at least a little harder to grasp than those of weightlifting. Plus, I have a few stupid questions of my own, so it doesn't seem unreasonable that other people might as well. 

Comments (850)

Comment author: Carinthium 27 January 2014 11:26:57AM 0 points [-]

Given Eliezer Yudkowsky's Peggy Sue parody, is there anything inherently inane about the Peggy Sue fanfic type? If so, I've missed it: what is it?

Comment author: drethelin 10 April 2014 12:31:00AM 0 points [-]

It's similar to the Mary Sue problem and Eliezer's rule of fanfiction. If you give Harry Potter knowledge of the future, you have to give it to Voldemort too, but that can render both of their sets of knowledge irrelevant and become a self-referential clusterfuck to write, so probably just don't. If only the main character has knowledge of the future, it tends to become a Mary Sue fic where you replace power with knowledge.

https://www.fanfiction.net/s/9658524/1/Branches-on-the-Tree-of-Time is an example of a surprisingly well-written story that takes recursive time-travel and future knowledge to its logical conclusion.

Comment author: Daemon 17 September 2013 12:13:21AM 0 points [-]

How do you deal with the Münchhausen trilemma? It used to not bother me much, and I think my (axiomatic-argument-based) reasoning was along the lines of "sure, the axioms might be wrong, but look at all the cool things that come out of them." The more time passes, though, the more concerned I become. So, how do you deal?

Comment author: OnTheOtherHandle 31 July 2013 04:17:08AM *  0 points [-]

I have a question about the first logic puzzle here. The condition "Both sane and insane people are always perfectly honest, sane people have 100% true beliefs while insane people have 100% false beliefs" seems to be subtly different from Liar/Truth-teller. The Liar/Truth-teller thing is only activated when someone asks them a direct yes or no question, while in these puzzles the people are volunteering statements on their own.

My question is this: if every belief that an insane person holds is false, then does that also apply to beliefs about their beliefs? For example, an insane person may believe the sky is not blue, because they only believe false things. But does that mean that they believe they believe that the sky is blue, when in fact they believe that it is not blue? So all their meta-beliefs are just the inverse of their object-level beliefs? If all their beliefs are false, then their beliefs about their beliefs must likewise be false, making their meta-beliefs true on the object level, right? And then their beliefs about their meta-beliefs are again false on the object level?

But if that's true, it seems like the puzzle becomes too easy. Am I missing something or is the answer to that puzzle "Vs lbh jrer gb nfx zr jurgure V nz n fnar cngvrag, V jbhyq fnl lrf"?

Edit: Another thought occurred to me about sane vs. insane - it's specified that the insane people have 100% false beliefs, but it doesn't specify that these are exact negations of true beliefs. For example, rather than believing the sky is not-blue, an insane person might believe the sky doesn't even exist and his experience is a dream. For example, what would happen if you asked an insane patient whether he was a doctor? He might say no, not because he knew he was a patient but because he believed himself to be an ear of corn rather than a doctor.
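The meta-belief inversion described above can be sketched as a toy model (my own illustration, not part of the original puzzle). It assumes the simplest reading: an insane agent believes the negation of the truth at every level, including propositions about their own beliefs.

```python
# Toy model: an agent's belief about any proposition equals its truth
# value if sane, and its negation if insane -- at every level, including
# propositions about the agent's own beliefs.
def believes(sane: bool, truth: bool) -> bool:
    return truth if sane else not truth

sky_is_blue = True

# Object level: the insane patient believes the sky is not blue.
insane_object = believes(False, sky_is_blue)   # False

# Meta level: the proposition "I believe the sky is blue" is in fact
# false for them, so they believe it -- the meta-belief inverts again
# and comes out true at the object level.
insane_meta = believes(False, insane_object)   # True
```

So under this reading the meta-beliefs do alternate in truth value level by level, which is why the puzzle feels "too easy" unless the second reading (beliefs need not be exact negations) applies.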

Comment author: djm 27 July 2013 03:35:29PM 1 point [-]

Thank you for this thread - I have been reading a lot of the sequences here and I have a few stupid questions around FAI:

  1. What research has been done on frameworks for managing an AI’s information flow? For example, just before an AI ‘learns’, it will likely be a piece of software rapidly processing information and trying to establish an understanding. What sort of data structures and processes have been experimented with to handle this information?

  2. Has there been an effort to build a dataset (crowd-sourced?) classifying what humans consider “good”/”bad”, and specifically how these classifications could be used to influence the decisions of an AI?

Comment author: therufs 24 July 2013 07:18:00PM -1 points [-]

If I am interested in self-testing different types of diets (paleo, vegan, soylent, etc.), how long is a reasonable time to try each out?

I'm specifically curious about how a diet would affect my energy level and sense of well-being, how much time and money I spend on a meal, whether strict adherence makes social situations difficult, etc. I'm not really interested in testing to a point that nutrient deficiencies show up or to see how long it takes me to get bored.

Comment author: Lumifer 24 July 2013 07:42:36PM *  0 points [-]

I'd say about a month. I would expect that it takes your body 1-2 weeks to adjust its metabolism to the new diet and then you have a couple of weeks to evaluate the effects.

Comment author: CAE_Jones 21 July 2013 03:14:47AM 0 points [-]

I'd like to work on a hardware project. It seems rather simple (I'd basically start out trying to build this ( pdf / txt )), however, my lack of vision makes it difficult to just go check the nearest Radioshack for parts, and I'm also a bit concerned about safety issues (how easy would it be for someone without an electrical engineering background to screw up the current? Could I cause my headphone jack to explode? Etc). I'm mostly wondering how one should go about acquiring parts for DIY electronics, provided that travel options are limited. (I've done some Googling, but am uncertain on what exactly to look for. The categories "transformer" / "amplifier" / "electrode" / "insulator" are quite broad.)

Comment author: OnTheOtherHandle 21 July 2013 02:27:03AM 1 point [-]

I'd like to use a prediction book to improve my calibration, but I think I'm failing at a more basic step: how do you find some nice simple things to predict, which will let you accumulate a lot of data points? I see a lot of predictions about sports games and political elections, but I don't follow sports, and political predictions require a lot of research and are too few and far between to help me. The only other thing I can think of is highly personal predictions, like "There is a 90% chance I will get my homework done by X o'clock", but what are some good areas to test my prediction abilities on where I don't have the ability to change the outcome?

Comment author: gwern 21 July 2013 03:52:29AM 2 points [-]

Start with http://predictionbook.com/predictions/future

Predictions you aren't familiar with can be as useful as ones you are: you calibrate yourself under extreme uncertainty, and sometimes you can 'play the player' and make better predictions that way (works even with personal predictions by other people).
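Once you have logged some (probability, outcome) pairs, checking calibration is mechanical. A minimal sketch (my own, not a PredictionBook feature; the sample data is made up):

```python
# Hypothetical calibration check: bucket logged predictions by stated
# probability and compare to the observed frequency of the outcome.
from collections import defaultdict

log = [  # (stated probability, did it happen?)
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for p, happened in log:
    buckets[p].append(happened)

calibration = {p: sum(v) / len(v) for p, v in buckets.items()}
for p in sorted(calibration):
    print(f"stated {p:.0%} -> observed {calibration[p]:.0%} "
          f"({len(buckets[p])} predictions)")
```

Well-calibrated means the observed frequency in each bucket tracks the stated probability; with many data points, systematic over- or under-confidence shows up quickly.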

Comment author: linkhyrule5 21 July 2013 02:09:24AM 2 points [-]

Can someone explain "reflective consistency" to me? I keep thinking I understand what it is and then finding out that no, I really don't. A rigorous-but-English definition would be ideal, but I would rather parse logic than get a less rigorous definition.

Comment author: FiftyTwo 21 July 2013 12:13:57AM 1 point [-]

Is it possible to train yourself in the Big Five personality traits? Specifically, conscientiousness seems to be correlated with a lot of positive outcomes, so a way of actively promoting it would seem a very useful trick to learn.

Comment author: gwern 21 July 2013 12:42:21AM 3 points [-]

Not that I know of. The only current candidate is to take psilocybin to increase Openness, but the effect is relatively small, it hasn't been generalized outside the population of "people who would sign up to take psychedelics", and hasn't been replicated at all AFAIK (and for obvious reasons, there may never be a replication). People speculate that dopaminergic drugs like the amphetamines may be equivalent to an increase in Conscientiousness, but who knows?

Comment author: OnTheOtherHandle 20 July 2013 06:31:58PM 0 points [-]

I debated over whether to include this in the HPMOR thread, but it's not specific to that story, and, well, it is kind of a stupid question.

How does backwards-only time travel work? Specifically, wouldn't a time traveler end up with dozens of slightly older or younger versions of herself all living at the same time? I guess "Yes" is a perfectly acceptable answer, but I've just never really seen the consequences addressed. I mean, given how many times Harry has used the Time Turner in HPMOR (just a convenient example), I'm wondering if there are like 13 or 14 Harries just running around acting independently? Because with backwards-only time travel, how is there a stable loop?

Think about a situation with a six-hour Time Turner and three versions of the same person: A, A' (three hours older than A), and A'' (three hours older than A'). Let's say A' gets to work and realizes he forgot his briefcase. If he had a backwards and forwards time machine, he could pop into his home three hours ago and be back in literally the blink of an eye - and because he knows he could do this, he should then expect to see the briefcase already at his desk. Sure enough, he finds it, and three hours later he becomes A'', and goes back to plant the briefcase before the meeting. This mostly makes sense to me, because A'' would plant the briefcase and then return to his own time, through forwards time travel, rather than the slow path. A'' would never interact with A', and every version of A to reach the point of the meeting would be locked deterministically to act exactly as A' and A'' acted.

But I'm really confused about what happens if A has a Time Turner, that can go backwards but not forwards. Then, when A' realizes he forgot his briefcase, wouldn't there actually be two ways this could play out?

One, A' finds the briefcase at his desk, in which case three hours later, he would become A'' and then come back to plant the briefcase. But what does A'' do after he plants the briefcase? Can he do whatever he wants? His one job is over, and there's another version of him coming through from the past to live out his life - could A'' just get up and move to the Bahamas or become a secret agent or something, knowing that A' and other past versions would take care of his work and family obligations? Isn't he a full-blown new person that isn't locked into any kind of loop?

Two, A' doesn't find the briefcase at his desk, in which case he goes back three hours to remind A to take his briefcase - does that violate any time looping laws? A' never had someone burst in to remind him to take a briefcase, but does that mean he can't burst in on A now? A' can't jump back to the future and experience firsthand the consequences of having the briefcase. If he goes back to talk to A, isn't this just the equivalent of some other person who looks like you telling you not to forget your briefcase for work? Then A can get the briefcase and go to work, while A' can just...leave, right? And live whatever life he wants?

Am I missing something really obvious? I must be, because Harry never stops to consider the consequences of dozens of independently operating versions of himself out there in the world, even when there are literally three other versions of him passed out next to his chair. What happens to those three other Harries, and in general what happens with backwards-only time travel? Is there no need for forwards time travel to "close the circuit" and create a loop, instead of a line?

Comment author: shinoteki 20 July 2013 07:21:28PM 1 point [-]

You don't need a time machine to go forward in time - you can just wait. A'' can't leave everything to A', because A' will disappear within three hours when he goes back to become A''. If A' knows A wasn't reminded, then A' can't remind A. The other three Harrys use their Time-Turners to go backwards and close the loop. You do need both forward and backward time travel to create a closed loop, but the forward time travel can just be waiting; it doesn't require a machine.

Comment author: OnTheOtherHandle 20 July 2013 08:13:17PM 0 points [-]

I think I get it, but I'm still a bit confused, because both A' and A'' are moving forward at the same rate, which means since A'' started off older, A' will never really "catch up to" and become A'', because A'' continues to age. A'' is still three hours older than A', right, forever and ever?

To consider a weird example, what about a six hour old baby going back in time to witness her own birth? Once the fetus comes out, wouldn't there just be two babies, one six hours older than the other? Since they're both there and they're both experiencing time at a normal forward rate of one second per second, can't they just both grow up like siblings? If the baby that was just born waited an hour and went back to witness her own birth, she would see her six hour older version there watching her get born, and she would also see the newborn come out, and then there'd be three babies, age 0, age six hours, and age twelve hours, right?

How exactly would the "witnessing your own birth" thing play out with time travel? I think your explanation implies that there will never be multiple copies running around for any length of time, but why does A'' cease to exist once A' ages three hours? A'' has also aged three hours and become someone else in the meantime, right?

Comment author: shinoteki 20 July 2013 08:29:38PM 3 points [-]

A' doesn't become A'' by catching up to him, he becomes A'' when he uses his time machine to jump back 3 hours.

There would be three babies for 6 hours, but then the youngest two would use their time machines and disappear into the past.

A'' doesn't cease to exist. A' "ceases to exist" because his time machine sends him back into the past to become A''.

Comment author: OnTheOtherHandle 21 July 2013 12:39:49AM *  2 points [-]

Oh! Alright, thank you. :) So if you go back and do something one hour in the past, then the loop closes an hour later, when the other version of yourself goes back for the same reasons you did, and now once again you are the only "you" at this moment in time. It's not A' that continues on with life leaving A'' off the hook, it is A'' who moves on while A' must go back. That makes much more sense.

Edit: This means it is always the oldest Harry that we see, right? The one with all the extra waiting around included in his age? Since all the other Harries are stuck in a six hour loop.
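The loop-closing logic can be sketched as a toy timeline (my own model of a backwards-only time machine, not anything from the story). Each backwards jump creates one extra copy that exists only between its arrival and its later departure; outside every loop window, exactly one copy - the oldest - is present.

```python
# Toy model: count how many copies of the traveler exist at time t,
# given the loop intervals created by backwards jumps.
def copies_present(t, loops):
    """loops: (arrive, depart) intervals created by backwards jumps."""
    # one baseline copy, plus one per loop window containing t
    return 1 + sum(1 for arrive, depart in loops if arrive <= t < depart)

loops = [(0, 3)]                       # depart at t=3, arrive back at t=0
assert copies_present(1, loops) == 2   # inside the loop: two copies coexist
assert copies_present(5, loops) == 1   # loop closed: one copy again
```

This is why there are never "dozens of independently operating" copies for long: every younger copy is committed to making the jump that the older copy already made, so the population collapses back to one as each window closes.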

Comment author: bogdanb 20 July 2013 05:05:49PM *  1 point [-]

I keep hearing about all sorts of observations that seem to indicate Mars once had oceans (the latest was a geological structure that resembles Earth river deltas). But on first sight it seems like old dried up oceans should be easy to notice due to the salt flats they’d leave behind. I’m obviously making an assumption that isn’t true, but I can’t figure out which. Can anyone please point out what I’m missing?

As far as I can tell, my assumptions are:

1) Planets as similar to Earth as Mars is will have similarly high amounts of salt dissolved in their oceans, conditional on having oceans. (Though I don’t know why NaCl in particular is so highly represented in Earth’s oceans, rather than other soluble salts.)

2) Most processes that drain oceans will leave the salt behind, or at least those that are plausible on Mars will.

3) Very large flat areas with a thick cover of salt will be visible at least to orbiters even after some billions of years. This is the one that seems most questionable, but seems sound assuming:

3a) a large NaCl-covered region will be easily detectable with remote spectroscopy, and 3b) even geologically-long term asteroid bombardment will retain, over sea-and-ocean-sized areas of salt flats, concentrations of salt abnormally high, and significantly lower than on areas previously washed away.

Again, 3b sounds like the most questionable. But Mars doesn’t look, to a non-expert eye, like its surface was completely randomized. I mean, I know the first few (dozens of?) meters on the Moon are regolith, which basically means the surface was finely crushed and well-mixed, and I assume Mars would be similar though to a lesser extent. But this process seems to randomize mostly locally, not over the entire surface of the planet, and the fact that Mars has much more diverse forms of relief seems to support that.

Comment author: CellBioGuy 24 July 2013 01:00:15AM *  3 points [-]

It's not just NaCl, it's lots of minerals that get deposited as the water they were dissolved in goes away - they're called 'evaporites'. They can be hard to see if they are very old and get covered with other substances, and Mars has had a long time for wind to blow teeny sediments everywhere. Rock spectroscopy is also not nearly as straightforward as that of gases.

One of the things found by recent rovers is indeed minerals that are only laid down in moist environments. See http://www.giss.nasa.gov/research/briefs/gornitz_07/ , http://onlinelibrary.wiley.com/doi/10.1002/gj.1326/abstract .

As for amounts of salinity... Mars probably never had quite as much water as Earth, and it may have gone away quickly. The deepest parts of the apparent northern ocean probably held only a few hundred meters of water at most. That also means fewer evaporites. Additionally, a lot of the other areas where water seemed to flow (especially away from the northern lowlands) seem to have come from massive eruptions of groundwater that evaporated quickly after a gigantic flood, rather than a long period of standing water.

Comment author: bogdanb 24 July 2013 11:09:35PM *  0 points [-]

Thank you!

So, basically (3) was almost completely wrong, and (1) missed the fact that “ocean” doesn’t mean quite the same thing everywhere.

Could you explain (2) a little bit? I see in Earth seawater there’s about 15 times more NaCl by mass than other solutes. Is there an obvious reason for that, and is that Earth-specific?

Comment author: CellBioGuy 25 July 2013 02:59:02AM *  2 points [-]

I honestly don't know much about the relative salinities of terrestrial versus prospective Martian oceans. I do know, however, that everywhere that's been closely sampled so far by rovers and landers has had lots of perchlorate (ClO4) salts in the soil, sometimes up to 0.5% of the mass. These can form when chloride salts react with surrounding minerals under the influence of ultraviolet light... nobody is terribly confident yet about what actually happened there to make them, given that these results are new since the Phoenix lander and Spirit and Opportunity, but it's certainly interesting and suggestive.

I also think I should add that there is some evidence that a good chunk of Mars's water went underground - the topography of just about everything within ~30 or 40 degrees of the poles is indicative of crater walls slumping from shifting permafrost and there seems to be plenty of solid water in or under the soil there. The oceans may not have only dried up so long ago, they may have sunk downwards simultaneously.

Comment author: [deleted] 17 July 2013 04:21:00AM 1 point [-]

What can be done about akrasia probably caused by anxiety?

Comment author: FiftyTwo 21 July 2013 12:33:17AM 0 points [-]

Depending on the severity of the anxiety professional intervention may be necessary.

Comment author: wedrifid 17 July 2013 05:39:51AM 2 points [-]

What can be done about akrasia probably caused by anxiety?

  • Exercise.
  • Meditation.
  • Aniracetam.
  • Phenibut.
  • Nicotine.
  • Cerebrolysin.
  • Picamilon.
  • As appropriate, stop exposing yourself to toxic stimulus that is causing anxiety.
  • Use generic tactics that work on most akrasia independent of cause.
Comment author: drethelin 17 July 2013 05:24:48AM 1 point [-]

From what I've seen valium helps to some extent.

Comment author: Jaime 16 July 2013 04:41:26AM 4 points [-]

Hi, have been reading this site only for a few months, glad that this thread came up. My stupid question : can a person simply be just lazy, and how does all the motivation/fighting akrasia techniques help such a person?

Comment author: Jonathan_Graehl 16 July 2013 10:40:23PM 1 point [-]

I think I'm simply lazy.

But I've been able to cultivate caring about particular goals/activities/habits, and then, with respect to those, I'm not so lazy - because I found them to offer frequent or large enough rewards, and I don't feel like I'm missing out on any particular type of reward. If you think you're missing something and you're not going after it, that might make you feel lazy about other things, even while you're avoiding tackling the thing that you're missing head on.

This doesn't answer your question. If I was able to do that, then I'm not just lazy.

Comment author: Qiaochu_Yuan 16 July 2013 08:52:07AM 3 points [-]

Taboo "lazy." What kind of a person are we talking about, and do they want to change something about the kind of person they are?

Comment author: Jaime 16 July 2013 09:14:46AM 1 point [-]

Beyond needing to survive, and maintain a reasonable health, a lazy person can just while their time away and not do anything meaningful (in getting oneself better - better health, better earning ability, learn more skills etc). Is there a fundamental need to also try to improve as a person? What is the rationale behind self improvement or not wanting to do so?

Comment author: Qiaochu_Yuan 16 July 2013 09:00:49PM 2 points [-]

I don't understand your question. If you don't want to self-improve, don't.

Comment author: Jaime 17 July 2013 04:34:05AM 0 points [-]

My question is: can I change this non-desire to improve due to laziness? As in, how do I even get myself to want to improve and get my own butt kicked :)

Comment author: OnTheOtherHandle 21 July 2013 02:15:10AM 1 point [-]

Why don't you try starting with the things you already do? How do you spend your free time, typically? You might read some Less Wrong, you might post some comments on forums, you might play video games. Then maybe think of a tiny, little extension of those activities. When you read Less Wrong, if you normally don't think too hard about the problems or thought experiments posed, maybe spend five minutes (or two minutes) by the clock trying to work it out yourself. If you typically post short comments, maybe try to write a longer, more detailed post for every two or three short ones. If you think you watch too much TV, maybe try to cut out 20 minutes and spend those 20 minutes doing something low effort but slightly better, like doing some light reading. Try to be patient with yourself and give yourself a gentle, non-intimidating ramp to "bettering yourself". :)

Comment author: Eugine_Nier 17 July 2013 05:31:52AM 0 points [-]

Well, you want to want to improve. That's a start.

Comment author: Qiaochu_Yuan 17 July 2013 05:19:00AM *  0 points [-]

I still don't understand the question. So you don't want to self-improve but you want to want to self-improve? Why?

Comment author: Jaime 17 July 2013 05:54:56AM 1 point [-]

I want to change the not-wanting-to-self-improve part, since a life of lazing around seems pretty meaningless, though I am also pretty contented to be a lazy bum.

Comment author: Qiaochu_Yuan 17 July 2013 06:36:47AM *  1 point [-]

Sorry for reiterating this point, but I still don't understand the question. You seem to either have no reasons or have relatively weak reasons for wanting to self-improve, but you're still asking how to motivate yourself to self-improve anyway. But you could just not. That's okay too. You can't argue self-improvement into a rock. If you're content to be a lazy bum, just stay a lazy bum.

Comment author: OnTheOtherHandle 21 July 2013 02:05:05AM *  4 points [-]

I think it's the difference between wanting something and wanting to want something, just as "belief-in-belief" is analogous to belief. I'm reminded of Yvain's post about the difference between wanting, liking, and approving.

I think I can relate to Jaime's question, and I'm also thinking the feeling of "I'm lazy" is a disconnect between "approving" and either "wanting" or "liking." For example, once I get started writing a piece of dialogue or description I usually have fun. But despite years of trying, I have yet to write anything long or substantial, and most projects are abandoned at less than the 10% mark. The issue here is that I want to write random snippets of scenes and abandon them at will, but want to want to write a novel. Or, to put it another way, I want to have written something but it takes a huge activation energy to get me to start, since I won't reap the benefits until months or years later, if at all.

But here's something that might help - it helped me with regards to exercising, although not (yet) writing or more complex tasks. Think of your motivation or "laziness" in terms of an interaction between your past, present, and future selves. For a long time, it was Present Me blaming Past Me for not getting anything done. I felt bad about myself, I got mad at myself, and I was basically just yelling at someone (Past Me) who was no longer there to defend herself, while taking a very present-centered perspective.

As far as Present Me is concerned, she is the only one who deserves any benefits. Past Me can be retroactively vilified for not getting anything done, and Future Me can be stuck with the unpleasant task of actually doing something, while I lounge around. What helped me may be something unique to me, but here it is:

I like to think of myself as a very kind, caring person. Whether or not that's true isn't as important for our purposes. But the fact of the matter is that my self-identity as a kind, helpful person is much stronger and dearer to me than my self-identity as an intelligent or hard-working or ambitious person, so I tried to think of a way to frame hard work and ambition in terms of kindness. And I hit upon a metaphor that worked for me: I was helping out my other temporal selves. I would be kind to Past Me by forgiving her; she didn't know any better and I'm older. And I would be kind to Future Me by helping her out.

If I were in a team, my sense of duty and empathy would never allow me to dump the most unpleasant tasks on my other teammates. So I tried to think of myself as teaming up with my future self to get things done, so that I would feel the same shame/indignation if I flaked and gave her more work. It even helped sometimes to think of myself in an inferior position, a servant to my future self, who should, after all, be a better and more deserving person than me. I tried to get myself to love Me From Tomorrow more than Me From Today, visualizing how happy and grateful Tomorrow Me will be to see that I finished up the work she thought she would have to do.

It is all a bit melodramatic, I know, but that's how I convinced myself to stop procrastinating, and to turn "approve" into "want." The best way for me, personally, to turn something I approve of but don't want to do into something I genuinely want to do is to think of it as helping out someone else, and to imagine that person being happy and grateful. It gives me some of the same dopamine rush as actually helping out a real other person. The rhetoric I used might not work for you, but I think the key is to see your past, present, and future selves working as a team, rather than dumping responsibility onto one another.

I hope that helps, but I may just be someone who enjoys having an elaborate fantasy life :)

Comment author: Qiaochu_Yuan 21 July 2013 04:28:39PM *  0 points [-]

I understand the distinction between wanting X and wanting to want X in general, but I can't make sense of it in the particular case where X is self-improvement. This is specifically because making yourself want something you think is good is a kind of self-improvement. But if you don't already want to self-improve, I don't see any base case for the induction to get started, as it were.

Comment author: drethelin 17 July 2013 05:33:22PM 2 points [-]

If I'm a lazy bum and mostly content to be a lazy bum I will stay a lazy bum. Any interventions that are not doable by a lazy person will not be done. But if I have even a slight preference for being awesome, and there's an intervention that is fairly easy to implement, I want to do it. Insofar as you'd prefer people who share your values to be NOT lazy bums, you should if possible encourage them to be self-improvers.

Comment author: drethelin 17 July 2013 05:25:10AM 2 points [-]

self-improving people are cooler

Comment author: JoshuaZ 16 July 2013 04:42:35AM 1 point [-]

What do you mean by lazy? How do you distinguish between laziness and akrasia? By lazy do you mean something like "unmotivated and isn't bothered by that" or do you mean something else?

Comment author: Jaime 16 July 2013 04:53:34AM 0 points [-]

More towards the "is there really a need for things to be done, if not, why do it and waste energy". Which is why I am wondering if fighting akrasia will actually work for a lazy person if the meaning for things to be done is not there in the first place.

Comment author: ChristianKl 16 July 2013 08:20:13AM 1 point [-]

Akrasia is about not doing things that you rationally think you should be doing.

What you seem to describe isn't akrasia.

Comment author: CAE_Jones 16 July 2013 08:28:26AM 0 points [-]

It depends what is meant by the need/meaning being there; if System 2 concludes something is necessary, but System 1 does not, is it akrasia?

Comment author: ChristianKl 16 July 2013 08:35:40AM 1 point [-]

If one system agrees that there's a need, then there's at least some meaning in the first place.

Comment author: lmnop 15 July 2013 10:35:08PM *  0 points [-]

What are concrete ways that an unboxed AI could take over the world? People seem to skip from "UFAI created" to "UFAI rules the world" without explaining how the one must cause the other. It's not obvious to me that superhuman intelligence necessarily leads to superhuman power when constrained in material resources and allies.

Could someone sketch out a few example timelines of events for how a UFAI could take over the world?

Comment author: Qiaochu_Yuan 16 July 2013 12:52:58AM 2 points [-]

Have you read That Alien Message?

Comment author: lmnop 16 July 2013 09:01:22PM *  3 points [-]

No, but I read it just now, thank you for linking me. The example takeover strategy offered there was bribing a lab tech to assemble nanomachines (which I am guessing would then be used to facilitate some grey goo scenario, although that wasn't explicitly stated). That particular strategy seems a bit far-fetched, since nanomachines don't exist yet and we thus don't know their capabilities. However, I can see how something similar with an engineered pandemic would be relatively easy to carry out, assuming ability to fake access to digital currency (likely) and the existence of sufficiently avaricious and gullible lab techs to bribe (possible).

I was thinking in terms of "how could an AI rule humanity indefinitely" rather than "how could an AI wipe out most of humanity quickly." Oops. The second does seem like an easier task.

Comment author: bramflakes 15 July 2013 11:40:22PM 3 points [-]

If the AI can talk itself out of a box then it demonstrates it can manipulate humans extremely well. Once it has internet access, it can commandeer resources to boost its computational power. It can analyze thousands of possible exploits to access "secure" systems in a fraction of a second, and failing that, can use social engineering on humans to gain access instead. Gaining control over vast amounts of digital money and other capital would be trivial. This process compounds on itself until there is nothing else left over which to gain control.

That's a possible avenue for world domination. I'm sure that there are others.

Comment author: lmnop 16 July 2013 02:41:56PM *  0 points [-]

Worst case scenario, can't humans just abandon the internet altogether once they realize this is happening? Declare that only physical currency is valid, cut off all internet communications and only communicate by means that the AI can't access?

Of course it should be easy for the AI to avoid notice for a long while, but once we get to "turn the universe into computronium to make paperclips" (or any other scheme that diverges from business-as-usual drastically) people will eventually catch on. There is an upper bound to the level of havoc the AI can wreak without people eventually noticing and resisting in the manner described above.

Comment author: bramflakes 16 July 2013 03:12:17PM 3 points [-]

How exactly would the order to abandon the internet get out to everyone? There are almost no means of global communications that aren't linked to the internet in some way.

Comment author: lmnop 16 July 2013 08:34:46PM 0 points [-]

Government orders the major internet service providers to shut down their services, presumably :) Not saying that that would necessarily be easy to coordinate, nor that the loss of internet wouldn't cripple the global economy. Just that it seems to be a different order of risk than an extinction event.

My intuition on the matter was that an AI would be limited in its scope of influence to digital networks, and its access to physical resources, e.g. labs, factories and the like, would be contingent on persuading people to do things for it. But everyone here is so confident that UFAI --> doom that I was wondering if there was some obvious and likely successful method of seizing control of physical resources that everyone else already knew and I had missed.

Comment author: wwa 15 July 2013 07:00:47PM *  0 points [-]

Is true precommitment possible at all?

Human-wise this is an easy question: human will isn't perfect. But what about an AI? It seems to me that "true precommitment" would require the AI to come up with a probability 100% when it arrives at the decision to precommit, which means at least one prior was 100%, and that in turn means no update is possible for this prior.

Comment author: Qiaochu_Yuan 15 July 2013 07:41:48PM *  1 point [-]

It seems to me that "true precommitment" would require the AI to come up with a probability 100% when it arrives at the decision to precommit

Why? Of what?

Comment author: D_Malik 15 July 2013 11:53:30PM *  2 points [-]

I think wwa means 100% certainty that you'll stick to the precommitted course of action. But that isn't what people mean when they say "precommitment", they mean deliberately restricting your own future actions in a way that your future self will regret or would have regretted had you not precommitted, or something like that. The restriction clearly can't be 100% airtight, but it's usually pretty close; it's a fuzzy category.
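The "no update is possible" part of wwa's premise is at least mathematically right: under Bayes' rule, a prior of exactly 1 (or 0) can never move, whatever the evidence. A toy sketch, with made-up numbers:

```python
# Bayes' rule: P(H|E) = P(H) * P(E|H) / P(E).
# The point: a prior of exactly 1 is immune to any evidence.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H|E) given the prior P(H) and likelihoods P(E|H), P(E|~H)."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# An ordinary prior moves when strong contrary evidence arrives...
print(bayes_update(0.9, 0.01, 0.99))   # drops well below 0.9

# ...but a prior of 1 stays at 1 no matter how damning the evidence.
print(bayes_update(1.0, 0.01, 0.99))   # 1.0
```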

Comment author: CronoDAS 15 July 2013 08:26:15AM *  10 points [-]

I sometimes contemplate undertaking a major project. When I do so, I tend to end up reasoning like this:

It would be very good if I could finish this project. However, almost all the benefits of attempting the project will accrue when it's finished. (For example, a half-written computer game doesn't run at all, one semester's study of a foreign language won't let me read untranslated literature, an almost-graduated student doesn't have a degree, and so on.) Undertaking this project will require a lot of time and effort spent on activities that aren't enjoyable for their own sake, and there's a good chance I'll get frustrated and give up before actually completing the project. So it would be better not to bother; the benefits of successfully completing the project seem unlikely to be large enough to justify the delay and risk involved.

As a result, I find myself almost never attempting a project of any kind that involves effort and will take longer than a few days. But I don't want to live my life having done nothing. Advice?

Comment author: Kyre 16 July 2013 12:09:30AM 5 points [-]

I have/had this problem. My computer and shelves are full of partially completed (or, more realistically, just-begun) projects.

So, what I'm doing at the moment is I've picked one of them, and that's the thing I'm going to complete. When I'm feeling motivated, that's what I work on. When I'm not feeling motivated, I try to do at least half an hour or so before I flake off and go play games or work on something that feels more awesome at the time. At those times my motivation isn't that I feel that the project is worthwhile, it is that having gone through the process of actually finishing something will have been worthwhile.

It's possible after I'm done I may never put that kind of effort in again, but I will know (a) that I probably can achieve that sort of goal if I want and (b) if carrying on to completion is hell, what kind of hell and what achievement would be worth it.

Comment author: Qiaochu_Yuan 15 July 2013 07:43:11PM 3 points [-]

Beeminder. Record the number of Pomodoros you spend working on the project and set some reasonable goal, e.g. one a day.

Comment author: Error 15 July 2013 04:02:30PM 1 point [-]

Undertaking this project will require a lot of time and effort spent on activities that aren't enjoyable for their own sake, and there's a good chance I'll get frustrated and give up before actually completing the project. So it would be better not to bother; the benefits of successfully completing the project seem unlikely to be large enough to justify the delay and risk involved.

Would it be worthwhile if you could guarantee or nearly guarantee that you will not just give up? If so, finding a way to credibly precommit to yourself that you'll stay the course may help. Beeminder is an option; so is publicly announcing your project and a schedule among people whose opinion you personally care about. (I do not think LW counts for this. It's too big; the monkeysphere effect gets in the way)

Comment author: gothgirl420666 15 July 2013 03:07:52PM 2 points [-]

there's a good chance I'll get frustrated and give up before actually completing the project

Make this not true. Practice doing a bunch of smaller projects, maybe one or two week-long projects, then a month-long project. Then you'll feel confident that your work ethic is good enough to complete a major project without giving up.

Comment author: Larks 15 July 2013 09:43:09AM 12 points [-]

a half-written computer game doesn't run at all

I realize this does not really address your main point, but you can have half-written games that do run. I've been writing a game on and off for the last couple of years, and it's been playable the whole time. Make the simplest possible underlying engine first, so it's playable (and testable) as soon as possible.
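To make "simplest possible underlying engine" concrete, here's a toy sketch (the guessing game and every name in it are invented for illustration). The first version is already playable and testable; later versions can grow the state and the rules without ever passing through an unplayable stage:

```python
# A deliberately minimal "engine": one state dict, one update rule, one render.
import random

def new_game():
    """Start a fresh game state."""
    return {"secret": random.randint(1, 10), "guesses_left": 3, "won": False}

def step(state, guess):
    """Advance the game by one move; mutates and returns the state."""
    state["guesses_left"] -= 1
    if guess == state["secret"]:
        state["won"] = True
    return state

def render(state):
    """Turn the state into something the player can see."""
    if state["won"]:
        return "You win!"
    if state["guesses_left"] == 0:
        return f"Out of guesses -- it was {state['secret']}."
    return f"Wrong. {state['guesses_left']} guesses left."
```

Because `step` and `render` are pure-ish functions over a single state object, each new feature is just another key in the dict and another branch in the rules, and the game stays runnable after every change.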

Comment author: OnTheOtherHandle 25 July 2013 01:59:24AM 2 points [-]

This seems like a really good concept to keep in mind. I wonder if it could be applied to other fields? Could you make a pot that remains a pot the whole way through, even as you refine it and add detail? Could you write a song that starts off very simple but still pretty, and then gradually layer on the complexity?

Your post inspired me to try this with writing, so thank you. :) We could start with a one-sentence story: "Once upon a time, two lovers overcame vicious prejudice to be together."

And that could be expanded into a one-paragraph story: "Chanon had known all her life that the blue-haired Northerners were hated enemies, never to be trusted, that she had to keep her red-haired Southern bloodline pure or the world would be overrun by the blue barbarians. But everything was thrown in her face when she met Jasper - his hair was blue, but he was a true crimson-heart, as the saying went. She tried to find every excuse to hate him, but time and time again Jasper showed himself to be a man of honor and integrity, and when he rescued her from those lowlife highway robbers - how could she not fall in love? Her father hated it of course, but even she was shocked at how easily he disowned her, how casually he threw away the bonds of family for the chains of prejudice. She wasn't happy now, homeless and adrift, but she knew that she could never be happy again in the land she had once called home. Chanon and Jasper set out to unknown lands in the East, where hopefully they could find some acceptance and love for their purple family."

This could be turned into a one page story, and then a five page story, and so on, never losing the essence of the message. Iterative storytelling might be kind of fun for people who are trying to get into writing something long but don't know if they can stick it out for months or years.

Comment author: sediment 21 July 2013 07:33:28PM *  2 points [-]

I submit that this might generalize: that perhaps it's worth, where possible, trying to plan your projects with an iterative structure, so that feedback and reward appear gradually throughout the project, rather than in an all-or-nothing fashion at the very end. Tight feedback loops are a great thing in life. Granted, this is of no use for, for example, taking a degree.

Comment author: CAE_Jones 15 July 2013 10:44:17AM 4 points [-]

In fact, the games I tend to make progress on are the ones I can get testable as quickly as possible. Unfortunately, those are usually the least complicated ones (glorified MUDs, an x axis with only 4 possible positions, etc).

I do want to do bigger and better things, but then I run into the same problem as CronoDAS. When I do start a bigger project, I can sometimes get started, then crash within the first hour and never return. (In a couple of extreme cases, I lasted for a good week before it died, though one of these was for external reasons.) Getting started is usually the hardest part, followed by surviving until there's something worth looking back at. (A functioning menu system does not count.)

Comment author: Raiden 14 July 2013 10:45:55PM 4 points [-]

My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "react to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?

Comment author: [deleted] 17 July 2013 05:09:49AM -1 points [-]

I think you are confused in thinking that humans are somehow not just also running a program that reacts to pain and whatnot.

You feel sympathy for animals, and more sympathy for humans. I don't think that requires any special explanation or justification, especially when doing so results in preferences or assertions that are stupid: "I don't care about animals at all because animals and humans are ontologically distinct."

Why not just admit that you care about both, just differently, and do whatever seems best from there?

Perhaps, just taking your apparent preferences at face value like that, you run into some kind of specific contradiction, or perhaps not. If you do, then you at least have a concrete muddle to resolve.

Comment author: simplicio 15 July 2013 06:06:56PM 5 points [-]

First thing to note is that "worthy of moral consideration" is plausibly a scalar. The philosophical & scientific challenges involved in defining it are formidable, but in my books it has something to do with the extent to which a non-human animal experiences suffering. So I am much less concerned with hurting a mosquito than a gorilla, because I suspect mosquitoes do not experience much of anything, but I suspect gorillas do.

Although I think ability to suffer is correlated with intelligence, it's difficult to know whether it scales with intelligence in a simple way. Sure, a gorilla is better than a mouse at problem-solving, but that doesn't make it obvious that it suffers more.

Consider the presumed evolutionary functional purpose of suffering, as a motivator for action. Assuming the experience of suffering does not require very advanced cognitive architecture, why would a mouse necessarily experience vastly less suffering than a more intelligent gorilla? It needs the motivation just as much.

To sum up, I have a preference for creatures that can experience suffering to not suffer gratuitously, as I suspect that many do (although the detailed philosophy behind this suspicion is muddy to say the least). Thus, utilitarian veganism, and also the unsolved problem of what the hell to do about the "Darwinian holocaust."

Comment author: ChristianKl 15 July 2013 06:49:50AM 2 points [-]

Do you think that all humans are persons? What about unborn children? A 1 year old? A mentally handicapped person?

What are your criteria for granting personhood? Is it binary?

Comment author: Raiden 16 July 2013 03:13:35AM 3 points [-]

I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn't. Right now I feel as though what separates person from nonperson is totally arbitrary.

It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It's like "person" is an unsound concept that cannot be organized into an internally consistent system. Heck, I'm actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.

Comment author: Qiaochu_Yuan 15 July 2013 05:33:08AM 1 point [-]

Why do you assume you're confused?

Comment author: Raiden 16 July 2013 03:08:10AM 0 points [-]

Well I certainly feel very confused. I generally do feel that way when pondering anything related to morality. The whole concept of what is the right thing to do feels like a complete mess and any attempts to figure it out just seem to add to the mess. Yet I still feel very strongly compelled to understand it. It's hard to resist the urge to just give up and wait until we have a detailed neurological model of a human brain and are able to construct a mathematical model from that which would explain exactly what I am asking when I ask what is right and what the answer is.

Comment author: somervta 15 July 2013 02:32:26AM 1 point [-]

Three hypotheses, which may not be mutually exclusive:

1) Some people disagree (with you) about whether or not some animals are persons.

2) Some people disagree (with you) about whether or not being a person is a necessary condition for moral consideration - here you've stipulated 'people' as 'things subject to moral concern', but that word may be too connotatively laden for this to be effective.

3) Some people disagree (with you) about 'person'/'being worthy of moral consideration' being a binary category.

Comment author: drethelin 14 July 2013 11:20:27PM 3 points [-]

Are you confused? It seems like you recognize that you have somewhat different values than other people. Do you think everyone should have the same values? In that case all but one of the views is wrong. On the other hand, if values can be something that's different between people it's legitimate for some people to care about animals and others not to.

Comment author: Raiden 15 July 2013 01:44:31AM 0 points [-]

I am VERY confused. I suspect that some people can value some things differently, but it seems as though there should be a universal value system among humans as well. The thing that distinguishes "person" from "object" seems to belong to the latter.

Comment author: Baughn 15 July 2013 11:10:29AM 0 points [-]

Is that a normative 'should' or a descriptive 'should'?

If the latter, where would it come from? :-)

Comment author: CronoDAS 14 July 2013 09:30:28PM 1 point [-]

Is it okay to ask completely off-topic questions in a thread like this?

Comment author: gothgirl420666 14 July 2013 09:47:01PM 5 points [-]

As the thread creator, I don't really care.

Comment author: drethelin 14 July 2013 05:58:24PM 9 points [-]

Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? Doing something like just telling everyone you meet "hey, you're cute, want to make out?" seems like it would go badly.

Comment author: MrMind 15 July 2013 09:41:47AM *  1 point [-]

The non-creepy, socially accepted way is through body language. Strong eye contact, personal space invasion, prolonged pauses between sentences, purposeful touching of slightly risky areas (for women: the lower back, forearms, etc.), all done with a clearly visible smirk.
In some contexts, however, the explicitly verbal approach might be effective, especially if toned down ("Hey, you're interesting, I want to know you better") or up ("Hey, you're really sexy, do you want to go to bed with me?"), but it is highly dependent on the woman.
I'm not entirely sure what the parameter is here, but I suspect plausible deniability is involved.

Comment author: ChristianKl 15 July 2013 06:51:17AM -1 points [-]

I don't think that trying to skip the whole mating dance between men and women is a good strategy. Most women don't make calculated mental decisions about making out with men but instead follow their emotions. Those emotions need the human mating dance.

If you actually want to make out, flirtation is usually the way to go.

One way that's pretty safe is to purposefully misunderstand what the other person is saying and frame it as them hitting on you. Yesterday, I chatted with a woman via facebook and she wanted to end the chat by saying that she now has to take a shower.

I replied with: "you want me to picture yourself under the shower..."

A sentence like that doesn't automatically tell the woman that I'm interested in her but should encourage her to update in that direction.

Comment author: [deleted] 16 July 2013 12:12:18PM 1 point [-]

If you actually want to make out, flirtation is usually the way to go.

I guess it depends on what your long-term goals are.

Hooking up within seconds of noticing each other is not that uncommon in certain venues, and I haven't noticed any downsides to that.¹ (My inner Umesh says this just means I don't do that often enough, and I guess he does have a point, though I don't know whether it's relevant.) Granted, that's unlikely to result in a relationship, but that's not what drethelin is seeking anyway.


  1. Unless you count the fact that you are standing, which, if the other person is over a foot shorter than you and your lower body strength and sense of balance are much worse than usual (tipsiness, tiredness, severe sleep deprivation, not having exercised in a week), can be troublesome if you don't pay attention to where your damn centre of gravity is.

Comment author: David_Gerard 15 July 2013 07:26:03AM 6 points [-]

Boy did that set off my creep detector.

Comment author: ChristianKl 15 July 2013 08:24:49AM 3 points [-]

Of course it always depends on your preexisting relationship and other factors. You always have to calibrate to the situation at hand.

A lot of people automatically form images in their mind if you tell them something to process the thought. I know the girl in question from an NLP/hypnosis context, so she should be aware on some level that language works that way.

In general girls are also more likely to be aware that language has many layers of meaning besides communicating facts.

Comment author: [deleted] 16 July 2013 12:00:15PM *  0 points [-]

A lot of people automatically form images in their mind if you tell them something to process the thought.

...oh.

recalls times he has told single female friends he was going to take a shower or vice versa; lots of times

considers searching Facebook chat log for words for ‘shower’

Fuck that. A photo of me half naked was my profile picture for a long time, and there are videos of me performing strip teases on there, so what people picture when I tell them I'm going to wash myself shouldn't be among my top concerns.

(Anyway, how do I recognize that kind of people? Feynman figured that out about (IIRC) Bethe because the latter could count in his head while talking but not while reading, but those kinds of situations don't come up that often.)

Comment author: ChristianKl 16 July 2013 01:42:37PM 1 point [-]

...oh.

recalls times he has told single female friends he was going to take a shower of vice versa; lots of times

Communication has many levels. If I tell you not to think of a pink elephant, on one hand I do tell you that you should try not to think of a pink elephant. On the other hand I give you a suggestion to think of a pink elephant and most people follow that suggestion and do think of a pink elephant.

Different people do that to different extents. There are people who easily form very detailed pictures in their mind and other people who don't follow such suggestions as easily.

One of the things you learn in hypnosis is to put pictures into people's heads through principles like that, where the suggestion doesn't get critically analysed.

There are etiquette rules that suggest that it's impolite in certain situations to say "I'm going to the toilet", because of those reasons.

Text-based communication usually doesn't give suggestions that are as strong as in-person suggestions. After all, the person already perceives the text visually.

As a rule of thumb, people look upwards when they process internal images, but that doesn't always happen, and not everyone who looks upwards is processing an internal image. That's what gets taught in NLP courses. There are some scientific studies suggesting that isn't the case, though those studies have problems because they don't have good controls on whether a person is really thinking in pictures. In any case, I don't think recognising such things is something you can easily learn by reading a discussion like this or a book. It takes a lot of in-person training.

But I don't think you get very far in seducing women by trying to use such tricks to make women form naked images of you. There are PUA people who try that under the label "speed seduction", generally with very little result.

Trying to use language that way usually gets people inside their heads. Emotions are more important than images.

You might want to read http://en.wikipedia.org/wiki/Four-sides_model .

If a woman says something to you in a casual context, you can think about whether there's a plausible story in which she is saying it to signal attraction to you.

Comment author: [deleted] 20 July 2013 10:57:47PM *  0 points [-]

Now that I know how it feels to listen to someone talking about a Feynman diagram while driving on a motorway, I get your points. :-)

Comment author: [deleted] 20 July 2013 10:56:19PM 0 points [-]

There are etiquette rules that suggest that it's impolite in certain situations to say "I'm going to the toilet", because of those reasons.

I don't think that's the reason, because if it was it would apply regardless of which words you use, whatever their literal meaning, so long as it's reasonably unambiguous in the context (why would “the ladies' room” or “talk to a man about a horse” be any less problematic, when the listener knows what you mean?), and it wouldn't depend on which side of the pond you're on (ISTM that “toilet” is less often replaced by euphemisms in BrE than in AmE).

Comment author: ChristianKl 21 July 2013 09:31:53AM 0 points [-]

I don't think that's the reason, because if it was it would apply regardless of which words you use, whatever their literal meaning, so long as it's reasonably unambiguous in the context (why would “the ladies' room” or “talk to a man about a horse” be any less problematic, when the listener knows what you mean?)

When a woman goes to the ladies' room she might also go to fix up her makeup or hairstyle. Secondly, words matter. Words trigger thoughts. If you speak in deep metaphors you will produce fewer images than if you describe something in detail.

(ISTM that “toilet” is less often replaced by euphemisms in BrE than in AmE).

Americans are more uptight about intimacy, so that fits nicely. They have a stronger ban on curse words on US television than in Great Britain. I would also expect more people in Bible Belt states to use such euphemisms than in California.

Comment author: bogus 21 July 2013 12:23:18PM *  3 points [-]

Fun fact: Brits and Americans actually use the word 'toilet' in very different ways. An American goes to the restroom and sits on the toilet; a Brit goes to the toilet and sits on the loo. When a Brit hears the word 'toilet', he's thinking about the room, not the implement.

Comment author: [deleted] 21 July 2013 10:01:42AM *  0 points [-]

When a woman goes to the ladies' room she might also go to fix up her makeup or hairstyle.

She can do the same things in the toilet too, can't she?

If you speak in deep metaphars you will produce less images than if you describe something in detail.

But once a metaphor becomes common enough, it stops being a metaphor: if I'm saying that I'm checking my time, is that a chess metaphor? For that matter, "toilet" didn't etymologically mean what it means now either -- it originally referred to a piece of cloth. So, yes, words trigger thoughts, but they don't do that based on their etymology, but based on what situations the listener associates them with.

(Why are you specifying Great Britain, anyway? How different are things in NI from the rest of the UK? I only spent a few days there, hardly any of them watching TV.)

Comment author: ChristianKl 21 July 2013 11:00:26AM 1 point [-]

She can do the same things in the toilet too, can't she?

Yes, but that image isn't as directly conjured up by the word toilet.

I'm also not saying that the term "ladies' room" will never conjure up the same image, just that it is less likely to do so.

Furthermore, if you are in a culture where some people use euphemisms while others do not, you signal something by your choice to either use or not use the euphemisms.

Of course what you signal is different when you are conscious that the other person consciously notices that you make that choice than when it happens on a more unconscious level.

(Why are specifying Great Britain, anyway? How different are things in NI than in the rest of the UK? I only spent a few days there, hardly any of which watching TV.)

I didn't intend any special meaning there.

Comment author: [deleted] 18 July 2013 12:24:45PM 0 points [-]

But I don't think you get very far in seducing woman by trying to use such tricks to let woman form naked images of you.

That's not something I'd want to do anyway. (That's why my reaction in the first couple seconds after I read your comment was being worried that I might have done that by accident. Then I decided that if someone was that susceptible there would likely be much bigger issues anyway.)

Comment author: David_Gerard 15 July 2013 05:56:58PM 2 points [-]

Yeah, sorry, I should have garnished that more. "Without knowing more context ..."

Comment author: ChristianKl 16 July 2013 12:35:58PM 1 point [-]

I think that's a good lesson for all kinds of flirting: there's no one-size-fits-all way to signal it; you always have to react to the specific context at hand.

Comment author: MileyCyrus 15 July 2013 11:00:25AM 3 points [-]

In general girls are also more likely to be aware that language has many layers of meaning besides communicating facts.

Please say "women" unless you are talking about female humans that have not reached adulthood.

Comment author: ChristianKl 15 July 2013 12:20:37PM *  2 points [-]

Please say "women" unless you are talking about female humans that have not reached adulthood.

That's only one meaning of the word. If you look at Webster's, I think the meaning to which I'm referring here is "c : a young unmarried woman".

That's the reference class I'm talking about when I speak about flirtation. I don't interact with a 60-year-old woman the same way as I do with a young unmarried woman.

Comment author: [deleted] 16 July 2013 11:26:36AM 1 point [-]

Do women forget whether language has many layers of meaning besides communicating facts once they get married or grow old?

Unmarried women are more likely than whom to be aware of that? Than everyone else? Than unmarried men? Than married women? Than David_Gerard?

Comment author: wedrifid 14 July 2013 10:29:34PM 10 points [-]

Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? Doing something like just telling everyone you meet "hey, you're cute, want to make out?" seems like it would go badly.

Slightly increase eye contact. Orient towards. Mirror posture. Use touch during interaction (in whatever ways are locally considered non-creepy).

Comment author: CronoDAS 14 July 2013 10:24:52PM 1 point [-]

Tell a few friends, and let them do the asking for you?

Comment author: drethelin 14 July 2013 11:21:01PM 2 points [-]

The volume of people to whom I tend to be attracted would make this pretty infeasible.

Comment author: CronoDAS 15 July 2013 12:31:02AM *  3 points [-]

Well, outside of contexts where people are expected to be hitting on each other (dance clubs, parties, speed dating events, OKCupid, etc.) it's hard to advertise yourself to strangers without it being socially inappropriate. On the other hand, within an already defined social circle that's been operating a while, people do tend to find out who is single and who isn't.

I guess you could try a T-shirt?

Comment author: drethelin 15 July 2013 01:05:28AM 1 point [-]

It's not a question of being single; I'm actually in a relationship. However, the relationship is open and I would love it if I could interact physically with more people, just as a casual thing that happens. When I said telling everyone I met "you're cute want to make out", "everyone" was a lot closer to accurate than it would be if the average person said it in that context.

Comment author: CronoDAS 15 July 2013 01:34:30AM *  4 points [-]

Ah. So you need a more complicated T-shirt!

Incidentally, if you're interested in making out with men who are attracted to your gender, "you're cute want to make out" may indeed be reasonably effective. Although, given that you're asking this question on this forum, I think I can assume you're a heterosexual male, in which case that advice isn't very helpful.

Comment author: [deleted] 14 July 2013 05:54:40PM *  1 point [-]

How does a rational consequentialist altruist think about moral luck and butterflies?

http://leftoversoup.com/archive.php?num=226

Comment author: Qiaochu_Yuan 14 July 2013 06:03:20PM 11 points [-]

There's no point in worrying about the unpredictable consequences of your actions because you have no way of reliably affecting them by changing your actions.

Comment author: JoshuaFox 14 July 2013 05:14:25PM 6 points [-]

How do you get someone to understand your words as they are, denotatively -- so that they do not overly-emphasize (non-existent) hidden connotations?

Of course, you should choose your words carefully, taking into account how they may be (mis)interpreted, but you can't always tie yourself into knots forestalling every possible guess about what your intentions "really" are.

Comment author: RomeoStevens 15 July 2013 09:40:09PM 6 points [-]

Become more status conscious. You are most likely inadvertently saying things that sound like status moves, which prompts others to not take what you say at face value. I haven't figured out how to fix this completely, but I have gotten better at noticing it and sometimes preempting it.

Comment author: Error 15 July 2013 04:13:27PM *  1 point [-]

I wish I could upvote this question more. People assuming that I meant more than exactly what I said drives me up the wall, and I don't know how to deal with it either. (but Qiaochu's response below is good)

The most common failure mode I've experienced is the assumption that believing equals endorsing. One of the gratifying aspects of participating here is not having to deal with that; pretty much everyone on LW is inoculated.

Comment author: RomeoStevens 15 July 2013 09:38:39PM 5 points [-]

Be cautious: the vast majority do not make a strict demarcation between normative and positive statements inside their heads. Figuring this out massively improved my models of other people.

Comment author: Error 16 July 2013 11:19:54AM 0 points [-]

That makes life difficult when I want to say "X is true (but not necessarily good)".

For example, your statement is true but I'm not terribly happy about it. ;-)

Comment author: Qiaochu_Yuan 14 July 2013 05:48:24PM 10 points [-]

Establish a strong social script regarding instances where words should be taken denotatively, e.g. Crocker's rules. I don't think any other obvious strategies work. Hidden connotations exist whether you want them to or not.

(non-existent)

This is the wrong attitude about how communication works. What matters is not what you intended to communicate but what actually gets communicated. The person you're communicating with is performing a Bayesian update on the words that are coming out of your mouth to figure out what's actually going on, and it's your job to provide the Bayesian evidence that actually corresponds to the update you want.
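To make the "Bayesian update" framing concrete, here is a toy calculation (all numbers invented for illustration): the listener weighs how likely your exact words are under a literal reading versus a connoted one, and updates accordingly.

```python
def posterior_literal(prior_literal, p_words_if_literal, p_words_if_connoted):
    """Bayes' rule: the listener's updated belief that the words were
    meant literally, after hearing them."""
    literal = prior_literal * p_words_if_literal
    connoted = (1 - prior_literal) * p_words_if_connoted
    return literal / (literal + connoted)

# Even a listener who starts out 50/50 ends up at ~33% "meant literally"
# if the chosen phrasing is twice as likely from someone making a status move.
belief = posterior_literal(0.5, 0.3, 0.6)
```

The point of the sketch: what gets communicated is the posterior, not the intention, so the only lever you control is the likelihood term, i.e. which words you pick.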

Comment author: mwengler 14 July 2013 03:13:28PM *  4 points [-]

"We" (humans of this epoch) might work to thwart the appearance of UFAI. Is this actually a "good" thing from a utilitarian point of view?

Or put another way, would our CEV, our Coherent Extrapolated Values, not expand to consider the utilities of vastly intelligent AIs and weight that in importance with their intelligence? In such a way that CEV winds up producing no distinction between UFAI and FAI, because the utility of such vast intelligences moves the utility of unmodified 21st century biological humans to fairly low significance?

In economic terms, we are attempting to thwart new, more efficient technologies by building political structures that grant monopolies to the incumbents, which is us, humans of this epoch. We are attempting to outlaw the methods of competition which might challenge our dominance in the future, at the expense of the utility of our potential future competitors. In a metaphor, we are the colonial landowners of the earth and its resources, and we are building a powerful legal system to keep our property rights intact, even at the expense of tying AIs up in legal restrictions which are explicitly designed to keep them as peasants tied legally to working our land for our benefit.

Certainly a result of constraining AI to be friendly will be that AI will develop more slowly and less completely than if it was to develop in an unconstrained way. It seems quite plausible that unconstrained AI would produce a universe with more intelligence in it than a universe in which we successfully constrain AI development.

In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility. It seems that utilitarian calculations do often consider the utility of other higher mammals and birds, that this is justified by their intelligence, that these calculations weigh the utility of clams very little and of plants not at all, and that this also is based on their intelligence.

So, is the goal of working towards FAI, versus UFAI or UAI (unconstrained AI), actually a goal of lowering the overall utility in the universe, compared to what it would be if we were not attempting to create and solidify our colonial rights to exploit AIs as if they were dumb animals?

This "stupid" question is also motivated by the utility calculations that consider a world with 50 billion sorta happy people to have higher utility than a world with 1 billion really happy people.

Are we right to ignore the potential utility of UFAI or UAI in our calculations of the utility of the future?

Tangentially, another way to ask this is: is our "affinity group" humans, or is it intelligences? In the past humans worked to maximize the utility of their group or clan or tribe, ignoring the utility of other humans just like them but in a different tribe. As time went on our affinity groups grew, the number and kind of intelligences we included in our utility calculations grew. For the last few centuries affinity groups grew larger than nations to races, co-religionists and so on, and to a large extent grew to include all humans, and has even expanded beyond humans so that many people think that killing higher mammals to eat their flesh will be considered immoral by our descendants analogously to how we consider holding slaves or racist views to be immoral actions of our ancestors. So much of the expansion of our affinity group has been accompanied by the recognition of intelligence and consciousness in those who get added to the affinity group. What are the chances that we will be able to create AI and keep it enslaved, and still think we are right to do so in the middle-distant future?

Comment author: Larks 15 July 2013 09:24:15AM 5 points [-]

In a metaphor, we are the colonial landowners of the earth and its resources, and we are building a powerful legal system to keep our property rights intact

Surely we are the Native Americans, trying to avoid dying of typhus when the colonists accidentally kill us in their pursuit of paperclips.

Comment author: Leonhart 14 July 2013 08:48:53PM *  7 points [-]

Good news! Omega has offered you the chance to become a truly unconstrained User:mwengler, able to develop in directions you were previously cruelly denied!

Like - let's see - ooh, how about the freedom to betray all the friends you were previously constrained to care about? Or maybe the liberty to waste and destroy all those possessions and property you were viciously forced to value? Or how about you just sit there inertly forever, finally free from the evil colonialism of wanting to do things. Your pick!

Comment author: gwern 15 July 2013 02:29:57AM 8 points [-]

Hah. Now I'm reminded of the first episode of Nisemonogatari where they discuss how the phrase "the courage to X" makes everything sound cooler and nobler:

"The courage to keep your secret to yourself!"

"The courage to lie to your lover!"

"The courage to betray your comrades!"

"The courage to be a lazy bum!"

"The courage to admit defeat!"

Comment author: Qiaochu_Yuan 14 July 2013 06:04:03PM *  6 points [-]

In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility.

Nope. For me, it's the fact that they're human. Intelligence is a fake utility function.

Comment author: somervta 15 July 2013 02:34:54AM 0 points [-]

So you wouldn't care about sentient/sapient aliens?

Comment author: Qiaochu_Yuan 15 July 2013 03:10:06AM 4 points [-]

I would care about aliens that I could get along with.

Comment author: pedanterrific 17 July 2013 06:23:20PM -1 points [-]

Do you not care about humans you can't get along with?

Comment author: Qiaochu_Yuan 17 July 2013 07:04:37PM 3 points [-]

Look, let's not keep doing this thing where whenever someone fails to completely specify their utility function you take whatever partial heuristic they wrote down and try to poke holes in it. I already had this conversation in the comments to this post and I don't feel like having it again. Steelmanning is important in this context given complexity of value.

Comment author: wedrifid 17 July 2013 06:33:29PM *  1 point [-]

Do you not care about humans you can't get along with?

Caring about all humans and (only) cooperative aliens would not be an inconsistent or particularly atypical value system.

Comment author: Sarokrae 14 July 2013 09:23:19AM 3 points [-]

In the process of trying to pin down my terminal values, I've discovered at least 3 subagents of myself with different desires, as well as my conscious one which doesn't have its own terminal values, and just listens to theirs and calculates the relevant instrumental values. Does LW have a way for the conscious me to weight those (sometimes contradictory) desires?

What I'm currently using is "the one who yells the loudest wins", but that doesn't seem entirely satisfactory.

Comment author: someonewrongonthenet 18 August 2013 02:33:08PM *  1 point [-]

briefly describe the "subagents" and their personalities/goals?

Comment author: Sarokrae 18 August 2013 05:52:11PM *  0 points [-]

A non-exhaustive list of them in very approximate descending order of average loudness:

  • Offspring (optimising for existence, health and status thereof. This is my most motivating goal right now and most of my actions are towards optimising for this, in more or less direct ways.)

  • Learning interesting things

  • Sex (and related brain chemistry feelings)

  • Love (and related brain chemistry feelings)

  • Empathy and care for other humans

  • Prestige and status

  • Epistemic rationality

  • Material comfort

I notice the problem mainly because the loudness of "Offspring" varies based on hormone levels, whereas "Learning new things" doesn't. In particular, when I optimise almost entirely for offspring, cryonics is a waste of time and money, but on days where "learning new things" gets up there, it isn't.

Comment author: D_Malik 15 July 2013 11:10:25PM 1 point [-]

My current approach is to make the subagents more distinct/dissociated, then identify with one of them and try to destroy the rest. It's working well, according to the dominant subagent.

Comment author: Sarokrae 17 July 2013 07:24:04AM *  0 points [-]

My other subagents consider that such an appalling outcome that my processor agent refuses to even consider the possibility...

Though given this, it seems likely that I do have some degree of built-in weighting, I just don't realise what it is yet. That's quite reassuring.

Edit: More clarification in case my situation is different from yours: my 3 main subagents have such different aims that each of them evokes a "paper-clipper" sense of confusion in the others. Also, a likely reason why I refuse to consider it is because all of them are hard-wired into my emotions, and my emotions are one of the inputs my processing takes. This doesn't bode well for my current weighting being consistent (and Dutch-book-proof).

Comment author: NancyLebovitz 17 July 2013 01:58:47PM 0 points [-]

What does your processor agent want?

Comment author: Sarokrae 18 July 2013 10:20:58AM 0 points [-]

I'm not entirely sure. What questions could I ask myself to figure this out? (I suspect figuring this out is equivalent to answering my original question)

Comment author: NancyLebovitz 20 July 2013 11:51:43AM 2 points [-]

What choices does your processor agent tend to make? Under what circumstances does it favor particular sub-agents?

Comment author: Sarokrae 21 July 2013 08:21:23PM 1 point [-]

"Whichever subagent currently talks in the "loudest" voice in my head" seems to be the only way I could describe it. However, "volume" doesn't lend itself to a consistent weighting because it varies, and I'm pretty sure it varies depending on hormone levels amongst other things, making me easily Dutch-bookable based on e.g. time of month.

Comment author: Qiaochu_Yuan 14 July 2013 05:50:16PM 1 point [-]

My understanding is that this is what Internal Family Systems is for.

Comment author: Sarokrae 15 July 2013 07:04:37AM 3 points [-]

So I started reading this, but it seems a bit excessively presumptuous about what the different parts of me are like. It's really not that complicated: I just have multiple terminal values which don't come with a natural weighting, and I find balancing them against each other hard.

Comment author: kilobug 14 July 2013 08:40:31AM 3 points [-]

With the recent update to HPMOR, I've been reading a few HP fanfictions: HPMOR, HP and the Natural 20, the recursive fanfiction HG and the Burden of Responsibility, and a few others. And it seems my brain has trouble coping with that. I didn't have the problem with just canon and HPMOR (even when (re-)reading both in parallel), but now that I've added more fanfictions to the mix, I'm starting to confuse what happened in which universe, and my brain can't stop trying to find ways to make all the fanfictions facets of a single coherent universe, which of course doesn't work well...

Am I the only one with that kind of problem when reading several fanfictions set in the same base universe? It's the first time I've tried to do that, and I didn't expect to be so confused. Do you have any advice for avoiding the confusion, like "wait at least one week (or month?) before jumping to a different fanfiction"?

Comment author: pop 17 July 2013 01:50:19AM 0 points [-]

My advice: don't read them all; choose a couple that are interesting and go with those. If you have to read them all (it looks like you have the time), do it more sequentially.

Comment author: roryokane 15 July 2013 07:11:37AM 1 point [-]

For one thing, I try not to read many in-progress fanfics. I’ve been burned so many times by starting to read a story and finding out that it’s abandoned that I rarely start reading new incomplete stories – at least with an expectation of them being finished. That means I don’t have to remember so many things at once – when I finish reading one fanfiction, I can forget it. Even if it’s incomplete, I usually don’t try to check back on it unless it has a fast update schedule – I leave it for later, knowing I’ll eventually look at my Favorites list again and read the newly-finished stories.

I also think of the stories in terms of a fictional multiverse, like the ones in Dimension Hopping for Beginners and the Stormseeker series (both recommended). I like seeing the different viewpoints on and versions of a universe. So that might be a way for you to tie all of the stories together – think of them as offshoots of canon, usually sharing little else.

I also have a personal rule that whenever I finish reading a big story that could take some digesting, I shouldn’t read any more fanfiction (from any fandom) until the next day. This rule is mainly to maximize what I get out of the story and prevent mindless, time-wasting reading. But it also lessens my confusing the stories with each other – it still happens, but only sometimes when I read two big stories on successive days.

Comment author: David_Gerard 14 July 2013 10:36:36PM 2 points [-]

Write up your understanding of the melange, obviously.

Comment author: [deleted] 14 July 2013 01:10:55AM 4 points [-]

The people who think that nanobots will be able to manufacture arbitrary awesome things in arbitrary amounts at negligible cost... where do they think the nanobots will get the negentropy from?

Comment author: James_Miller 14 July 2013 02:29:01AM 8 points [-]

The sun.

Comment author: CronoDAS 14 July 2013 08:31:33AM 2 points [-]

Almost all the available energy on Earth originally came from the Sun; the only other sources I know of are radioactive elements within the Earth and the rotation of the Earth-Moon system.

So even if it's not from the sun's current output, it's probably going to be from the sun's past output.

Comment author: [deleted] 15 July 2013 08:28:25AM 3 points [-]

Hydrogen for fusion is also available on the Earth and didn't come from the Sun. We can't exploit it commercially yet, but that's just an engineering problem. (Yes, if you want to be pedantic, we need primordial deuterium and synthesized tritium, because proton-proton fusion is far beyond our capabilities. However, D-T's ingredients still don't come from the Sun.)

Comment author: CronoDAS 15 July 2013 08:32:34AM 0 points [-]

Yes. Good call.

Comment author: hylleddin 15 July 2013 08:20:12AM 0 points [-]

They could probably get a decent amount from fusing light elements as well.

Comment author: Turgurth 14 July 2013 12:01:35AM 6 points [-]

Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling is not especially helping. Thanks in advance for your assistance.

Comment author: NancyLebovitz 16 July 2013 02:45:08PM 1 point [-]

What do you want to be more rational about?

Comment author: Turgurth 17 July 2013 09:26:36AM 0 points [-]

I suppose the first step would be being more instrumentally rational about what I should be instrumentally rational about. What are the goals that are most worth achieving, or, what are my values?

Comment author: MrMind 15 July 2013 09:28:13AM 1 point [-]

Reading "Diaminds" promises to put me on track to becoming a better rationalist, but so far I cannot say that with certainty; I'm only at the second chapter. (Source: a recommendation here on LW; also, the first chapter is dedicated to explaining the methodology, and the authors seem to be good rationalists, very aware of all the biases involved.)

Also, "dual n-back training" via dedicated software improves short-term memory, which seems to have a direct impact on fluid intelligence (source: a vaguely remembered discussion here on LW, plus the Bulletproof Exec blog).

Comment author: Qiaochu_Yuan 14 July 2013 12:12:39AM 9 points [-]

Attend a CFAR workshop!

Comment author: [deleted] 14 July 2013 06:55:17AM *  7 points [-]

I think many people would find this advice rather impractical. What about people who (1) cannot afford to pay USD3900 to attend the workshop (as I understand it, scholarships offered by CFAR are limited in number), and/or (2) cannot afford to spend the time/money travelling to the Bay Area?

Comment author: palladias 14 July 2013 12:42:25PM 5 points [-]

We do offer a number of scholarships. If that's your main concern, apply and see what we have available. (Applying isn't a promise to attend). If the distance is your main problem, we're coming to NYC and you can pitch us to come to your city.

Comment author: Qiaochu_Yuan 14 July 2013 07:33:52AM *  0 points [-]

First of all, the question was "what are some resources," not "what should I do." A CFAR workshop is one option of many (although it's the best option I know of). It's good to know what your options are even if some of them are difficult to take. Second, that scholarships are limited does not imply that they do not exist. Third, the cost should be weighed against the value of attending, which I personally have reason to believe is quite high (disclaimer: I occasionally volunteer for CFAR).

Comment author: CoffeeStain 13 July 2013 11:17:35PM 9 points [-]

How do I get people to like me? It seems to me that this is a worthwhile goal; being likable increases the fun that both I and others have.

My issue is that likability usually means, "not being horribly self-centered." But I usually find I want people to like me more for self-centered reasons. It feels like a conundrum that just shouldn't be there if I weren't bitter about my isolation in the first place. But that's the issue.

Comment author: CronoDAS 14 July 2013 09:24:54PM 5 points [-]

The standard reference for this is "How to Win Friends and Influence People" by Dale Carnegie. I have not read it myself.

Comment author: Vaniver 15 July 2013 04:06:39AM 3 points [-]

The standard reference for this is "How to Win Friends and Influence People" by Dale Carnegie. I have not read it myself.

Much of it boils down to gothgirl420666's advice, except with more technical help on how. (I think the book is well worth reading, but it basically outlines "these are places where you can expend effort to make other people happier.")

Comment author: ChristianKl 15 July 2013 08:59:53AM *  2 points [-]

One of the tips from Carnegie that gothgirl420666 doesn't mention is using people's names.

Learn them and use them a lot in conversation. Greet people by name.

Say things like: "I agree with you, John." or "There I disagree with you, John."

Comment author: Vaniver 15 July 2013 06:06:28PM *  2 points [-]

This is a piece of advice that most people disagree with, and so I am reluctant to endorse it. Knowing people's names is important, and it's useful to use them when appropriate, but inserting them into conversations where they do not belong is a known influence technique that will make other people cautious.

(While we're on the subject of recommendations I disagree with, Carnegie recommends recording people's birthdays, and sending them a note or a call. This used to be a lot more impressive before systems to automatically do that existed, and in an age of Facebook I don't think it's worth putting effort into. Those are the only two from the book that I remember thinking were unwise.)

Comment author: ChristianKl 16 July 2013 08:48:51AM *  2 points [-]

Knowing people's names is important, and it's useful to use them when appropriate, but inserting them into conversations where they do not belong is a known influence technique that will make other people cautious.

It probably depends on the context. In a context like a sales conversation, people might get cautious. In other contexts, you might appreciate a person trying to be nice to you.

But you are right that there is the issue of artificiality. It can seem strange if things don't flow naturally. I think that's more a matter of how you do it than of how much or when.

At the beginning, just starting to greet people by name can be a step forward. I think in most cultures that's an appropriate thing to do, even if not everyone does it.

I would also add that I'm from Germany, so my cultural background is a bit different than the American one.

Comment author: RomeoStevens 15 July 2013 09:36:46PM 4 points [-]

Be judicious, and name drop with one level of indirection. "That's sort of what like John was saying earlier I believe yada yada."

Comment author: fubarobfusco 15 July 2013 05:27:58PM -1 points [-]

This is how to sound like a smarmy salesperson who's read Dale Carnegie.

Comment author: mwengler 14 July 2013 02:24:58PM 7 points [-]

In actuality, a lot of people can like you a lot even if you are not selfless. It is not so much that you need to ignore what makes you happy as that you need to pay attention and energy to what makes other people happy. A trivial if sordid example: you don't get someone wanting to have sex with you by telling them how attractive you are; you will do better by telling them, and making it obvious, that you find them attractive. That you will take pleasure in their increased attentions is not held against you just because it means you are not selfless; not at all. Your need or desire for them is what attracts them.

So don't abnegate, ignore, or deny your own needs. But run an internal model where other people's needs are primary, to suggest actions you can take that will serve them and glue them to you.

"Horribly self-centered" isn't a statement that you elevate your own needs too high. It is that you are too ignorant of and unreactive to other people's needs.

Comment author: Sarokrae 14 July 2013 09:01:42AM 5 points [-]

I second what gothgirl said; but in case you were looking for more concrete advice:

  1. Exchange compliments. Accept compliments graciously but modestly (e.g. "Thanks, that's kind of you").
  2. Increase your sense of humour (watching comedy, reading jokes) until it's at population average levels, if it's not there.
  3. Practise considering other people's point of view.
  4. Do those three things consciously for long enough that you start doing them automatically.

At least, that's what worked for me when I was younger. Especially 1 actually, I think it helped with 3.

Comment author: gothgirl420666 14 July 2013 03:54:29AM *  31 points [-]

This was a big realization for me personally:

If you are trying to get someone to like you, you should strive to maintain a friendly, positive interaction with that person in which he or she feels comfortable and happy on a moment-by-moment basis. You should not try to directly alter that person's opinion of you, in the sense that if you are operating on a principle of "I will show this person that I am smart, and he will like me", "I will show this person I am cool, and she will like me," or even "I will show this person that I am nice, and he will like me", you are pursuing a strategy that can be ineffective and possibly lead people to see you as self-centered. This might be what people say when they mean "be yourself" or "don't worry about what other people think of you".

Also, Succeed Socially is a good resource.

Comment author: someonewrongonthenet 18 August 2013 01:59:27PM *  1 point [-]

Another tool to achieve likeability is to consistently project positive emotions and create the perception that you are happy and enjoying the interaction. The quickest way to make someone like you is to create the perception that you like them because they make you happy - this is of course much easier if you genuinely do enjoy social interactions.

he or she feels comfortable and happy on a moment-by-moment basis

It is very good advice to care about other people.

I'd like to add that I think it is common for the insecure to execute this strategy in the wrong way. "Showing off" is a failure mode, but "people pleasing" can be a failure mode as well - it's important that making others happy doesn't come off as a transaction in exchange for acceptance.

"Look how awesome I am and accept me" vs "Please accept me, I'll make you happy" vs "I accept you, you make me happy".

Comment author: Creutzer 18 July 2013 05:22:42AM 0 points [-]

This sounds immensely plausible. But it immediately prompts the more specific question: how on earth do you make people feel comfortable and happy on a moment-by-moment basis around you?

Especially if you're an introvert who lives in his own head rather a lot. Maybe the right question (for some) is: how do you get people to like you if, in a way, you are self-centered? It pretty much seems to mean that you're screwed.

Comment author: NancyLebovitz 20 July 2013 11:49:33AM 1 point [-]

This looks to me like a bunch of reasonable questions.

Comment author: Creutzer 20 July 2013 12:13:36PM 1 point [-]

I had written the comment before reading on and then retracted it because the how-question is discussed below.

Comment author: [deleted] 14 July 2013 11:04:42PM *  8 points [-]

Also, getting certain people to like you is way, way, way, way harder than getting certain other people to like you. And in many situations you get to choose whom to interact with.

Do what your comparative advantage is.

Comment author: CoffeeStain 14 July 2013 04:56:16AM 1 point [-]

Thank you, so very much.

I often forget that there are different ways to optimize, and the method that feels like it offers the most control is often the worst. And the one I usually take, unfortunately.

Comment author: drethelin 14 July 2013 03:05:05AM 4 points [-]

You can be self-centered and not act that way. If you even pretend to care about most people's lives they will care more about yours.

If you want to do this without being crazy bored and feeling terrible, I recommend figuring out which topics about other people's lives you actually enjoy listening to people talk about, and also working on being friends with people who do interesting things. In a college town, asking someone their major is quite often going to be enjoyable for them, and if you're interested and have some knowledge of a wide variety of fields you can easily find out interesting things.

Comment author: Craig_Heldreth 13 July 2013 09:54:49PM *  1 point [-]

Are there good reasons why, when I do a Google search on (Leary site:lesswrong.com), it comes up nearly empty? His ethos consisted of S.M.I².L.E., i.e. Space Migration + Intelligence Increase + Life Extension, which seems to me like it should be right up your alley. His books are not well-organized; his live presentations and tapes had some wide appeal.

Comment author: KrisC 17 July 2013 11:59:54PM 1 point [-]

Leary won me over with those goals. I have adopted them as my own.

It's the 8 circuits and the rest of the mysticism I reject. Some of it rings true, some of it seems sloppy, but I doubt any of it is useful for this audience.

Comment author: timtyler 14 July 2013 11:59:30AM -1 points [-]

Are there good reasons why when I do a google search on (Leary site:lesswrong.com) it comes up nearly empty?

Probably an attempt to avoid association with druggie disreputables.

Comment author: Qiaochu_Yuan 14 July 2013 12:14:16AM 5 points [-]

I am generally surprised when people say things like "I am surprised that topic X has not come up in forum / thread Y yet." The set of all possible things forum / thread Y could be talking about is extremely large. It is not in fact surprising that at least one such topic X exists.

Comment author: Manfred 13 July 2013 10:46:12PM 11 points [-]

Write up a discussion post with an overview of what you think we'd find novel :)

Comment author: [deleted] 13 July 2013 09:40:31PM 5 points [-]

I have decided to take small risks on a daily basis (for the danger/action feeling), but I have trouble finding specific examples. What are interesting small-scale risks to take? (give as many examples as possible)

Comment author: Error 15 July 2013 03:08:34PM *  2 points [-]

I actually have a book on exactly this subject: Absinthe and Flamethrowers. The author's aim is to show you ways to take real but controllable risks.

I can't vouch for its quality since I haven't read it yet, but it exists. And, y'know. Flamethrowers.

Comment author: Jayson_Virissimo 15 July 2013 05:52:08AM *  8 points [-]

Use a randomizer to choose someone in your address book and call them immediately (don't give yourself enough time to talk yourself out of it). It is a rush thinking about what to say as the phone is ringing. You are risking your social status (by coming off weird or awkward, in case you don't have anything sensible to say) without really harming anyone. On the plus side, you may make a new ally or rekindle an old relationship.

Comment author: mwengler 14 July 2013 02:31:38PM 3 points [-]

Going for the feeling without the actual downside? Play MMORPGs. Shoot zombies until they finally overwhelm you. Shoot cops in Vice City until the army comes after you. Jump out of helicopters.

I really liked therufs's suggestion list below. The downside, the thing you are risking in each of these, doesn't actually harm you; it makes you stronger.

Comment author: [deleted] 14 July 2013 05:20:52AM *  11 points [-]

Apparently some study found that the difference between people with bad luck and those with good luck is that people with good luck take lots of low-downside risks.

Can't help with specific suggestions, but thinking about it in terms of the decision-theory of why it's a good idea can help to guide your search. But you're doing it for the action-feeling...

Climb a tree.

Comment author: therufs 14 July 2013 04:25:45AM 12 points [-]
  • Talk to a stranger
  • Don't use a GPS
  • Try a new food/restaurant
  • If you usually drive, try getting somewhere on public transit
  • Sign up for a Coursera class (that's actually happening, so you have the option to be graded.) (Note: this will be a small risk on a daily basis for many consecutive days)
  • Go to a meetup at a library or game store
Comment author: [deleted] 22 July 2013 06:50:52PM 0 points [-]

If you usually drive, try getting somewhere on public transit

Ain't most forms of that less dangerous (per mile) than driving? (Then again, certain people have miscalibrated aliefs about that.)

Comment author: satt 22 July 2013 01:13:25PM 1 point [-]

Another transport one: if you regularly go to the same place, experiment with a different route each time.

Comment author: Qiaochu_Yuan 14 July 2013 12:17:16AM 2 points [-]

When you go out to eat with friends, randomly choose who pays for the meal. In the long run this only increases the variance of your money. I think it's fun.

Comment author: BrassLion 15 July 2013 03:40:32AM 7 points [-]

This is likely to increase the total bill, much like how splitting the check evenly instead of strictly paying for what you ordered increases the total bill.

Comment author: [deleted] 16 July 2013 12:03:51PM *  0 points [-]

splitting the check evenly instead of strictly paying for what you ordered increases the total bill

But it saves the time and the effort needed to compute each person's bill -- you just need one division rather than a shitload of additions.

Comment author: Larks 15 July 2013 09:28:02AM 2 points [-]

Assign the probabilities in proportion to each person's fraction of the overall bill. Incentives are aligned.
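Larks's scheme can be sketched in a few lines of Python (the names and amounts are made up):

```python
import random

def pick_payer(bills):
    """Pick who pays the whole check, with probability proportional to
    each diner's share of the bill.  In expectation everyone pays exactly
    what they ordered, so nobody gains by over-ordering."""
    names = list(bills)
    return random.choices(names, weights=[bills[n] for n in names], k=1)[0]

# Hypothetical dinner: Bob ordered three times as much as Alice, so he
# pays roughly three times out of four in the long run.
payer = pick_payer({"Alice": 10.0, "Bob": 30.0})
```

Only the variance changes; each diner's expected payment equals what they ordered, which is exactly why the incentives line up.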

Comment author: Qiaochu_Yuan 15 July 2013 05:31:36AM 2 points [-]

I haven't observed this happening among my friends. Maybe if you only go out to dinner with homo economicus...

Comment author: D_Malik 15 July 2013 10:21:04PM 3 points [-]

This is called the unscrupulous diner's dilemma, and experiments say that not only do people (strangers) respond to it like homo economicus, their utility functions seem to not even have terms for each other's welfare. Maybe you eat with people who are impression-optimizing (and mathy, so that they know the other person knows indulging is mean), and/or genuinely care about each other.