
LW events near the Singularity Summit?

5 AdeleneDawner 30 September 2011 08:54PM

I hadn't planned on going to the Summit, but it looks like I might be in NYC on the 15th for something unrelated. I don't have specific plans for what I'm going to do with my time yet, and I'm wondering if there might be some interesting events/meetups near the Summit that would be worth going to.

I could possibly arrange to stay overnight if there's something sufficiently interesting late on the 15th or during the day on the 16th, but as it stands I'll be in NYC from 8am to 8pm on the 15th, assuming I make the trip at all.

'Newcomblike' Video Game: Frozen Synapse

2 AdeleneDawner 29 September 2011 04:26AM

Disregarding for the moment the question of whether video games are a rational use of one's time:

Frozen Synapse is a turn-based strategy combat game that appears to be particularly interesting from a rationalist standpoint. I haven't played it, but according to the reviews, it's actually a combination of turn-based and real-time play. Each turn encompasses 5 seconds of real time, but those 5 seconds don't happen until both players have constructed their moves, which they may take as long as they like to do. Constructing a move involves giving commands to your own units and hypothetical commands to your opponent's, watching what happens when the units play out those commands, and repeating the process until you have a set of commands for your units that you consider optimal given what you predict your opponent will do. This happens on a procedurally generated battlefield; there are reports of the generator occasionally giving one player an insurmountable advantage, but the reviews seem to indicate that being able to play on a fresh field each time, and having to think about proper use of its layout on the fly, outweighs this issue.
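
In code-sketch form, the turn structure described above looks something like the following. This is a rough reconstruction from the reviews, not the game's actual code, and the interfaces here (plan(), step(), TICKS_PER_SECOND) are invented for illustration:

    # A sketch of the simultaneous-turn structure described above.
    TICKS_PER_SECOND = 60  # arbitrary; the game's real tick rate is unknown

    def play_turn(player_a, player_b, battlefield):
        # Each player plans in private, with no time limit: assign orders to
        # your units, simulate the 5-second outcome against guessed enemy
        # orders, and revise until satisfied. This is the 'Newcomblike' part:
        # your best plan depends on your prediction of your opponent's plan.
        plan_a = player_a.plan(battlefield)
        plan_b = player_b.plan(battlefield)

        # Only once both plans are locked in do the 5 seconds of real time
        # play out, with both plans executed simultaneously.
        for tick in range(5 * TICKS_PER_SECOND):
            battlefield.step(plan_a, plan_b, tick)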

Also, the game came to my attention because there's a Humble Bundle available for it now, which means that it can be acquired very nearly for free; just ignore the 'beat the average to get more games' hook.

[Conversation Log] Compartmentalization

5 AdeleneDawner 30 July 2011 12:51AM

(7:40:37 PM) handoflixue: Had an odd thought recently, and am trying to see if I understand the idea of compartmentalization.
(7:41:08 PM) handoflixue: I've always acted in a way where, if I'm playing WOW, I roleplay an elf. If I'm at church, I roleplay a unitarian. If I'm on LessWrong, I roleplay a rationalist.
(7:41:31 PM) handoflixue: And for the most part, these are three separate boxes. My elf is not a rationalist nor a unitarian, and I don't apply the Litany of Tarski to church.
(7:41:49 PM) handoflixue: And I realized I'm *assuming* this is what people mean by compartmentalizing.
(7:42:11 PM) handoflixue: But I also had some *really* interesting assumptions about what people meant by religion and spiritual and such, so it's probably smart to step back and check ^^
(7:43:45 PM) Adelene: I'm actually not sure what's usually meant by the concept (which I don't actually use), but that's not the guess I came up with when you first asked, and I think mine works a little better.
(7:44:50 PM) handoflixue: Then I am glad I asked! :)
(7:45:24 PM) Adelene: My guess is something along the lines of this: Compartmentalizing is when one has several models of how the world works, which predict different things about the same situations, and uses arbitrary, social, or emotional methods rather than logical methods to decide which model to use where.
(7:46:54 PM) handoflixue: Ahhhh
(7:47:05 PM) handoflixue: So it's not having different models, it's being alogical about choosing a method/
(7:47:08 PM) handoflixue: ?
(7:47:14 PM) Adelene: That's my guess, yes.
(7:47:37 PM) Adelene: I do think that it's specifically not just about having different behavioral habits in different situations.
(7:48:00 PM) Adelene: (Which is what I think you mean by 'roleplay as'.)
(7:49:21 PM) handoflixue: It's not *exactly* different situations, though. That's just a convenient reference point, and the process that usually develops new modes. I can be an elf on LessWrong, or a rationalist WOW player, too.
(7:49:53 PM) Adelene: Also, with regards to the models model, some models don't seem to be reliable at all from a logical standpoint, so it's fairly safe to assume that someone who uses such a model in any situation is compartmentalizing.
(7:50:34 PM) handoflixue: But the goddess really does talk to me during rites >.>;
(7:51:16 PM) Adelene: ...okay, maybe that's not the best wording of that concept.
(7:51:33 PM) handoflixue: It's a concept I tend to have trouble with, too, I'll admit
(7:51:36 PM) handoflixue: I... mmm.
(7:51:56 PM) handoflixue: Eh :)
(7:52:18 PM) Adelene: I'm trying to get at a more 'mainstream christianity model' type thing, with that - most Christians I've known don't actually expect any kind of feedback at all from God.
(7:53:00 PM) Adelene: Whereas your model at least seems to make some useful predictions about your mindstates in response to certain stimuli.
(7:53:20 PM) handoflixue: .. but that would be stupid >.>
(7:53:26 PM) Adelene: eh?
(7:53:50 PM) handoflixue: If they don't ... get anything out of it, that would be stupid to do it o.o
(7:54:11 PM) Adelene: Oh, Christians? They get social stuff out of it.
(7:54:35 PM) handoflixue: *nods* So... it's beneficial.
(7:54:46 PM) Adelene: But still compartment-ey.
(7:55:10 PM) Adelene: I listed 'social' in the reasons one might use an illogical model on purpose. :)
(7:55:25 PM) handoflixue: Hmmmm.
(7:56:05 PM) handoflixue: I wish I knew actual Christians I could ask about this ^^;
(7:56:22 PM) Adelene: They're not hard to find, I hear. ^.-
(7:56:27 PM) handoflixue: ... huh
(7:56:42 PM) handoflixue: Good point.
(7:57:12 PM) Adelene: Possibly of interest: I worked in a Roman Catholic nursing home - with actual nuns! - for four years.
(7:57:25 PM) handoflixue: Ooh, that is useful :)
(7:57:38 PM) handoflixue: I'd rather bug someone who doesn't seem to object to my true motives :)
(7:58:00 PM) Adelene: Not that I talked to the nuns much, but there were some definite opportunities for information-gathering.
(7:58:27 PM) handoflixue: Mostly, mmm...
(7:58:34 PM) handoflixue: http://lesswrong.com/lw/1mh/that_magical_click/ Have you read this article?
(7:58:52 PM) Adelene: Not recently, but I remember the gist of it.
(7:59:05 PM) handoflixue: I'm trying to understand the idea of a mind that doesn't click, and I'm trying to understand the idea of how compartmentalizing would somehow *block* that.
(7:59:15 PM) handoflixue: I dunno, the way normal people think baffles me
(7:59:28 PM) Adelene: *nodnods*
(7:59:30 PM) handoflixue: I assumed everyone was playing a really weird game until, um, a few months ago >.>
(7:59:58 PM) Adelene: heh
(8:00:29 PM) Adelene: *ponders not-clicking and compartmentalization*
(8:00:54 PM) handoflixue: It's sort of... all the models I have of people make sense.
(8:00:58 PM) handoflixue: They have to make sense.
(8:01:22 PM) handoflixue: I can understand "Person A is Christian because it benefits them, and the cost of transitioning to a different state is unaffordably high, even if being Atheist would be a net gain"
(8:01:49 PM) Adelene: That's seriously a simplification.
(8:02:00 PM) handoflixue: I'm sure it is ^^
(8:02:47 PM) handoflixue: But that's a model I can understand, because it makes sense. And I can flesh it out in complex ways, such as adding the social penalty that goes in to thinking about defecting, and the ick-field around defecting, and such. But it still models out about that way.
(8:02:58 PM) Adelene: Relevantly, they don't know what the cost of transition actually would be, and they don't know what the benefit would be.
(8:04:51 PM) handoflixue: Mmmm... really?
(8:05:03 PM) handoflixue: I think most people can at least roughly approximate the cost-of-transition
(8:05:19 PM) handoflixue: ("Oh, but I'd lose all my friends! I wouldn't know WHAT to believe anymore")
(8:05:20 PM) Adelene: And also I think most people know on some level that making a transition like that is not really voluntary in any sense once one starts considering it - it happens on a pre-conscious level, and it either does or doesn't without the conscious mind having much say in it (though it can try to deny that the change has happened). So they avoid thinking about it at all unless they have a really good reason to.
(8:05:57 PM) handoflixue: There may be ways for them to mitigate that cost, that they're unaware of ("make friends with an atheist programmers group", "read the metaethics sequence"), but ... that's just ignorance and that makes sense ^^
(8:06:21 PM) Adelene: And what would the cost of those cost-mitigation things be?
(8:07:02 PM) handoflixue: Varies based on whether the person already knows an atheist programmers group I suppose? ^^
(8:07:26 PM) Adelene: Yep. And most people don't, and don't know what it would cost to find and join one.
(8:07:40 PM) handoflixue: The point was more "They can't escape because of the cost, and while there are ways to buy-down that cost, people are usually ignor...
(8:07:41 PM) handoflixue: Ahhhh
(8:07:42 PM) handoflixue: Okay
(8:07:44 PM) handoflixue: Gotcha
(8:07:49 PM) handoflixue: Usually ignorant because *they aren't looking*
(8:08:01 PM) handoflixue: They're not laying down escape routes
(8:08:24 PM) Adelene: And why would they, when they're not planning on escaping?
(8:09:28 PM) handoflixue: Because it's just rational to seek to optimize your life, and you'd have to be stupid to think you're living an optimum life?
(8:10:13 PM) Adelene: uhhhh.... no, most people don't think like that, basically at all.
(8:10:30 PM) handoflixue: Yeah, I know. I just don't quite understand why not >.>
(8:10:54 PM) handoflixue: *ponders*
(8:11:02 PM) handoflixue: So compartmentalization is sorta... not thinking about things?
(8:11:18 PM) Adelene: That's at least a major symptom, yeah.
(8:11:37 PM) handoflixue: Compartmentalization is when model A is never used in situation X
(8:12:17 PM) handoflixue: And, often, when model A is only used in situation Y
(8:12:22 PM) Adelene: And not because model A is specifically designed for situations of type Y, yes.
(8:12:39 PM) handoflixue: I'd rephrase that to "and not because model A is useless for X"
(8:13:06 PM) Adelene: mmm...
(8:13:08 PM) handoflixue: Quantum physics isn't designed as an argument for cryonics, but Eliezer uses it that way.
(8:13:14 PM) Adelene: hold on a sec.
(8:13:16 PM) handoflixue: Kay
(8:16:01 PM) Adelene: The Christian model claims to be useful in lots of situations where it's observably not. For example, a given person's Christian model might say that if they pray, they'll have a miraculous recovery from a disease. Their mainstream-society-memes model, on the other hand, says that going to see a doctor and getting treatment is the way to go. The Christian model is *observably* basically useless in that situation, but I'd still call that compartmentalization if they went with the mainstream-society-memes model but still claimed to primarily follow the Christian one.
(8:16:46 PM) handoflixue: Hmmm, interesting.
(8:16:51 PM) handoflixue: I always just called that "lying" >.>
(8:17:05 PM) handoflixue: (At least, if I'm understanding you right: They do X, claim it's for Y reason, and it's very obviously for Z)
(8:17:27 PM) handoflixue: (Lying-to-self quite possibly, but I still call that lying)
(8:18:00 PM) Adelene: No, no - in my narrative, they never claim that going to a doctor is the Christian thing to do - they just never bring Christianity up in that context.
(8:19:15 PM) handoflixue: Ahhh
(8:19:24 PM) handoflixue: So they're being Selectively Christian?
(8:19:27 PM) Adelene: Yup.
(8:19:37 PM) handoflixue: But I play an elf, and an elf doesn't invest in cryonics.
(8:20:09 PM) handoflixue: So it seems like that's just... having two *different* modes.
(8:20:40 PM) Adelene: I don't think that's intrinsically a problem. The question is how you pick between them.
(8:22:08 PM) handoflixue: Our example Christian seems to be picking sensibly, though.
(8:22:11 PM) Adelene: In the contexts that you consider 'elfy', cryonics might actually not make sense. Or it might be replaced by something else - I bet your elf would snap up an amulet of ha-ha-you-can't-kill-me, fr'ex.
(8:22:26 PM) handoflixue: Heeeh :)
(8:28:51 PM) Adelene: About the Christian example - yes, in that particular case they chose the model for logical reasons - the mainstream model is the logical one because it works, at least reasonably well. It's implied that the person will use the Christian model at least sometimes, though. Say for example they wind up making poor financial decisions because 'God will provide', or something.
(8:29:48 PM) handoflixue: Heh ^^;
(8:29:55 PM) handoflixue: Okay, yeah, that one I'm guilty of >.>
(8:30:05 PM) handoflixue: (In my defense, it keeps *working*)
(8:30:10 PM) Adelene: (I appear to be out of my depth, now. Like I said, this isn't a concept I use. I haven't thought about it much.)
(8:30:22 PM) handoflixue: It's been helpful to define a model for me.
(8:30:33 PM) Adelene: ^^
(8:30:50 PM) handoflixue: The idea that the mistake is not having separate models, but in the application or lack thereof.
(8:31:07 PM) handoflixue: Sort of like how I don't use quantum mechanics to do my taxes.
(8:31:14 PM) handoflixue: Useful model, wrong situation, not compartmentalization.
(8:31:28 PM) Adelene: *nods*
(8:32:09 PM) handoflixue: So, hmmmm.
(8:32:18 PM) handoflixue: One thing I've noticed in life is that having multiple models is useful
(8:32:32 PM) handoflixue: And one thing I've noticed with a lot of "rationalists" is that they seem not to follow that principle.
(8:33:15 PM) handoflixue: Does that make sense
(8:33:24 PM) Adelene: *nods*
(8:34:13 PM) Adelene: That actually feels related.
(8:35:03 PM) Adelene: People want to think they know how things work, so when they find a tool that's reasonably useful they tend to put more faith in it than it deserves.
(8:35:39 PM) Adelene: Getting burned a couple times seems to break that habit, but sufficiently smart people can avoid that lesson for a surprisingly long time.
(8:35:55 PM) Adelene: Well, sufficiently smart, sufficiently privileged people.
(8:37:15 PM) handoflixue: Heeeh, *nods*
(8:37:18 PM) handoflixue: I seem to ... I dunno
(8:37:24 PM) handoflixue: I grew up on the multi-model mindset.
(8:37:41 PM) handoflixue: It's... a very odd sort of difficult to try and comprehend that other people didn't...
(8:37:47 PM) Adelene: *nods*
(8:38:47 PM) Adelene: A lot of people just avoid things where their preferred model doesn't work altogether. I don't think many LWers are badly guilty of that, but I do suspect that most LWers were raised by people who are.
(8:39:16 PM) handoflixue: Mmmmm...
(8:39:38 PM) handoflixue: I tend to get the feeling that the community-consensus has trouble understanding "but this model genuinely WORKS for a person in this situation"
(8:39:58 PM) handoflixue: With some degree of... just not understanding that ideas are resources too, and they're rather privileged there and in other ways.
(8:40:16 PM) Adelene: That is an interesting way of putting it and I like it.
(8:40:31 PM) handoflixue: Yaaay :)
(8:40:40 PM) Adelene: ^.^
(8:41:01 PM) Adelene: Hmm
(8:41:18 PM) Adelene: It occurs to me that compartmentalization might in a sense be a social form of one-boxing.
(8:41:41 PM) handoflixue: Heh! Go on :)
(8:42:01 PM) Adelene: "For signaling reasons, I follow model X in situation-class Y, even when the results are sub-optimal."
(8:42:59 PM) handoflixue: Hmmmm.
(8:43:36 PM) handoflixue: Going back to previous, though, I think compartmentalization requires some degree of not being *aware* that you're doing it.
(8:43:47 PM) Adelene: Humans are good at that.
(8:43:48 PM) handoflixue: So... what you said, exactly, but on a subconscious level
(8:43:53 PM) Adelene: *nodnods*
(8:44:00 PM) Adelene: I meant subconsciously.

For meetup groups: Restaurant gift card coupons

3 AdeleneDawner 06 July 2011 05:35PM

From here:

Restaurant.com cuts 70% off any gift certificate via coupon code "SHARE". This coupon drops most $10 dining certificates to $1.50 and $25 gift certificates to $3. Restaurant.com's gift certificates are redeemable at local restaurants across the United States. Some gift certificates have restrictions, like dinner-only or a $15 minimum. (Each restaurant lists its individual restrictions.) Coupon expires July 7.

If the consensus is that this is too much like spam, I'll remove it, but it seemed like it'd be of interest to meetup groups who meet at restaurants.

If we're in a sim...

3 AdeleneDawner 05 July 2011 09:22PM

My father, who is home recovering from surgery, emailed the following web page to me and a few other members of my family, and expressed interest in reading interesting responses.

If We Are In A Computer Simulation

Is the universe just a big computer simulation running in another universe? Suppose it is. Then I've got some questions:

  • Are intelligent entities modeled as objects? In other words, are they instantiated and managed as distinct tracked pieces in the simulation? Or is the universe just modeled as huge numbers of subatomic particles and energy?
  • If intelligences aren't modeled as objects are they at least tracked or studied by whoever is running the sim? Or is the evolution of intelligence in the sim just seen as an uninteresting side effect and irrelevant to the sim's purposes?
  • Is The Big Bang the moment the sim started running? Or did it start long before that point? Or was it started at a much later point, with data loaded in to make it look like it started earlier?
  • Do the creators of the sim intervene as it runs? Do they patch it?
  • What is the purpose of the sim? Entertainment? Political decision-making? Scientific research?
  • Were the physical laws of the universe designed to reduce the computational cost of the sim? If so, what aspects of the physical laws were designed to make computation cheaper?

Imagine the purpose of the sim is entertainment or decision-making. Either way, it could be that out-of-universe sentient beings actually enter this universe and interact with some of its intelligent entities: interacting with simulated people for fun, or interacting in order to try out different experiments in political development. In the latter case I would expect more rerunning of the same sim, backed up to restart at the same point but with some alteration of what some people say or do.

So what's your (simulated) gut feeling? Are you in a sim? If so, what's it for?

Any thoughts?

A discussion of an application of Bayes' theorem to everyday life

9 AdeleneDawner 29 May 2011 06:04AM

[12:49:29 AM] Conversational Partner: actually, even if the praise is honest, it makes me uncomfortable if it seems excessive.  that is, repeated too often, or made a big deal about.

[12:49:58 AM] Adelene Dawner: 'Seems excessive' can actually be a cue for 'is insincere'.

[12:50:05 AM] Conversational Partner: oh

[12:50:25 AM] Adelene Dawner: That kind of praise tends to parse to me as someone trying to push my buttons.

[12:51:53 AM | Edited 12:52:09 AM] Conversational Partner: is it at least theoretically possible that the praise is honest, and the other person just happens to think that the thing is more praiseworthy than I do?  or that the other person has a different opinion than I do about how much praise is appropriate in general?

[12:52:59 AM] Adelene Dawner: Of course.

[12:53:13 AM] Adelene Dawner: This is a situation where looking at Bayes' theorem is useful.

continue reading »
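
(The calculation the excerpt gestures at might run roughly as follows. This is a sketch with made-up numbers, not the actual continuation of the post:)

    # Bayes' theorem applied to 'excessive praise': how much should observing
    # praise that seems excessive raise the probability that it's insincere?
    # All numbers below are invented for illustration.

    p_insincere = 0.3                  # prior: fraction of praise that is insincere
    p_excessive_given_insincere = 0.8  # insincere praise tends to be laid on thick
    p_excessive_given_sincere = 0.2    # sincere praise sometimes seems excessive too

    # Law of total probability: P(seems excessive)
    p_excessive = (p_excessive_given_insincere * p_insincere
                   + p_excessive_given_sincere * (1 - p_insincere))

    # Bayes' theorem: P(insincere | seems excessive)
    posterior = p_excessive_given_insincere * p_insincere / p_excessive
    print(round(posterior, 2))  # 0.63: with these priors, 'seems excessive' is real evidence of insincerity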

Advice request: Homeownership

4 AdeleneDawner 27 May 2011 11:44AM

So I'm probably about two months from owning a home. (Realtor says we might close within a month; experienced friend says 3-6 weeks; I'll be vaguely surprised if it's not done by August.)

This is exciting, and also more than a little daunting. My near-mode brainbits don't know quite what to make of it; this is my first time owning anything of anywhere near this scope (I don't drive, so I've never owned a car), and also my first time taking on any large amount of debt. It's pretty obviously a good thing overall - my mortgage payment should be not much more than half of what I've been paying for my apartment, and I'll be in an area that's better by several relevant measurements, and I'll have more space and more freedom - but it's still a rather large change.

So, on behalf of those near-mode brainbits, which are mostly going aaaaaaaaah what have you gotten us into, I'd like to request any advice that you might have for a soon-to-be new homeowner.

(More information is available, but I'm not even sure what's important enough about the situation to mention.)

Rationalist horoscopes: A low-hanging utility generator.

64 AdeleneDawner 22 May 2011 09:37AM

The other day, I had an idea. It occurred to me that daily horoscopes - the traditional kind - might not be as useless as they seem at first glance: They usually give, or at least hint at, suggestions for specific things to do on a given day, which can be a useful cue, allowing the user to put less effort into finding something useful to do with their time. They can also act as a reminder of important concepts, rather like spaced repetition, and have the possibility of serendipitously giving the perfect advice in a situation where the user would otherwise not have thought to apply a particular concept.

This seems like something that many people here would find useful, if the horoscopes weren't so vague, and if they were better calibrated to make useful suggestions. So, after getting some feedback, and with the help of PeerInfinity (who did most of the coding and is currently hosting the program), I put together a tool to provide us with a daily 'horoscope', chosen from a list provided by us and weighted toward advice that has been reported to work. The horoscopes are displayed here, with an RSS feed available here. Lists of the horoscopes in the program's database can be found here, with various sorting options.

One of the features of this program is that the chance of a given horoscope being displayed is affected by how well it has worked in the past. Every day, there is an option to vote on the previous day's horoscope, rating it as 'harmful', 'useless', 'sort of useful', 'useful', or 'awesome'. The 'harmful' and 'useless' options give the horoscope -15 and -1 points respectively, while the other three give it 1, 3, or 10 points. If a horoscope's score becomes negative, it is removed from the pool of active horoscopes; otherwise, its chance of being chosen is based on the average value of the votes it has received compared to the other horoscopes, disregarding recently-used ones.
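
For concreteness, the selection logic described above might look roughly like this. This is a minimal sketch, not PeerInfinity's actual code, and details such as the default weight for unvoted horoscopes and the small positive floor are assumptions:

    import random

    # Point values for the five vote options, as described above.
    VOTE_POINTS = {'harmful': -15, 'useless': -1,
                   'sort of useful': 1, 'useful': 3, 'awesome': 10}

    def pick_horoscope(horoscopes, recently_used):
        # `horoscopes` maps each horoscope's text to the list of votes
        # ('harmful', 'useful', ...) it has received so far.
        pool = {}
        for text, votes in horoscopes.items():
            points = [VOTE_POINTS[v] for v in votes]
            # Negative total score removes a horoscope from the active pool;
            # recently-used ones are skipped for today's draw.
            if text in recently_used or sum(points) < 0:
                continue
            # Weight by average vote value; unvoted horoscopes get a neutral
            # default weight (an assumption, not stated in the post).
            avg = sum(points) / len(points) if points else 1.0
            pool[text] = max(avg, 0.1)
        texts = list(pool)
        return random.choices(texts, weights=[pool[t] for t in texts])[0]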

There is still a need for good horoscopes to be added to the database. Horoscopes should offer a specific suggestion for something to do that will take less than an hour of sustained effort (all-day mindfulness-type exercises or 'be on the lookout for X' are fine) and that can be accomplished on the same day that the horoscope is read. Horoscopes should not make actual predictions, but may make prediction-like statements that are likely to be true on any given day, like "you will talk to a friend today". Horoscopes can be submitted here, or left in the comments. EDIT: Any comment anywhere on the site that contains the phrase "Horoscope version:" or "Horoscope:" should now automatically be emailed to me, so feel free to horoscope-ify new posts in their comments, unless this comes to be considered spam.

Rationalist Horoscopes: Low-hanging utility generator?

26 AdeleneDawner 18 May 2011 09:52PM

(5:29:23 PM) Adelene: ...horoscopes are for people who, like [mutual friend], prefer to have an authority tell them what to do. *blink*
(5:30:18 PM) Alicorn: *blink*
(5:30:50 PM) Adelene: This is an observation that my brain made just now, but it seems to make a fair bit of sense.
(5:31:18 PM) Alicorn: Plausible.
(5:32:21 PM) Adelene: Especially given that horoscopes seem not to actually make predictions about the future: They say 'X is a good thing to do today', not 'X will happen today'.
(5:32:36 PM) Alicorn: *nod*
(5:32:53 PM) Adelene: ...rationalist horoscopes?
(5:33:07 PM) Alicorn: like what?
(5:33:33 PM) Adelene: "Focus on granularizing your goals this week."
(5:33:43 PM) Alicorn: hmm
(5:34:09 PM) Alicorn: divided according to some mechanism like star sign, or no?
(5:34:38 PM) Adelene: The only advantage I see to that is that it may make it more emotionally plausible.
(5:35:06 PM) Adelene: There may be some other advantage to having different people doing different things at any given time tho.
(5:35:23 PM) Alicorn: According to [other friend], birth *season* has empirically interesting effects in a few areas...
(5:36:35 PM) Adelene: I don't think we can cash that out very well into advice, and anyway I expect that having that close of a similarity with actual horoscopes is likely to provoke a memetic immune response for most people. Could do it based on some kind of personality test tho.
(5:36:50 PM) Alicorn: *nod*
(5:39:08 PM) Adelene: Really, I think the bulk of the utility of such a thing would be in giving people generally-useful cues to work from - having any given day's horoscope (or whatever we'd call it) be randomly picked from a set of good ones that haven't been used recently should be just fine.
(5:39:15 PM) Alicorn: *nod*
(5:40:00 PM) Adelene: Maybe pair it with a rationality quote of the day, too.
(5:40:10 PM) Alicorn: Yessss.

I know it's a silly idea, but it seems like it might be useful. I've played with random quote dispensers in the past, and if they have a good list of quotes to start with they can be surprisingly useful, in my experience - the quote might be useless 9 times out of 10, but that tenth time, when it makes you realize that a connection exists that you never would have noticed otherwise, is pretty awesome. Something like a daily horoscope might have a similar effect, but in a more practical way, getting people to consider taking actions in contexts where they wouldn't usually consider those actions and occasionally finding an unusually good, but not intuitively obvious, match. And that's on top of any benefits that such a system would have for people who do work better when they're told what to do.

 

Thoughts?

Main site karma requirement for posting broken?

6 AdeleneDawner 09 May 2011 04:28PM

I just noticed that calcsam, who just posted two top posts in the main section of the site, only has the 100 karma that he has so far gained from those posts.

I don't object to those posts being there, but how did he do that?

Edit: Question answered; Eliezer mucked around with the karma system to make this possible in this specific case.
