
Open Thread: December 2009

3 Post author: CannibalSmith 01 December 2009 04:25PM

ITT we talk about whatever.

Comments (263)

Comment author: CannibalSmith 01 December 2009 05:04:53PM 4 points [-]

I intend to participate in the StarCraft AI Competition. I figured there are lots of AI buffs here that could toss some pieces of wisdom at me. Shower me with links you deem relevant and recommend books to read.

Generally, what approaches should I explore and what dead ends should I avoid? Essentially, tell me how to discard large portions of potential-StarCraft-AI thingspace quickly.

Specifically, the two hardest problems that I see are:
1. Writing an AI that can learn how to move units efficiently on its own, either by playing against itself or by searching the game tree (a minimal sketch of the latter follows this list). And I'm not just looking for what the best StarCraft players do - I'm searching for the optimum.
2. The exact rules of the game are not known. By exact I mean Laplace's Demon exact. It would take me way too long to discover them through experimentation and disassembly of the StarCraft executable. So, I either have to somehow automate this discovery or base my AI on a technique that doesn't need that.
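
For concreteness, by "searching the game tree" I mean something like the depth-limited minimax sketch below; successors and evaluate are hypothetical stand-ins for game-specific code, and successors is assumed to return a list of child states.

    def minimax(state, depth, maximizing, successors, evaluate):
        # Depth-limited minimax over an abstract game tree. `successors`
        # returns a list of child states; `evaluate` scores a state from
        # the maximizing player's point of view.
        children = successors(state)
        if depth == 0 or not children:
            return evaluate(state)
        if maximizing:
            return max(minimax(c, depth - 1, False, successors, evaluate)
                       for c in children)
        return min(minimax(c, depth - 1, True, successors, evaluate)
                   for c in children)

Even at tiny depths, the branching factor of an RTS (every unit can act every frame) makes this blow up far faster than in chess, which is part of what I'm asking about.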

Comment author: rwallace 01 December 2009 05:44:17PM 3 points [-]

Great! That competition looks like a lot of fun, and I wish you the best of luck with it.

As for advice, perhaps the best I can give you is to explain the characteristics the winning program will have.

It will make no, or minimal, use of game tree search. It will make no, or minimal, use of machine learning (at best it will do something like tuning a handful of scalar parameters with a support vector machine). It will use pathfinding, but not full pathfinding; corners will be cut to save CPU time. It will not know the rules of the game. Its programmer will probably not know the exact rules either, just an approximation discovered by trial and error. In short, it will not contain very much AI.
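
To give a feel for what "tuning a handful of scalar parameters" amounts to, here is a minimal sketch (my illustration only - plain hill climbing substituted for the support vector machine; win_rate is a hypothetical evaluator that plays some games with the given parameters and returns the fraction won):

    import random

    def tune(params, win_rate, rounds=100, step=0.1):
        # Greedy local search over a few scalar parameters. `win_rate` is a
        # hypothetical evaluator mapping a parameter dict to the fraction
        # of games won against some benchmark opponent.
        best, best_score = dict(params), win_rate(params)
        for _ in range(rounds):
            candidate = {k: v + random.uniform(-step, step)
                         for k, v in best.items()}
            score = win_rate(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score

That is about the level of machine learning I would expect in the winner: a slow outer loop around whole games, not learning inside the game.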

One reason for this is that it will not be running on a supercomputer, or even on serious commercial hardware; it will have to run in real time on a dinky beige box PC with no more than a handful of CPU cores and a few gigabytes of RAM. Even more importantly, only a year of calendar time is allowed. That is barely enough time for nontrivial development. It is not really enough time for nontrivial research, let alone research and development.

In short, you have to decide whether your priority is Starcraft or AI. I think it should be the latter, because that's what has actual value at the end of the day, but it's a choice you have to make. You just need to understand that the reward from the latter choice will be in long-term utility, not in winning this competition.

Comment author: CannibalSmith 01 December 2009 06:09:03PM *  3 points [-]

That's disheartening, but do give more evidence. To counter: participants of DARPA's Grand Challenge had just a year too, and their task was a notch harder. And they did use machine learning and other fun stuff.

Also, I think a modern gaming PC packs a hell of a punch. Especially with the new graphics cards that can run arbitrary code. But good catch - I'll inquire about the specs of the machines the competition will be held on.

Comment author: rwallace 01 December 2009 07:20:37PM 4 points [-]

The Grand Challenge teams didn't go from zero to victory in one year. They also weren't one-man efforts.

That having been said, and this is a reply to RobinZ also, for more specifics you really want to talk to someone who has written a real-time strategy game AI, or at least worked in the games industry. I recommend doing a search for articles or blog posts written by people with such experience. I also recommend getting hold of some existing game AI code to look at. (You won't be copying the code; it's just to get a feel for how things are done.) Not chess or Go - those use completely different techniques. Real-time strategy games would be ideal, but failing that, first-person shooters or turn-based strategy games - I know there are several of the latter at least available as open source.

Oh, and Johnicholas gives good advice, it's worth following.

Comment author: CannibalSmith 01 December 2009 08:57:59PM *  1 point [-]

The Grand Challenge teams didn't go from zero to victory in one year.

Stanford's team did.

They also weren't one-man efforts.

Neither is mine.

I do not believe I can learn much from existing RTS AIs because their goal is entertaining the player instead of winning. In fact, I've never met an AI that I can't beat after a few days of practice. They're all the same: build a base and repeatedly throw groups of units at the enemy's defensive line until they run out of resources, mindlessly following the same predictable route each time. This is true for all of the Command & Conquer series, all of the Age of Empires series, all of the Warcraft series, and StarCraft too. And those are the best RTS games in the world, with the biggest budgets and development teams.

But I will search around.

Comment author: DanArmak 01 December 2009 09:29:49PM 1 point [-]

Was these games' development objective to make the best AI they could, one that would win in all scenarios? I doubt that would be the most fun for human players to play against. Maybe humans wanted a predictable opponent.

Comment author: ChrisPine 02 December 2009 11:40:22AM 2 points [-]

They want a fun opponent.

In games with many players (where alliances are allowed), you could make the AIs more likely to ally with each other and to gang up on the human player. This could make an 8-player game nearly impossible. But the goal is not to beat the human. The goal is for the AI to feel real (human), and be fun.

As you point out, the goal in this contest is very different.

Comment author: rwallace 01 December 2009 10:36:53PM *  0 points [-]

Stanford's team did.

Ah, I had assumed they must have been working on the problem before the first one, but their webpage confirms your statement here. I stand corrected!

Neither is mine.

Good, that will help.

I do not believe I can learn much from existing RTS AIs because their goal is entertaining the player instead of winning. In fact, I've never met an AI that I can't beat after a few days of practice. They're all the same: build a base and repeatedly throw groups of units at the enemy's defensive line until run out of resources, mindlessly following the same predictable route each time.

Yeah. Personally I never found that very entertaining :-) If you can write one that does better, maybe the industry might sit up and take notice. Best of luck with the project, and let us know how it turns out.

Comment author: SilasBarta 01 December 2009 11:38:28PM 2 points [-]

Please fix this post's formatting. I beg you.

Comment author: rwallace 01 December 2009 11:50:46PM 0 points [-]

What's the recommended way to format quoted fragments on this site to distinguish them from one's own text? I tried copy pasting CannibalSmith's comment, but that copied as indentation with four spaces, which when I used it, gave a different result.

Comment author: Jayson_Virissimo 01 December 2009 11:55:36PM 2 points [-]

Click on the reply button and then click the help link in the bottom right corner. It explains how to properly format your comments.

Comment author: rwallace 02 December 2009 08:39:41AM 0 points [-]

Okay, thanks, fixed.

Comment author: RobinZ 01 December 2009 06:17:17PM 2 points [-]

Strictly speaking, this reads a lot like advice to sell nonapples. I'll grant you that it's probably mostly true, but more specific advice might be helpful.

Comment author: Johnicholas 01 December 2009 06:47:11PM *  10 points [-]

I have some advice.

  1. Pay attention to your edit/compile/test cycle time. Efforts to get this shorter pay off both in more iterations and in your personal motivation (interacting with a more-responsive system is more rewarding). Definitely try to get it under a minute.

  2. A good dataset is incredibly valuable. When starting to attack a problem - both the whole thing, and subproblems that will arise - build a dataset first. This would be necessary if you are doing any machine learning, but it is still incredibly helpful even if you personally are doing the learning.

  3. Succeed "instantaneously" - and don't break it. Make getting to "victory" - a complete entry - your first priority and aim to be done with it in a day or a week. Often, there's a temptation to do a lot of "foundational" work before getting something complete and working, or a "big refactoring" that will break lots of things for a while. Do something (continuous integration or nightly build-and-test) to make sure that you're not breaking it; a minimal sketch of such a check follows this list.
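
To illustrate point 3, the continuous check can be as small as a script that rebuilds your entry and plays one quick scripted match, failing loudly if either step breaks. A minimal sketch - the build and match commands are placeholders, not any real tooling:

    import subprocess
    import sys

    # Nightly smoke test: rebuild, then play one quick scripted match.
    # Both commands below are placeholders for your actual build/run steps.
    for cmd in (["make", "bot"], ["./run_match", "--quick"]):
        if subprocess.call(cmd) != 0:
            sys.exit("BROKEN: " + " ".join(cmd))
    print("entry still builds and completes a match")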

Comment author: ShardPhoenix 04 December 2009 08:58:25AM 0 points [-]

There's some discussion and early examples here: http://www.teamliquid.net/forum/viewmessage.php?topic_id=105570

You might also look at some of the custom AIs for Total Annihilation and/or Supreme Commander, which are reputed to be quite good.

Ultimately, though, the winner will probably be someone who knows StarCraft well enough to thoroughly script a bot, rather than someone using more advanced AI techniques. It might be easier to use proper AI in the restricted tournaments.

Comment author: Alicorn 01 December 2009 05:55:11PM 6 points [-]

I wrote a short story with something of a transhumanism theme. People can read it here. Actionable feedback welcome; it's still subject to revision.

Note: The protagonist's name is "Key". Key, and one other character, receive Spivak pronouns, which can make either Key's name or eir pronouns look like some kind of typo or formatting error if you don't know it's coming. If this annoys enough people, I may change Key's name or switch to a different genderless pronoun system. I'm curious if anyone finds that they think of Key and the other Spivak character as having a particular gender in the story; I tried to write them neither, but may have failed (I made errors in the pronouns in the first draft, and they all went in one direction).

Comment author: DanArmak 01 December 2009 06:17:34PM 0 points [-]

The main feeling I came away with is... so what? It didn't convey any ideas or viewpoints that were new to me; it didn't have any surprising twists or revelations that informed earlier happenings. What is the target audience?

The Spivak pronouns are nice; even though I don't remember encountering them before I feel I could get used to them easily in writing, so (I hope) a transition to general use isn't impossible.

I'm curious if anyone finds that they think of Key and the other Spivak character as having a particular gender in the story

The general feeling I got from Key is female. I honestly don't know why that is. Possibly because the only other use of Key as a personal name that comes to mind is a female child? Objectively, the society depicted is different enough from any contemporary human society to make male vs. female differences (among children) seem small in comparison.

Comment author: Alicorn 01 December 2009 07:21:56PM *  0 points [-]

Target audience - beats me, really. It's kind of set up to preach to the choir, in terms of the "moral". I wrote it because I was pretty sure I could finish it (and I did), and I sorely need to learn to finish stories; I shared it because I compulsively share anything I think is remotely decent.

The general feeling I got from Key is female. I honestly don't know why that is.

Hypotheses: I myself am female. Lace, the only gendered character with a speaking role, is female. Key bakes cupcakes at one point in the story and a stereotype is at work. (I had never heard of Key the Metal Idol.)

Comment author: DanArmak 01 December 2009 07:50:49PM 0 points [-]

Could be. I honestly don't know. I didn't even consciously remember Key baking cupcakes by the time the story ended and I asked myself what might have influenced me.

I also had the feeling that the story wasn't really about Key; ey just serves as an expository device. Ey has no unpredictable or even unusual reactions to anything that would establish individuality. The setting should then draw the most interest, and it didn't do that enough, because it was too vague. What is the government? How does it decide and enforce allowed research, and allowed self-modification? How does sex-choosing work? What is the society like? Is Key forced at a certain age to be in some regime, like our schools? If not, are there any limits on what Key or her parents do with her life?

As it is, the story presented a very few loosely connected facts about Key's world, and that lack of detail is one reason why these facts weren't interesting: I can easily imagine some world with those properties.

Comment author: Alicorn 01 December 2009 08:03:15PM -1 points [-]

What is the government?

Small communities, mostly physically isolated from each other, but informationally connected and centrally administered. Basically meritocratic in structure - pass enough of the tests and you can work for the gubmint.

How does it decide and enforce allowed research, and allowed self-modification?

Virtually all sophisticated equipment is communally owned and equipped with government-designed protocols. Key goes to the library for eir computer time because ey doesn't have anything more sophisticated than a toaster in eir house. This severely limits how much someone could autonomously self-modify, especially when the information about how to try it is also severely limited. The inconveniences are somewhat trivial, but you know what they say about trivial inconveniences. If someone got far enough to be breaking rules regularly, they'd make people uncomfortable and be asked to leave.

How does sex-choosing work?

One passes some tests, which most people manage between the ages of thirteen and sixteen, and then goes to the doctor and gets some hormones and some surgical intervention to be male or female (or some brand of "both", and some people go on as "neither" indefinitely, but those are rarer).

What is the society like?

Too broad for me to answer - can you be more specific?

Is Key forced at a certain age to be in some regime, like our schools? If not, are there any limits on what Key or her parents do with her life?

Education is usually some combination of self-directed and parent-encouraged. Key's particularly autonomous and eir mother doesn't intervene much. If Key did not want to learn anything, eir mother could try to make em, but the government would not help. If Key's mother did not want em to learn anything and Key did, it would be unlawful for her to try to stop em. There are limits in the sense that Key may not grow up to be a serial killer, but assuming all the necessary tests get passed, ey can do anything legal ey wants.

Thank you for the questions - it's very useful to know what questions people have left after I present a setting! My natural inclination is massive data-dump. This is an experiment in leaving more unsaid, and I appreciate your input on what should have been dolloped back in.

Comment author: DanArmak 01 December 2009 08:39:55PM 2 points [-]

Small communities, mostly physically isolated from each other, but informationally connected and centrally administered. Basically meritocratic in structure - pass enough of the tests and you can work for the gubmint.

Reminds me of old China...

Virtually all sophisticated equipment is communally owned and equipped with government-designed protocols.

That naturally makes me curious about how they got there. How does a government, even though unelected, go about impounding or destroying all privately owned modern technology? What enforcement powers have they got?

Of course there could be any number of uninteresting answers, like 'they've got a singleton' or 'they're ruled by an AI that moved all of humanity into a simulation world it built from scratch'.

And once there, with absolute control over all communications and technology, it's conceivable to run a long-term society with all change (incl. scientific or technological progress) being centrally controlled and vetoed. Still, humans have a strong economic competition drive, and science & technology translate into competitive power. Historically, eliminating private economic enterprise takes enormous effort - the big Communist regimes in the USSR, and I expect in China as well, never got anywhere near success on that front. What do these contented pain-free people actually do with their time?

Comment author: Alicorn 01 December 2009 08:50:46PM *  0 points [-]

How does a government, even though unelected, go about impounding or destroying all privately owned modern technology? What enforcement powers have they got?

It was never there in the first place. The first inhabitants of these communities (which don't include the whole planet; I imagine there are a double handful of them on most continents - the neuros and the genderless kids are more or less universal, though) were volunteers who, prior to joining under the auspices of a rich eccentric individual, were very poor and didn't have their own personal electronics. There was nothing to take, and joining was an improvement because it came with access to the communal resources.

Of course there could be any number of uninteresting answers, like 'they've got a singleton' or 'they're ruled by an AI that moved all of humanity into a simulation world it built from scratch'.

Nope. No AI.

What do these contented pain-free people actually do with their time?

What they like. They go places, look at things, read stuff, listen to music, hang out with their friends. Most of them have jobs. I find it a little puzzling that you have trouble thinking of how one could fill one's time without significant economic competition.

Comment author: DanArmak 01 December 2009 09:39:40PM 2 points [-]

It was never there in the first place.

Oh. So these communities, and Key's life, are extremely atypical of that world's humanity as a whole. That's something worth stating because the story doesn't even hint at it.

I'd be interested in hearing about how they handle telling young people about the wider world. How do they handle people who want to go out and live there and then come back one day? How do they stop the governments of the nations where they actually live from enforcing laws locally? Do these higher-level governments not have any such laws?

I find it a little puzzling that you have trouble thinking of how one could fill one's time without significant economic competition.

Many people can. I just don't find it convincing that everyone could without there being quite a few unsatisfied people around.

Comment author: Zack_M_Davis 01 December 2009 09:49:02PM 2 points [-]

The exchange above reminds me of Robin Hanson's criticism of the social science in Greg Egan's works.

Comment author: Alicorn 01 December 2009 09:54:27PM -1 points [-]

I'd be interested in hearing about how they handle telling young people about the wider world.

The relatively innocuous information about the wider world is there to read about on the earliest guidelists; less pleasant stuff gets added over time.

How do they handle people who want to go out and live there and who come back one day?

You can leave. That's fine. You can't come back without passing more tests. (They are very big on tests.)

How do they stop the governments of the nations where they actually live from enforcing laws locally?

They aren't politically part of other nations. The communities are all collectively one nation in lots of geographical parts.

Many people can. I just don't find it convincing that everyone could without there being quite a few unsatisfied people around.

They can leave. The communities are great for people whose priorities are being content and secure. Risk-takers and malcontents can strike off on their own.

Comment author: DanArmak 01 December 2009 10:11:01PM 0 points [-]

They aren't politically part of other nations. The communities are all collectively one nation in lots of geographical parts.

I wish our own world was nice enough for that kind of lifestyle to exist (e.g., purchasing sovereignty over pieces of settle-able land; or existing towns seceding from their nation)... It's a good dream :-)

Comment author: Kaj_Sotala 20 December 2009 04:23:55PM 0 points [-]

Oh. So these communities, and Key's life, are extremely atypical of that world's humanity as a whole. That's something worth stating because the story doesn't even hint at it.

I disagree: it doesn't matter for the story whether the communities are typical or atypical for humanity as a whole, so mentioning it is unnecessary.

Comment author: rwallace 01 December 2009 07:23:34PM 2 points [-]

You traded off a lot of readability for the device of making the protagonist's gender indeterminate. Was this intended to serve some literary purpose that I'm missing? On the whole the story didn't seem to be about gender.

I also have to second DanArmak's comment that if there was an overall point, I'm missing that also.

Comment author: Alicorn 01 December 2009 07:28:44PM 2 points [-]

Key's gender is not indeterminate. Ey is actually genderless. I'm sorry if I didn't make that clear - there's a bit about it in eir second conversation with Trellis.

Comment author: Liron 03 December 2009 06:11:08PM 2 points [-]

Your gender pronouns just sapped 1% of my daily focusing ability.

Comment author: gwern 11 December 2009 06:54:46PM 0 points [-]

I'm sorry if I didn't make that clear - there's a bit about it in eir second conversation with Trellis.

I thought it was pretty clear. The paragraph about 'boy or girl' made it screamingly obvious to me, even if the Spivak or general gender-indeterminacy of the kids hadn't suggested it.

Comment author: [deleted] 02 December 2009 07:05:42AM 1 point [-]

Gur qrngu qvqa'g srry irel qrngu-yvxr. Vg frrzrq yvxr gur rzbgvba fheebhaqvat vg jnf xvaq bs pbzcerffrq vagb bar yvggyr ahttrg gung V cerggl zhpu fxvzzrq bire. V jnf nyfb rkcrpgvat n jbeyq jvgubhg qrngu, juvpu yrsg zr fhecevfrq. Va gur erny jbeyq, qrngu vf bsgra n fhecevfr, ohg yvxr va gur erny jbeyq, fhqqra qrngu va svpgvba yrnirf hf jvgu n srryvat bs qvforyvrs. Lbh pbhyq unir orra uvagvat ng gung qrngu sebz gur svefg yvar.

Nyfb, lbh xvaq bs qevir evtug cnfg gur cneg nobhg birecbchyngvba. V guvax birecbchyngvba vf zl zbgure'f znva bowrpgvba gb pelbavpf.

Comment author: gwern 11 December 2009 06:52:51PM 0 points [-]

Alicorn goes right past it probably because she's read a fair bit of cryonics literature herself and has seen the many suggestions (hence the librarian's invitation to think of 'a dozen solutions'), and it's not the major issue anyway.

Comment author: ChrisPine 02 December 2009 12:20:08PM 1 point [-]

I liked it. :)

Part of the problem that I had, though, was the believability of the kids: kids don't really talk like that: "which was kind of not helpful in the not confusing me department, so anyway"... or, in an emotionally painful situation:

Key looked suspiciously at the librarian. "You sound like you're trying not to say something."

Improbably astute, followed by not seeming to get the kind of obvious moral of the story. At times it felt like it was trying to be a story for older kids, and at other times like it was for adults.

The gender issue didn't seem to add anything to the story, but it only bothered me at the beginning of the story. Then I got used to it. (But if it doesn't add to the story, and takes getting used to... perhaps it shouldn't be there.)

Anyway, I enjoyed it, and thought it was a solid draft.

Comment author: Alicorn 02 December 2009 02:47:23PM 0 points [-]

You've hit on one of my writing weaknesses: I have a ton of trouble writing people who are just plain not very bright or not very mature. I have a number of characters through whom I work on this weakness in (unpublished portions of) Elcenia, but I decided to let Key be as smart as I'm inclined to write normally for someone of eir age - my top priority here was finishing the darn thing, since this is only the third short story I can actually claim to have completed and I consider that a bigger problem.

Comment author: Blueberry 04 December 2009 04:36:31AM 1 point [-]

I actually have to disagree with this. I didn't think Key was "improbably astute". Key is pretty clearly an unusual child (at least, that's how I read em). Also, the librarian was pretty clearly being elliptical and a little patronizing, and in my experience kids are pretty sensitive to being patronized. So it didn't strike me as unbelievable that Key would call the librarian out like that.

Comment author: Jack 02 December 2009 12:50:35PM 0 points [-]

Cool. I also couldn't help reading Key as female. My hypothesis would be that people generally have a hard time writing characters of the opposite sex. Your gender may have leaked in. The Spivak pronouns were initially very distracting but were okay after a couple paragraphs. If you decide to change it, Le Guin pretty successfully wrote a whole planet of androgynes using masculine pronouns. But that might not work in a short story without exposition.

Comment author: Alicorn 02 December 2009 02:43:57PM 0 points [-]

I do typically have an easier time writing female characters than male ones. I probably wouldn't have tried to write a story with genderless (human) adults, but in children I figured I could probably manage it. (I've done some genderless nonhuman adults before and I think I pulled them off.)

Comment author: CronoDAS 02 December 2009 07:25:19PM 1 point [-]

I think Key's apparent femininity might come from a lack of arrogance. Compare Key to, say, Calvin from "Calvin and Hobbes". Key is extremely polite, willing to admit to ignorance, and seems to project a bit of submissiveness. Also, Key doesn't demonstrate very much anger over Trellis's death.

I probably wouldn't have given the subject a second thought, though, if it wasn't brought up for discussion here.

Comment author: Alicorn 02 December 2009 09:47:48PM 0 points [-]

Everyone's talking about Key - did anyone get an impression from Trellis?

Comment author: CronoDAS 03 December 2009 03:57:44AM 2 points [-]

If I had to put a gender on Trellis, I'd say that Trellis was more masculine than feminine. (More like Calvin than like Suzie.) Overall, though, it's fairly gender-neutral writing.

Comment author: gwern 11 December 2009 06:56:23PM 0 points [-]

I too got the 'dull sidekick' vibe, and since dull sidekicks are almost always male these days...

Comment author: DanArmak 02 December 2009 10:37:51PM 1 point [-]

Le Guin pretty successfully wrote a whole planet of androgynes using masculine pronouns.

In Left Hand of Darkness, the narrator is an offplanet visitor and the only real male in the setting. He starts his tale by explicitly admitting he can't understand or accept the locals' sexual selves (they become male or female for short periods of time, a bit like estrus). He has to psychologically assign them sexes, but he can't handle a female-only society, so he treats them all as males. There are plot points where he fails to respond appropriately to the explicit feminine side of locals.

This is all very interesting and I liked the novel, but it's the opposite of passing androgynes as normal in writing a tale. Pronouns are the least of your troubles :-)

Comment author: Jack 02 December 2009 10:59:19PM 0 points [-]

Very good points. It has been a while since I read it.

Comment author: NancyLebovitz 04 December 2009 02:47:44PM 0 points [-]

Later, Le Guin said that she was no longer satisfied with the male pronouns for the Gethenians.

Comment author: Eliezer_Yudkowsky 02 December 2009 12:54:15PM *  0 points [-]

Replied in PM, in case you didn't notice (click your envelope).

PS: My mind didn't assign a sex to Key. Worked with me, anyway.

Comment author: RichardKennaway 02 December 2009 05:12:55PM 0 points [-]

At 3800 words, it's too long for the back page of Nature, but a shorter version might do very well there.

Comment author: Zack_M_Davis 02 December 2009 07:41:46PM 1 point [-]

I love the new gloss on "What do you want to be when you grow up?"

or switch to a different genderless pronoun system.

Don't. Spivak is easy to remember because it's just they/them/their with the ths lopped off. Nonstandard pronouns are difficult enough already without trying to get people to remember sie and hir.

Comment author: anonym 02 December 2009 10:12:11PM 0 points [-]

Totally agreed. Spivak pronouns are the only ones I've seen that took almost no effort to get used to, for exactly the reason you mention.

Comment author: Emily 03 December 2009 04:42:45AM 0 points [-]

I enjoyed it. I made an effort to read Key genderlessly. This didn't work at first, probably because I found the Spivak pronouns quite hard to get used to, and "ey" came out as quite male to me, then fairly suddenly flipped to female somewhere around the point where ey was playing on the swing with Trellis. I think this may have been because Trellis came out a little more strongly male to me by comparison (although I was also making a conscious effort to read em genderlessly). But as the story wore on, I improved at getting rid of the gender and by the end I no longer felt sure of either Key or Trellis.

Point of criticism: I didn't find the shift between what was (to me) rather obviously the two halves of the story very smooth. The narrative form appeared to take a big step backwards from Key after the words "haze of flour" and never quite get back into eir shoes. Perhaps that was intentional, because there's obviously a huge mood shift, but it left me somewhat dissatisfied about the resolution of the story. I felt as though I still didn't know what had actually happened to the original Key character.

Comment author: LucasSloan 03 December 2009 04:54:32AM 1 point [-]

For me, both of the characters appeared female.

Sbe zr gur fgbel fbeg bs oebxr qbja whfg nf Xrl'f sevraq jnf xvyyrq. Vg frrzrq gbb fbba vagb gur aneengvir gb znxr fhpu n znwbe punatr. Nyfb, jvgu erfcrpg gb gur zbeny, vg frrzrq vafhssvpvragyl fubja gung lbh ernyyl ertneq cnva nf haqrfvenoyr - vg frrzrq nf gubhtu lbh pbhyq or fnlvat fbzrguvat nybat gur yvarf bs "gurl whfg qba'g haqrefgnaq." Orpnhfr bs gung nf jryy nf gur ehfurq srry bs gur raqvat, vg fbeg bs pnzr bss nf yrff rzbgvbanyyl rssrpgvir guna vg pbhyq.

Comment author: Blueberry 04 December 2009 04:34:10AM 1 point [-]

Looks like I'm in the minority for reading Key as slightly male. I didn't get a gender for Trellis. I also read the librarian as female, which I'm kind of sad about.

I loved the story, found it very touching, and would like to know more about the world it's in. One thing that confused me: the librarian's comments to Key suggested that some actual information was withheld from even the highest levels available to "civilians". So has someone discovered immortality, but some ruling council is keeping it hidden? Or is it just that they're blocking research into it, but not hiding any actual information? Are they hiding the very idea of it? And what's the librarian really up to?

Were you inspired by Nick Bostrom's "Fable of the Dragon"? It also reminded me a little of Lois Lowry's "The Giver".

Thanks so much for sharing it with us!

Comment author: Alicorn 04 December 2009 04:44:31AM 0 points [-]

I also read the librarian as female, which I'm kind of sad about.

Lace is female - why are you sad about reading her that way?

I loved the story, found it very touching, and would like to know more about the world it's in.

Yaaaay! I'll answer any setting questions you care to pose :)

So has someone discovered immortality, but some ruling council is keeping it hidden? Or is it just that they're blocking research into it, but not hiding any actual information? Are they hiding the very idea of it? And what's the librarian really up to?

Nobody has discovered it yet. The communities in which Key's ilk live suppress the notion of even looking for it; in the rest of the world they're working on it in a few places but aren't making much progress. The librarian isn't up to a whole lot; if she were very dedicated to finding out how to be immortal she'd have ditched the community years ago - she just has a few ideas that aren't like what the community leaders would like her to have and took enough of a shine to Key that she wanted to share them with em. I have read both "Fable of the Dragon" and "The Giver" - the former I loved, the latter I loved until I re-read it with a more mature understanding of worldbuilding, but I didn't think of either consciously when writing.

You are most welcome for the sharing of the story. Have a look at my other stuff, if you are so inclined :)

Comment author: NancyLebovitz 04 December 2009 03:05:50PM 0 points [-]

I enjoyed the story-- it was an interesting world. By the end of the story, you were preaching to a choir I'm in.

None of the characters seemed strongly gendered to me.

I was expecting opposition to anesthesia to include religiously based opposition to anesthesia for childbirth, and for the whole idea of religion to come as a shock. On the other hand, this might be cliched thinking on my part. Do they have religion?

The neuro couldn't be limited to considered reactions-- what about the very useful fast reflexive reaction to pain?

Your other two story links didn't open.

Comment author: Alicorn 04 December 2009 03:09:29PM *  0 points [-]

Religion hasn't died out in this setting, although it's uncommon in Key's society specifically. Religion was a factor in historical opposition to anesthesia (I'm not sure of the role it plays in modern leeriness about painkillers during childbirth) but bringing it up in more detail would have added a dimension to the story I didn't think it needed.

Reflexes are intact. The neuro just translates the quale into a bare awareness that damage has occurred. (I don't know about everyone, but if I accidentally poke a hot burner on the stove, my hand is a foot away before I consciously register any pain. The neuro doesn't interfere with that.)

I will check the links and see about fixing them; if necessary, I'll HTMLify those stories too. ETA: Fixed; they should be downloadable now.

Comment author: arundelo 04 December 2009 10:13:30PM 0 points [-]

Great story!

I kept thinking of Key as female. This may be because I saw some comments here that saw em as female, or because I know that you're female.

The other character I didn't assign a sex to.

Comment author: Larks 04 December 2009 11:18:56PM 0 points [-]

I got used to the Spivak after a while, and while it'd be optimal for an audience used to it, it did detract a little at first. On the whole I'd say it's necessary though (if you were going to use gendered pronouns, I'd use female ones).

I read Key as mainly female, and Trellis as more male - it would be interesting to know how readers' perceptions correlated with their own gender.

The children seemed a little mature, but I thought they'd had a lot better education, or genetic enhancement or something. I think spending a few more sentences on the important events would be good though - otherwise one can simply miss them.

I think you were right to just hint at the backstory- guessing is always fun, and my impression of the world was very similar to that which you gave in one of the comments.

Comment author: Kaj_Sotala 20 December 2009 04:29:58PM 0 points [-]

Finally got around to reading the story. I liked it, and finishing it gave me a wild version of that "whoa" reaction you get when you've been doing something emotionally immersive and then switch to some entirely different activity.

I read Key as mostly genderless, possibly a bit female because the name sounded feminine to me. Trellis, maybe slightly male, though that may also have been from me afterwards reading the comments about Trellis feeling slightly male and those contaminating the memory.

I do have to admit that the genderless pronouns were a bit distracting. I think it was the very fact that they were shortened versions of "real" pronouns that felt so distracting - my mind kept assuming that it had misread them and tried to reread. In contrast, I never had an issue with Egan's use of ve / ver / vis / vis / verself.

Comment author: SilasBarta 01 December 2009 07:45:46PM *  1 point [-]

By coincidence, two blog posts went up today that should be of interest to people here.

Gene Callahan argues that Bayesianism lacks the ability to smoothly update beliefs as new evidence arrives, forcing the Bayesian to irrationally reset priors.

Tyler Cowen offering a reason why the CRU hacked emails should raise our confidence in AGW. An excellent exercise in framing an issue in Bayesian terms. Also discusses metaethical issues related to bending rules.

(Needless to say, I don't agree with either of these arguments, but they're great for application of your own rationality.)

Comment author: DanArmak 01 December 2009 07:53:54PM 2 points [-]

The second link doesn't load; should be this.

Comment author: SilasBarta 01 December 2009 08:13:25PM 0 points [-]

Thanks! Fixed.

Comment author: Matt_Simpson 02 December 2009 05:24:21AM 2 points [-]

Tyler Cowen offering a reason why the CRU hacked emails should raise our confidence in AGW...

That's not what he is saying. His argument is not that the hacked emails actually should raise our confidence in AGW. His argument is that there is a possible scenario under which this should happen, and the probability that this scenario is true is not infinitesimal. The alternative possibility - that the scientists really are smearing the opposition with no good reason - is far more likely, and thus the net effect on our posteriors is to reduce them - or at least keep them the same if you agree with Robin Hanson.

Here's (part of) what Tyler actually said:

Another response, not entirely out of the ballpark, is: 2. "These people behaved dishonorably. They must have thought this issue was really important, worth risking their scientific reputations for. I will revise upward my estimate of the seriousness of the problem."

I am not saying that #2 is correct, I am only saying that #2 deserves more than p = 0. Yet I have not seen anyone raise the possibility of #2.

Comment author: SilasBarta 02 December 2009 05:39:09PM *  -1 points [-]

Tyler Cowen offering a reason why the CRU hacked emails should raise our confidence in AGW...

That's not what he is saying. His argument is not that the hacked emails actually should raise our confidence in AGW. His argument is that there is a possible scenario under which this should happen, and the probability that this scenario is true is not infinitesimal

Right -- that is what I called "giving a reason why the hacked emails..." and I believe that characterization is accurate: he's described a reason why they would raise our confidence in AGW.

The alternative possibility - that the scientists really are smearing the opposition with no good reason - is far more likely, and thus the net effect on our posteriors is to reduce them

This is a reason why Tyler's argument for a positive Bayes factor is in error, not a reason why my characterization was inaccurate.

I think we agree on the substance.

Comment author: Matt_Simpson 02 December 2009 10:12:49PM *  1 point [-]

The alternative possibility - that the scientists really are smearing the opposition with no good reason - is far more likely, and thus the net effect on our posteriors is to reduce them

This is a reason why Tyler's argument for a positive Bayes factor is in error, not a reason why my characterization was inaccurate.

Tyler isn't arguing for a positive Bayes factor. (I assume that by "Bayes factor" you mean the net effect on the posterior probability.) He posted a followup because many people misunderstood him. Excerpt:

I did not try to justify any absolute level of belief in AGW, or government spending for that matter. I'll repeat my main point about our broader Bayesian calculations:

I am only saying that #2 [scientists behaving badly because they think the future of the world is at stake] deserves more than p = 0.

edited to add:

I'm not sure I understand your criticism, so here's how I understood his argument. There are two major possibilities worth considering:

1) "These people behaved dishonorably.

and

2) "These people behaved dishonorably. They must have thought this issue was really important, worth risking their scientific reputations for.

Then the argument goes that the net effect of 1 is to lower our posteriors for AGW while the net effect of 2 is to raise them.

Finally, p(2 is true) != 0.

This doesn't tell us the net effect of the event on our posteriors - for that we need p(1), p(2) and p(anything else). Presumably, Tyler thinks p(anything else) ~ 0, but that's a side issue.

Is this how you read him? If so, which part do you disagree with?

Comment author: SilasBarta 02 December 2009 11:04:11PM *  1 point [-]

(I assume that by "Bayes factor" you mean the net effect on the posterior probability.)

I'm using the standard meaning: for a hypothesis H and evidence E, the Bayes factor is p(E|H)/p(E|~H). It's easiest to think of it as the factor you multiply your prior odds by to get posterior odds. (Odds, not probabilities.) Which means I goofed and said "positive" when I meant "above unity" :-/
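
To make the odds form concrete, here is a toy update (my numbers, purely illustrative):

    # Updating in odds form: posterior odds = Bayes factor * prior odds.
    p_H = 0.5                       # prior p(H), e.g. p(AGW)
    prior_odds = p_H / (1 - p_H)                          # 1.0
    bayes_factor = 1.5              # p(E|H) / p(E|~H); >1 means E favors H
    posterior_odds = bayes_factor * prior_odds            # 1.5
    p_H_given_E = posterior_odds / (1 + posterior_odds)   # 0.6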

I read Tyler as not knowing what he's talking about. For one thing, do you notice how he's trying to justify why something should have p>0 under a Bayesian analysis ... when Bayesian inference already requires p's to be greater than zero?

In his original post, he was explaining a scenario under which seeing fraud should make you raise your p(AGW). Though he's not thinking clearly enough to say it, this is equivalent to describing a scenario under which the Bayes factor is greater than unity. (I admit I probably shouldn't have said "argument for >1 Bayes factor", but rather, "suggestion of plausibility of >1 Bayes factor")

That's the charitable interpretation of what he said. If he didn't mean that, as you seem to think, then he's presenting metrics that aren't helpful, and this is clear when he thinks it's some profound insight to put p(fraud due to importance of issue) greater than zero. Yes, there are cases where AGW is true despite this evidence -- but what's the impact on the Bayes factor?

Why should we care about arbitrarily small probabilities?

Tyler was not misunderstood: he used probability and Bayesian inference incorrectly and vacuously, then tried to backpedal. (My comment on page 2.)

Anyway, I think we agree on the substance:

  • The fact that the p Tyler referred to is greater than zero is insufficient information to know how to update.
  • The scenario Tyler described is insufficient to give Climategate a Bayes factor above 1.

(I was going to the drop the issue, but you seem serious about de-Aumanning this, so I gave a full reply.)

Comment author: Matt_Simpson 03 December 2009 05:40:18AM *  1 point [-]

I think we are arguing past each other, but it's about interpreting someone else so I'm not that worried about it. I'll add one more bullet to your list to clarify what I think Tyler is saying. If that doesn't resolve it, oh well.

  • If we know with certainty that the scenario that Tyler described is true, that is, if we know that the scientists fudged things because they knew that AGW was real and that the consequences were worth risking their reputations on, then Climategate has a Bayes factor above 1.

I don't think Tyler was saying anything more than that. (Well, and that P(his scenario) is non-negligible.)

Comment author: Jordan 01 December 2009 08:49:36PM *  0 points [-]

Does anyone know how many neurons various species of birds have? I'd like to put it into perspective with the Whole Brain Emulation road map, but my googlefu has failed me.

Comment author: bogdanb 02 December 2009 03:35:29PM 2 points [-]

I've looked for an hour and it seems really hard to find. From what I've seen, (a) birds have a different brain structure than mammals (“general intelligence” originates in other parts of the brain), and (b) their neuron count changes hugely (relative to mammals) during their lifetimes. I've seen lots of articles giving numbers for various species and various brain components, but nothing in aggregate. If you really want a good estimate you'll have to read up on the brain structure of birds and use that together with neuron counts for the different parts to arrive at a total estimate. Google Scholar might help in that endeavor.

Comment author: anonym 05 December 2009 08:31:55AM *  0 points [-]

I also looked for a while and had little luck. I did find, though, that the brain-to-body-mass ratios for two of the smartest known species of birds -- the Western Scrub Jay and the New Caledonian Crow -- are comparable to those of chimps. These two species have shown very sophisticated cognition.

Comment author: Jordan 06 December 2009 07:01:42AM 0 points [-]

Blast.

I'll have to keep the question in mind for the next time I run into a neuroscientist.

Comment author: bgrah449 01 December 2009 11:06:44PM 2 points [-]

I like the color red. When people around me wear red, it makes me happy - when they wear any other color, it makes me sad. I crunch some numbers and tell myself, "People wear red about 15% of the time, but they wear blue 40% of the time." I campaign for increasing the amount that people wear red, but my campaign fails miserably.

"It'd be great if I could like blue instead of red," I tell myself. So I start trying to get myself to like blue - I choose blue over red whenever possible, surround myself in blue, start trying to put blue in places where I experience other happinesses so I associate blue with those things, etc.

What just happened? Did a belief or a preference change?

Comment author: Alicorn 01 December 2009 11:25:45PM 8 points [-]

You acquired a second-order desire, which is a preference about preferences.

Comment author: byrnema 02 December 2009 06:59:37AM *  0 points [-]

A poem, not a post:

Intelligence is not computation.

As you know.

Yet the converse bears … contemplation, reputation. Only then refutation.

We are irritated by our fellows that observe that A mostly implies B, and B mostly implies C, but they will not, will not concede that A implies C, to any extent.

We consider this; an error in logic, an error in logic.

Even though! we know: intelligence is not computation.

Intelligence is finding the solution in the space of the impossible. I don’t mean luck At all. I mean: while mathematical proofs are formal, absolute, without question, convincing, final,

We have no Method, no method for their generation. As well we know:

No computation can possibly be found to generate, not possibly. Not systematically, not even with ingenuity. Yet, how and why do we know this -- this Impossibility?

Intelligence is leaping, guessing, placing the foot unexpectedly yet correctly. Which you find verified always afterwards, not before.

Of course that’s why humans don’t calculate correctly.

But we knew that.

You and I, being too logical about it, pretending that computation is intelligence.

But we know that; already, everything. That pretending is the part of intelligence not found in the Computating. Yet, so? We’ll pretend that intelligence is computing and we’ll see where the computation fails! Telling us what we already knew but a little better.

Than before, we’ll see afterwards. How ingenuous, us.

The computation will tell us, finally so, we'll pretend.

Comment author: rhollerith_dot_com 02 December 2009 07:19:13AM 0 points [-]

I do not like most poems, but I liked this one.

Comment author: byrnema 02 December 2009 05:58:54PM *  2 points [-]

I wrote this poem yesterday in an unusual mood. I don't entirely agree with it today. Or at least, I would qualify it.

What is meant by computation? When I wrote that intelligence is not computation, I must have meant a certain sort of computation because of course all thought is some kind of computation.

To what extent has a distinction been made between systematic/linear/deductive thought (which I am criticizing as obviously limited in the poem) and intelligent pattern-based thought? Has there been any progress in characterizing the latter?

For example, consider the canonical story about Gauss. To keep him busy with a computation, his math teacher told him to add all the numbers from 1 to 100. Instead, according to the story, Gauss added the first number and the last number, multiplied by 100 and divided by 2. Obviously, this is a computation. But yet a different sort. To what extent do you suppose he logically deduced the pattern of the lowest number and highest number always combining to a single value, or just guessed/observed it was a pattern that might work? And then found that it did work inductively?
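
For concreteness, the trick is the pairing identity 1 + 2 + ... + n = n(n+1)/2:

    # Gauss's pairing trick: the 50 pairs (1,100), (2,99), ... each sum
    # to 101, so the total is (1 + 100) * 100 / 2.
    n = 100
    assert sum(range(1, n + 1)) == (1 + n) * n // 2 == 5050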

I'm very interested in characterizing the difference between these kinds of computation. Intelligent thinking seems to really be guesses followed by verification, not steady linear deduction.

Comment author: JamesAndrix 03 December 2009 03:12:12AM *  1 point [-]

What is meant by computation? When I wrote that intelligence is not computation, I must have meant a certain sort of computation because of course all thought is some kind of computation.

Gah, Thank You. Saves me the trouble of a long reply. I'll upvote for a change-of-mind disclaimer in the original.

Intelligent thinking seems to really be guesses followed by verification, not steady linear deduction.

My recent thoughts have been along these lines, but this is also what evolution does. At some point, the general things learned by guessing have to be incorporated into the guess-generating process.

Comment author: gwern 12 December 2009 03:05:46AM *  2 points [-]

While reading a collection of Tom Wayman's poetry, suddenly a poem came to me about Hal Finney ("Dying Outside"); since we're contributing poems, I don't feel quite so self-conscious. Here goes:

He will die outside, he says.
Flawed flesh betrayed him,
it has divorced him -
for the brain was left him,
but not the silverware
nor the limbs nor the car.
So he will take up
a brazen hussy,
tomorrow's eve,
a breather-for-him,
a pisser-for-him.
He will be letters,
delivered slowly;
deliberation
his future watch-word.
He would not leave until he left this world.
I try not to see his mobile flesh,
how it will sag into eternal rest,
but what he will see:
symbol and symbol, in their endless braids,
and with them, spread over strange seas of thought
mind (not body), forever voyaging.

http://www.gwern.net/fiction/Dying%20Outside

Comment author: Yorick_Newsome 02 December 2009 11:06:32AM *  2 points [-]

Big Edit: Jack formulated my ideas better, so see his comment.
This was the original: The fact that the universe hasn't been noticeably paperclipped has got to be evidence for a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) that superintelligences tend not to optimize things that are astronomically visible (related to c). Which of these scenarios is most likely? Related question: If we built a superintelligence without worrying about friendliness or morality at all, what kind of things would it optimize? Can we even make a guess? Would it be satisfied to be a dormant Laplace's Demon?

Comment author: Jack 02 December 2009 11:37:30AM *  4 points [-]

Restructuring, since the fact that the universe hasn't been noticeably paperclipped can't possibly be considered evidence for (c).

The universe has either been paperclipped (1) or it hasn't been (2).

If (1):

(A) we have observed paperclipping and not realized it (someone was really into stars, galaxies and dark matter)

(B) Our universe is the result of paperclipping (theists were right, sort of)

(C) Superintelligences tend not to optimize things that are astronomically visible.

If (2)

(D) Super-intelligences are impossible.

(E) Quantum immortality true.

(F) No intelligent aliens.

(G) Some variety of simulation hypothesis is true.

(H) Galactic aliens exist but have never constructed a super-intelligence due to a well-enforced prohibition on AI construction/research, an evolved deficiency in thinking about minds as physical objects (substance dualism is far more difficult for them to avoid than it is for us), or some other reason that we can't fathom.

(I) Friendliness is easy + Alien ethics doesn't include any values that lead to us noticing them.

Comment author: Jack 02 December 2009 12:32:58PM 0 points [-]

Some of that was probably needed to contextualize my comment.

Comment author: Yorick_Newsome 02 December 2009 12:57:34PM 0 points [-]

I'll replace it without the spacing so it's more compact. Sorry about that, I'll work on my comment etiquette.

Comment author: wuwei 03 December 2009 04:14:43AM 2 points [-]

d) should be changed to the sparseness of intelligent aliens and limits to how fast even a superintelligence can extend its sphere of influence.

Comment author: bgrah449 02 December 2009 04:33:57PM 1 point [-]

I've read Newcomb's problem (Omega, two boxes, etc.), but I was wondering if, shortly, "Newcomb's problem is when someone reliably wins as a result of acting on wrong beliefs." Is Peter walking on water a special case of Newcomb? Is the story from Count of Monte Cristo, about Napoleon attempting suicide with too much poison and therefore surviving, a special case of Newcomb?

Comment author: bgrah449 02 December 2009 05:44:44PM *  6 points [-]

I am completely baffled why this would be downvoted. I guess asking a question in genuine pursuit of knowledge, in an open thread, is wasting someone's time, or is offensive.

I like to think someone didn't have the time to write "No, that's not the case," and wished, before dashing off, leaving nothing but a silhouette of dust, that their cryptic, curt signal would be received as intended; that as they hurry down the underground tunnel past red, rotating spotlights, they hoped against hope that their seed of truth landed in fertile ground -- godspeed, downvote.

Comment author: bbnvnm 02 December 2009 06:35:29PM *  0 points [-]

I was wondering, shortly, "Is brgrah449 from Sicily?"

Comment author: Morendil 02 December 2009 06:50:10PM 0 points [-]

The community would benefit from a convention of "no downvotes in Open Thread".

However, I did find your question cryptic; you're dragging into a decision theory problem historical and religious referents that seem to have little to do with it. You need to say more if you really want an answer to the question.

Comment author: bgrah449 02 December 2009 07:31:15PM 1 point [-]

Sure, that's fair.

Peter walked on water out to Jesus because he thought he could; when he looked down and saw the sea, he fell in. As long as he believed Jesus instead of his experience with the sea, he could walk on water.

I don't think the Napoleon story is true, but that's beside the point. He thought he was so tough that an ordinary dose of poison wouldn't kill him, so he took six times the normal dosage. This much gave his system such a shock that the poison was rejected and he lived, thinking to himself, "Damn, I underestimated how incredibly fantastic I am." As long as he (wrongly) believed in his own exceptionalism instead of his experience with poison on other men, he was immune to the poison.

My train of thought was, you have a predictor and a chooser, but that's just getting you to a point where you choose either "trust the proposed worldview" or "trust my experience to date" - do you go for the option that your prior experience tells you shouldn't work (and hope your prior experience was wrong) or do you go with your prior experience (and hope the proposed worldview is wrong)?

I understand that in Newcomb's, what Omega says is true. But change it up to "is true way more than 99% of the time but less than 100% of the time" and start working your way down that until you get to "is false way more than 99% of the time but less than 100% of the time" and at some point, not that long after you start, you get into situations very close to reality (I think, if I'm understanding it right).
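
To make that concrete with the standard payoffs (a minimal expected-value sketch of my own, assuming the predictor's accuracy p is the same whichever way you choose), one-boxing wins in expectation once p exceeds about 0.5005:

    M, T = 1000000, 1000  # big-box and small-box payoffs (standard Newcomb)

    def one_box(p):       # predictor right with probability p
        return p * M

    def two_box(p):
        return T + (1 - p) * M

    # Indifference point: p*M == T + (1-p)*M  =>  p == (M + T) / (2*M) == 0.5005
    for p in (0.49, 0.5005, 0.99):
        print(p, one_box(p) > two_box(p))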

This basically started from trying to think about who, or what, in real life takes on the Predictor role, who takes on the Belief-holder role, who takes on the Chooser role, and who receives the money, and seeing if anything familiar starts falling out if I spread those roles out to more than 2 people or shrink them down to a single person whose instincts implore them to do something against the conclusion to which their logical thought process is leading them.

Comment author: Morendil 02 December 2009 07:47:15PM -2 points [-]

You seem to be generalizing from fictional evidence, which is frowned upon here, and may explain the downvote (assuming people inferred the longer version from your initial question).

Comment author: bgrah449 02 December 2009 08:24:07PM 2 points [-]

That post (which was interesting and informative - thanks for the link) was about using stories as evidence for use in predicting the actual future, whereas my question is about whether these fictional stories are examples of a general conceptual framework. If I asked if Prisoner's Dilemma was a special case of Newcomb's, I don't think you'd say, "We don't like generalizing from fictional evidence."

Which leads, ironically, to the conclusion that my error was generalizing from evidence which wasn't sufficiently fictional.

Comment author: Morendil 02 December 2009 09:04:54PM 1 point [-]

Perhaps I jumped to conclusions. Downvotes aren't accompanied with explanations, and groping for one that might fit I happened to remember the linked post. More PC than supposing you were dinged just for a religious allusion. (The Peter reference at least required no further effort on my part to classify as fictional; I had to fact-check the Napoleon story, which was an annoyance.)

It still seems the stories you're evoking bear no close relation to Newcomb's as I understand it.

Comment author: gwern 03 December 2009 01:07:10AM 0 points [-]

fictional evidence

I have heard of real drugs & poisons which induce vomiting at high doses and so make it hard to kill oneself; but unfortunately I can't seem to remember any cases. (Except for one attempt to commit suicide using modafinil, which gave the woman so severe a headache she couldn't swallow any more; and apparently LSD has such a high LD-50 that you can't even hurt yourself before getting high.)

Comment author: gwern 06 December 2009 02:22:08AM 1 point [-]

I upvoted you because your second sentence painted a story that deeply amused me.

/pats seed of truth, and pours a little water on it

Comment author: Tyrrell_McAllister 02 December 2009 06:33:08PM *  3 points [-]

No, that's not the case. A one-boxer in Newcomb's problem is acting with entirely correct beliefs. All agree that the one-boxer will get more money than the two-boxer. That correct belief is what motivates the one-boxer.

The scenarios that you describe sound somewhat (but not exactly) like Gettier problems to me.

(I wasn't the downvoter.)

Comment author: PeerInfinity 02 December 2009 06:06:33PM 7 points [-]

(reposted from last month's open thread)

An interesting site I recently stumbled upon:

http://changingminds.org/

They have huge lists of biases, techniques, explanations, and other stuff, with short summaries and longer articles.

Here's the results from typing in "bias" into their search bar.

A quick search for "changingminds" in LW's search bar shows that no one has mentioned this site before on LW.

Is this site of any use to anyone here?

Comment author: Tom_Talbot 02 December 2009 08:54:35PM 2 points [-]

The conversion techniques page is fascinating. I'll put this to good use in further spreading the word of Bayes.

Comment author: Cyan 03 December 2009 12:52:25AM 4 points [-]

Henceforth, I am Dr. Cyan.

Comment author: gwern 03 December 2009 01:00:47AM 0 points [-]

Ah ha - So you were the last Cyan!

Comment author: Jack 03 December 2009 12:14:53PM 0 points [-]

I briefly thought this was a Battlestar Galactica pun.

Comment author: gwern 03 December 2009 01:58:15PM 0 points [-]

It was!

/me wonders what you then interpreted it as

Comment author: Jack 03 December 2009 03:26:46PM 0 points [-]

I was going back and forth between Zion and Cylon, lol.

Comment author: Eliezer_Yudkowsky 03 December 2009 03:46:42PM 2 points [-]

Congratulations! I guess people will believe everything you say now.

Comment author: Cyan 03 December 2009 04:10:20PM 1 point [-]

I certainly hope so!

Comment author: CannibalSmith 04 December 2009 02:26:04PM 0 points [-]

Wear a lab coat for extra credibility.

Comment author: Cyan 04 December 2009 03:16:35PM 3 points [-]

I was thinking I'd wear a stethoscope and announce, "Trust me! I'm a doctor! (sotto voce)... of philosophy."

Comment author: Aurini 05 December 2009 11:54:47PM 0 points [-]

Congrats! My friend recently got his Master's in History, and has been informing every telemarketer who calls that "Listen cupcake, it's not Dave - I'm not going to hang at your crib and drink forties; listen here, pal, I have my own office! Can you say that? To you I'm Masters Smith."

I certainly hope you wear your new title with a similar air of pretension, Doctor Cyan. :)

Comment author: Cyan 06 December 2009 12:12:39AM *  1 point [-]

I'll do my best!

Sincerely,
Cyan, Ph.D.

Comment author: gwern 11 December 2009 07:09:28PM 0 points [-]

Is 'Masters' actually a proper prefix (akin to the postfix Ph.D) for people with a Master's degree? I don't think I've ever seen that before.

Comment author: SilasBarta 03 December 2009 08:32:42PM 0 points [-]

With a doctorate in ...?

Comment author: Cyan 03 December 2009 08:38:20PM 2 points [-]

Biomedical engineering. My thesis concerned the analysis of proteomics data by Bayesian methods.

Comment author: SilasBarta 03 December 2009 11:03:16PM 0 points [-]

Isn't that what they normally use to analyze proteomics data? </naive>

Comment author: Cyan 04 December 2009 12:16:41AM 1 point [-]

Not always, or even usually. It seems to me that by and large, scientists invent ad hoc methods for their particular problems, and that applies in proteomics as well as other fields.

Comment author: Daniel_Burfoot 07 December 2009 04:57:33AM 0 points [-]

Congratulations!

Why not post an introduction to your thesis research on LW?

Comment author: Cyan 07 December 2009 02:44:31PM 1 point [-]

Because I'd need to preface it with a small deluge of information about protein chemistry, liquid chromatography, and mass spectrometry. I think I'd irritate folks if I did that.

Comment author: JamesAndrix 03 December 2009 05:36:38AM 0 points [-]

Would it be worthwhile for us to create societal simulation software to look into how preferences can change given technological change and social interactions? (knew more, grew up together) One goal would be to clarify terms like spread, muddle, distance, and convergence. Another (funner) goal would be to watch imaginary alternate histories and futures (given guesses about potential technologies).

Goals would not include building any detailed model of human preferences or intelligence.

I think we would find some general patterns that might also apply to more complex simulations.
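
For what it's worth, the core loop of such a simulation can be tiny. A minimal sketch in Python, with agents holding a one-dimensional preference, a made-up drift parameter standing in for convergence and a made-up noise parameter standing in for muddle (all names and numbers here are placeholders, not a serious model):

    import random

    def step(prefs, drift=0.1, noise=0.02):
        """One interaction: a random pair move toward each other
        (convergence), then every agent mutates a little (muddle)."""
        a, b = random.sample(range(len(prefs)), 2)
        mid = (prefs[a] + prefs[b]) / 2
        prefs[a] += drift * (mid - prefs[a])
        prefs[b] += drift * (mid - prefs[b])
        for i in range(len(prefs)):
            prefs[i] += random.gauss(0, noise)

    def spread(prefs):
        """One candidate formalization of 'spread': the distance
        between the most extreme preferences."""
        return max(prefs) - min(prefs)

    prefs = [random.uniform(-1, 1) for _ in range(100)]
    for t in range(10000):
        step(prefs)
    print(spread(prefs))

Watching how spread behaves as you vary drift against noise is one way to make "convergence" and "muddle" precise before building anything more complex.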

Comment author: Nick_Tarleton 03 December 2009 06:03:05PM 0 points [-]
Comment author: Wei_Dai 03 December 2009 10:42:20AM 4 points [-]

This blog comment describes what seems to me the obvious default scenario for an unFriendly AI takeoff. I'd be interested to see more discussion of it.

Comment author: whpearson 03 December 2009 11:36:17AM *  2 points [-]

The problem with the specific scenario given, with experimental modification/duplication rather than careful proof-based modification, is that it is liable to have the same problem that we have when creating systems this way. The copies might not do what the agent that created them wants.

Which could lead to a splintering of the AI, and in-fighting over computational resources.

It also makes the standard assumption that AI will be implemented on, and stable on, the von Neumann-style computing architecture.

Comment author: Nick_Tarleton 03 December 2009 05:51:58PM 0 points [-]

It also makes the standard assumption that AI will be implemented on, and stable on, the von Neumann-style computing architecture.

Of course, if it's not, it could port itself to such an architecture if doing so is advantageous.

Comment author: whpearson 04 December 2009 12:40:17AM *  0 points [-]

Would you agree that one possible route to uFAI is human inspired?

Human-inspired systems might have the same or a similarly high fallibility rate as humans (from emulating neurons, or just random experimentation at some level), and giving such a system access to its own machine code and low-level memory would not be a good idea. Most changes are likely to be bad.

So if an AI did manage to port its code, it would have to find some way of preventing/discouraging the copied AI in the x86-based arch from playing with the ultimate mind-expanding/destroying drug that is machine code modification. This is what I meant about stability.

Comment author: [deleted] 06 December 2009 05:56:16AM *  0 points [-]

Er, I can't really give a better rebuttal than this: http://www.singinst.org/upload/LOGI//levels/code.html

Comment author: whpearson 06 December 2009 09:54:35AM 0 points [-]

What point are you rebutting?

Comment author: [deleted] 09 December 2009 03:06:43AM 0 points [-]

The idea that a greater portion of possible changes to a human-style mind are bad, compared with changes of equal magnitude to a von Neumann-style mind.

Comment author: whpearson 09 December 2009 09:48:46AM 0 points [-]

Most random changes to a von Neumann-style mind would be bad as well.

It's just that a von Neumann-style mind is unlikely to make the random mistakes that we do, or at least that is Eliezer's contention.

Comment author: [deleted] 10 December 2009 04:07:16AM 0 points [-]

I can't wait until there are uploads around to make questions like this empirical.

Comment author: Johnicholas 03 December 2009 06:45:42PM *  0 points [-]

Let me point out that we (humanity) do actually have some experience with this scenario. Right now, mobile code that spreads across a network, without effective controls by its author on the bounds of its expansion, is a worm. If we have experience, we should mine it for concrete predictions and countermeasures.

General techniques against worms might include: isolated networks, host diversity, rate-limiting, and traffic anomaly detection.

Are these low-cost/high-return existential risk reduction techniques?

Comment author: Vladimir_Nesov 03 December 2009 07:02:10PM *  0 points [-]

General techniques against worms might include: isolated networks, host diversity, rate-limiting, and traffic anomaly detection.

Are these low-cost/high-return existential risk reduction techniques?

I can't imagine having any return, at any cost, from protection against the spreading of AI on the Internet (even in a perfect world, AI can still produce value, e.g. earn money online, and so buy access to more computing resources).

Comment author: Johnicholas 03 December 2009 07:27:23PM 1 point [-]

Your statement sounds a bit overgeneralized - but you probably have a point.

Still, would you indulge me in some idle speculation? Maybe there could be a species of aliens that evolved to intelligence by developing special microbe-infested organs (which would be firewalled somehow from the rest of the alien itself) and incentivizing the microbial colonies somehow to solve problems for the host.

Maybe we humans evolved to intelligence that way - after all, we do have a lot of bacteria in our guts. But then, all the evidence that we have pointing to brains as the information-processing center would have to be wrong. Maybe brains are the firewall organ! Memes are sort of like microbes, and they're pretty well "firewalled" (genetic engineering is a meme-complex that might break out of the jail).

The notion of creating an ecology of entities, and incentivizing them to produce things that we value, might be a reasonable strategy, one that we humans have been using for some time.

Comment author: Vladimir_Nesov 03 December 2009 07:51:59PM *  1 point [-]

I can't see how this comment relates to the previous one. It seems to start an entirely new conversation. Also, the metaphor with brains and microbes doesn't add understanding for me; I can only address the last paragraph on its own.

The notion of creating an ecology of entities, and incentivizing them to produce things that we value, might be a reasonable strategy, one that we humans have been using for some time.

The crucial property of AIs making them a danger is (eventual) autonomy, not even rapid coming to power. Once the AI, or a society ("ecology") of AIs, becomes sufficiently powerful to ignore vanilla humans, its values can't be significantly influenced, and most of the future is going to be determined by those values. If those values are not good from the point of view of human values, the future is lost to us; it has no goodness. The trick is to make sure that the values of such an autonomous entity are a very good match with our own, at some point where we still have a say in what they are.

Talk of "ecologies" of different agents creates an illusion of continuous control. The standard intuitive picture has little humans at the lower end with a network of gradually more powerful and/or different agents stretching out from them. But how much is really controlled by that node? Its power has no way of "amplifying" as you go through the network: if only humans and a few other agents share human values, these values will receive very little payoff. This is also not sustainable: over time, one should expect preference of agents with more power to gain in influence (which is what "more power" means).

The best way to win this race is to not create different-valued competitors that you don't expect to be able to turn into your own almost-copies, which seems infeasible for all the scenarios I know of. FAI is exactly about devising such a copycat, and if you can show how to do that with "ecologies", all power to you, but I don't expect anything from this line of thought.

Comment author: Johnicholas 03 December 2009 10:04:58PM -1 points [-]

To explain the relation, you said: "I can't imagine having any return [...from this idea...] even in a perfect world, AI can still produce value, e.g. earn money online."

I was trying to suggest that in fact there might be a path to Friendliness by installing sufficient safeguards that the primary way a software entity could replicate or spread would be by providing value to humans.

Comment author: Vladimir_Nesov 04 December 2009 12:09:46AM *  3 points [-]

In the comment above, I explained why what AI does is irrelevant, as long as it's not guaranteed to actually have the right values: once it goes unchecked, it just reverts to whatever it actually prefers, be it in a flurry of hard takeoff or after a thousand years of close collaboration. "Safeguards", in every context I saw, refer to things that don't enforce values, only behavior, and that's not enough. Even the ideas for enforcement of behavior look infeasible, but the more important point is that even if we win this one, we still lose eventually with such an approach.

Comment author: Johnicholas 04 December 2009 03:12:41AM 0 points [-]

My symbiotic-ecology-of-software-tools scenario was not a serious proposal as the best strategy for Friendliness. I was trying to increase the plausibility of SOME return at SOME cost, even given that AIs could produce value.

I seem to have stepped onto a cached thought.

Comment author: Vladimir_Nesov 04 December 2009 03:16:34AM 1 point [-]

I'm afraid I see the issue as clear-cut, you can't get "some" return, you can only win or lose (probability of getting there is of course more amenable to small nudges).

Comment author: wedrifid 04 December 2009 04:23:28AM 0 points [-]

I seem to have stepped onto a cached thought.

Making such a statement significantly increases the standard of reasoning I expect from a post. That is, I expect you to be either right or at least a step ahead of the one with whom you are communicating.

Comment author: Wei_Dai 03 December 2009 11:46:53PM 6 points [-]

No, these are high-cost/low-return existential risk reduction techniques. Major corporations and governments already have very high incentives to protect their networks, but despite spending billions of dollars, they're still being frequently penetrated by human attackers, who are not even necessarily professionals. Not to mention the hundreds of millions of computers on the Internet that are unprotected because their owners have no idea how to protect them, or because they don't contain information that their owners consider especially valuable.

I got into cryptography partly because I thought it would help reduce the risk of a bad Singularity. But while cryptography turned out to work relatively well (against humans anyway), the rest of the field of computer security is in terrible shape, and I see little hope that the situation would improve substantially in the next few decades.

Comment author: Jonii 03 December 2009 04:10:08PM 1 point [-]

I'm interested in values. Rationality is usually defined as something like: an agent tries to maximize its own utility function. But humans, as far as I can tell, don't really have anything like "values" besides "stay alive, get immediately satisfying sensory input".

This, afaict, results in lip service to the "greater good": people just select some nice values that they signal they want to promote, when in reality they haven't done the math by which these selected "values" derive from those "stay alive"-like values. And so their actions seem irrational, but only because they signal having values they don't actually have or care about.

This probably boils down to finding something to protect, but overall this issue is really confusing.

Comment author: Jack 03 December 2009 08:29:09PM 2 points [-]

I'm confused. Have you never seen long-term goal directed behavior?

Comment author: Jonii 03 December 2009 09:02:33PM 2 points [-]

I'm not sure, maybe, but more of a problem here is selecting your goals. The choice seems to be arbitrary, and as far as I can tell, human psychology doesn't really even support having value systems that go deeper than that "lip service" + conformism state.

But I'm really confused when it comes to this, so I thought I could try describing my confusion here :)

Comment author: Jack 03 December 2009 09:52:57PM *  0 points [-]

I think you need to meet better humans. Or just read about some.

John Brown, Martin Luther, Galileo Galilei, Abraham Lincoln, Charles Darwin, Mohandas Gandhi

Can you make sense of those biographies without going deeper than "lip service" + conformism state?

Edit: And I don't even necessarily mean that these people were supremely altruistic or anything. You can add Adolf Hitler to the list too.

Comment author: Jonii 03 December 2009 10:19:38PM 4 points [-]

Can you make sense of those biographies without going deeper than "lip service" + conformism state?

Dunno, haven't read any of those. But if you're sure that something like that exists, I'd like to hear how it is achievable given human psychology.

I mean, paperclip maximizer is seriously ready to do anything to maximize paperclips. It really takes the paperclips seriously.

On the other hand, there are no humans that seem to care about anything in particular that's going on in the world. People are suffering and dying, misfortune happens, animals go extinct, and relatively few do anything about it. Many claim they're concerned and that they value human life and happiness, but if doing something requires going beyond the safe zone of conformism, people just don't do it. The best way I've figured out to overcome this is to manipulate that safe zone to allow more actions, but it would seem that many people think they know better. I just don't understand what.

Comment author: Jack 03 December 2009 11:09:49PM 0 points [-]

Dunno, haven't read any of those. But if you're sure that something like that exists, I'd like to hear how it is achievable given human psychology.

Biographies as in the stories of their lives, not as in books about those stories. Try Wikipedia; these aren't obscure figures.

On the other hand, there are no humans that seem to care about anything in particular that's going on in the world.

This is way too strong, isn't it? I also don't think the reason a lot of people ignore these tragedies has as much to do with conformism as it does with self-interest. People don't want to give up their vacation money. If anything, there is social pressure in favor of sacrificing for moral causes. As for values, I think most people would say that the fact that they don't do more is a flaw: "If I was a better person I would do x" or "Wow, I respect you so much for doing x" or "I should do x but I want y so much." I think it is fair to interpret these statements as second-order desires that represent values.

Comment author: Jonii 03 December 2009 11:33:11PM 2 points [-]

Remember what I said about "lip service"?

If they want to care about stuff, that kinda implies that they don't actually care about stuff (yet). Also, based on simple psychology, someone who chooses a spot in the conformist zone that requires giving lip service to something creates cognitive dissonance, which easily produces a second-order desire to want what you claim you want. But what is frightening here is how utterly arbitrary this choice of values is. If you'd judged another spot to be cheaper, you'd need to modify your values in a different way.

In both cases, though, it seems that people really rarely move at all towards actually caring about something.

Comment author: Jack 04 December 2009 12:08:08AM 0 points [-]

What is a conformist zone and why is it spotted?

Remember what I said about "lip service"?

Lip service is "Oh, what is happening in Darfur is so terrible!". That is different from "If I was a better person I'd help the people of Darfur" or "I'm such a bad person, I bought a t.v. instead of giving to charity". The first signals empathy the second and third signal laziness or selfishness (and honesty I guess).

If they want to care about stuff, that kinda implies that they don't actually care about stuff (yet).

Why do values have to produce first-order desires? For that matter, why can't they be socially constructed norms which people are rewarded for buying into? When people do have first-order desires that match these values, we name those people heroes. Actually sacrificing for moral causes doesn't get you ostracized; it gets you canonized.

But what is frightening here is how this choice of values is arbitrary to the ultimate.

Not true. The range of values in the human community is quite limited.

On both cases though, it seems that people really rarely move any bit towards actually caring about something.

People are rarely complete altruists. But that doesn't mean that they don't care about anything. The world is full of broke artists who could pay for more food, drugs and sex with a real job. These people value art.

Comment author: Jonii 04 December 2009 11:18:58AM 0 points [-]

Lip service is "Oh, what is happening in Darfur is so terrible!". That is different from "If I was a better person I'd help the people of Darfur" or "I'm such a bad person, I bought a t.v. instead of giving to charity". The first signals empathy the second and third signal laziness or selfishness (and honesty I guess).

Both are hollow words anyway. Both imply that you care, when you really don't. There are no real actions.

Why do values have to produce first-order desires?

Because, uhm, if you really value something, you'd probably want to do something? Not "want to want", or anything, but really care about that stuff which you value. Right?

For that matter, why can't they be socially constructed norms which people are rewarded for buying into?

Sure they can. I expressed this as safe zone manipulation: attempting to modify your environment so that your conformist behavior leads to working for some value.

The point here is that actually caring about something, and working towards something due to arbitrary choice and social pressure, are quite different things. Since you seem to advocate the latter, I'm assuming that we both agree that people rarely care about anything, and that most actions are the result of social pressure and of stuff not directly related to actually caring about or valuing anything.

Which brings me back to my first point: Why does it seem that many people here actually care about the world - as a paperclip maximizer cares about paperclips? Just an optical illusion and a conscious effort to appear as a rational agent valuing the world, or something else?

Comment author: Jonii 03 December 2009 11:11:58PM 1 point [-]

I could go on and state that I'm well aware that the world is complicated. It's difficult to estimate where our choices lead us, since the net of causes and effects is complex and requires a lot of thinking to grasp. The heuristics the human brain uses exist pretty much because of that. This means that it's difficult to figure out how to do anything besides staying in the safe zone that you know to work at least somehow.

However, I still think there's something missing here. This just doesn't look like a world where people particularly care about anything at all. Even if it were often useful to stay in a safe zone, there doesn't really seem to be any easy way to snap them out of it. No magic word, no violation of any sort of values makes anyone stand up and fight. I could literally tell people that millions are dying in vain (aging) or that the whole world is at stake (existential risks), and most people simply don't care.

At least, that's how I see it. I figure that the rare exceptions to the rule can be explained as the cost of signalling something, the requirements of the spot in the conformist space you happen to occupy, or something like that.

I'm not particularly fond of this position, but I'm simply lacking a better alternative.

Comment author: Jonii 07 December 2009 01:42:11PM 0 points [-]

So, I've been thinking about this for some time now, and here's what I've got:

First, the point here is to self-reflect until you want what you really want. This presumably converges to some specific set of first-order desires for each one of us. However, now I'm a bit lost on what we call "values": are they the set of first-order desires we have (or don't?), the set of first-order desires we would reach after an infinity of self-reflection, or the set of first-order desires we know we want to have at any given time?

As far as I can tell, akrasia would be a subproblem of this.

So, this should be about right. However, I think it's weird that people here talk a bit about akrasia and how to achieve those higher-order desires, but I haven't seen anything about actually reflecting on and updating what you want. It seems to me that people trust a tiny bit too much in the power of cognitive dissonance to fix the gap between wanting to want and actually wanting, which results in the lack of actual desire to achieve what you know you should want (akrasia).

I really dunno how to overcome this, but this gap seems worth discussing.

Also, since we need an eternity of self-reflection to reach what we really want, this looks kinda bad for FAI: figuring out where our self-reflection would converge at infinity seems pretty much impossible to compute, and so we're left with compromises that can, and probably will, eventually lead to something we really don't want.

Comment author: Matt_Simpson 03 December 2009 08:44:10PM 1 point [-]

Is there a proof anywhere that Occam's razor is correct? More specifically, that Occam priors are the correct priors. Going from the conjunction rule to P(A) >= P(B & C) when A and B&C are equally favored by the evidence seems simple enough (where A, B, and C are atomic propositions), but I don't (immediately) see how to get from here to an actual number that you can plug into Baye's rule. Is this just something that is buried in a textbook on information theory?

On that note, assuming someone had a strong background in statistics (PhD level) and little to no background in computer science outside of a stat computing course or two, how much computer science (or other fields) would they have to learn to be able to learn information theory?

Thanks to anyone who bites

Comment author: Zack_M_Davis 03 December 2009 09:31:43PM 1 point [-]

I found Rob Zhara's comment helpful.

Comment author: Matt_Simpson 06 December 2009 07:02:13AM 0 points [-]

Thanks. I suppose a mathematical proof doesn't exist, then.

Comment author: wedrifid 03 December 2009 10:05:53PM 0 points [-]

actual number that you can plug into Baye's rule

Bayes' rule.

Comment author: timtyler 09 December 2009 06:15:51AM 0 points [-]

Occam's razor is dependent on a descriptive language / complexity metric (so there are multiple flavours of the razor).

Unless a complexity metric is specified, the first question seems rather vague.
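
For concreteness, one standard way to get an actual number (Solomonoff's construction, sketched loosely): fix a prefix-free description language and set P(h) proportional to 2^-L(h), where L(h) is the bit-length of the shortest description of h. The Kraft inequality guarantees these weights sum to at most 1, so they can be normalized; a hypothesis that takes 10 extra bits to state then gets 2^-10, about a thousandth, of the prior weight. The part with no proof is the choice of description language itself, which is where the "Occam" assumption lives.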

Comment author: Zack_M_Davis 04 December 2009 01:33:47AM 1 point [-]

With Chanukah right around the corner, it occurs to me that "Light One Candle" becomes a transhumanist/existential-risk-reduction song with just a few line edits.

Light one candle for all humankind's children
With thanks that their light didn't die
Light one candle for the pain they endured
When the end of their world was surmised
Light one candle for the terrible sacrifice
Justice and freedom demand
But light one candle for the wisdom to know
When the singleton's time is at hand

Don't let the light go out! (&c.)

Comment author: Jordan 04 December 2009 02:34:37AM 3 points [-]

Whether or not a singleton is the best outcome, I'm way too uncomfortable with the idea to be singing songs of praise about it.

Comment author: Zack_M_Davis 04 December 2009 03:13:20AM 1 point [-]

Upvoted. I'm actually really uncomfortable with the idea, too. My comment above is meant in a silly and irreverent manner (cf. last month), and the substitution for "peacemaker" was too obvious to pass up.

Comment author: JamesAndrix 04 December 2009 07:11:16AM 2 points [-]

http://scicom.ucsc.edu/SciNotes/0901/pages/geeks/geeks.html

" They told them that half the test generally showed gender differences (though they didn't mention which gender it favored), and the other half didn't.

Women and men did equally well on the supposedly gender-neutral half. But on the sexist section, women flopped. They scored significantly lower than on the portion they thought was gender-blind."

Comment author: CronoDAS 05 December 2009 06:45:51AM 2 points [-]

I'm looking for a certain quote I think I may have read on either this blog or Overcoming Bias before the split. It goes something like this: "You can't really be sure evolution is true until you've listened to a creationist for five minutes."

Ah, never mind, I found it.

"In a way, no one can really trust the theory of natural selection until after they have listened to creationists for five minutes; and then they know it's solid."

I'd like a pithier way of phrasing it, though, than the original quote.

Comment deleted 05 December 2009 01:00:21PM [-]
Comment author: wedrifid 05 December 2009 01:18:09PM 1 point [-]

I strongly suspect that lesswrong.com has an ideological bias in favor of "morality."

It does.

There has been no proof that rationality requires morality. Yet I suspect that posts coming from a position of moral nihilism would not be welcomed.

In general they are not. But I find that a high-quality, clearly rational reply that doesn't adopt the politically correct morality will hover around 0 instead of (say) 8. You can then post a couple of quotes to get your karma fix if desired.

Comment author: Mario 05 December 2009 08:33:14PM 7 points [-]

I'm looking for a particular fallacy or bias that I can't find on any list.

Specifically, this is when people say "one more can't hurt;" like a person throwing an extra piece of garbage on an already littered sidewalk, a gambler who has lost nearly everything deciding to bet away the rest, a person in bad health continuing the behavior that caused the problem, etc. I can think of dozens of examples, but I can't find a name. I would expect it to be called the "Lost Cause Fallacy" or the "Fallacy of Futility" or something, but neither seems to be recognized anywhere. Does this have a standard name that I don't know, or is it so obvious that no one ever bothered to name it?

Comment author: [deleted] 07 December 2009 05:43:06AM 3 points [-]

Just thought I'd mention this: as a child, I detested praise. (I'm guessing it was too strong a stimulus, along with such things as asymmetry, time being a factor in anything, and a mildly loud noise ceasing.) I wonder how it's affected my overall development.

Incidentally, my childhood dislike of asymmetry led me to invent the Thue-Morse sequence, on the grounds that every pattern ought to be followed by a reversal of that pattern.
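
For anyone curious, that rule ("append the reversal of everything so far", reading "reversal" as bitwise complement) makes for a very short construction. A sketch:

    def thue_morse(n):
        """Start with 0; repeatedly append the bitwise complement
        of everything so far."""
        s = [0]
        while len(s) < n:
            s += [1 - b for b in s]
        return s[:n]

    print(thue_morse(16))  # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]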

Comment author: Zack_M_Davis 07 December 2009 05:47:45AM 4 points [-]

Incidentally, my childhood dislike of asymmetry led me to invent the Thue-Morse sequence

I love this community.

Comment author: [deleted] 07 December 2009 06:00:40AM 1 point [-]

Can I interpret that as an invitation to send you a friend request on Facebook? >.>

Comment author: Zack_M_Davis 07 December 2009 06:04:57AM 1 point [-]

Um, sure?

Comment author: anonym 07 December 2009 09:31:00AM 1 point [-]

Fascinating. As a child, I also detested praise, and I have always had something bordering on an obsession for symmetry and an aversion to asymmetry.

I hadn't heard of the Thue-Morse sequence until now, but it is quite similar to a sequence I came up with as a child and have tapped out (0 for left hand/foot/leg, 1 for right hand/foot/leg) or silently hummed (or just thought) whenever I was bored or was nervous.

My sequence is:

[0, 1, 1, 0] [1001, 0110, 0110, 1001] [0110 1001 1001 0110, 1001 0110 0110 1001, 1001 0110 0110 1001, 0110 1001 1001 0110] ...

[commas and brackets added to make the pattern obvious]

As a kid, I would routinely get the pattern up into the thousands as I passed the time imagining sounds or lights very quickly going off on either the left (0) or right (1) side.
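
If I'm reading the bracketed pattern right, each group's concatenated text c yields the next group as comp(c), c, c, comp(c). A sketch under that assumption (the function names are made up for illustration):

    def groups(n):
        """Each group is built from the previous group's text c as
        comp(c) + c + c + comp(c), starting from '0110'."""
        comp = lambda s: s.translate(str.maketrans("01", "10"))
        c = "0110"
        out = [c]
        for _ in range(n - 1):
            c = comp(c) + c + c + comp(c)
            out.append(c)
        return out

    print(groups(2))  # ['0110', '1001011001101001']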

Comment author: [deleted] 07 December 2009 09:10:40PM 2 points [-]

Every finite subsequence of your sequence is also a subsequence of the Thue-Morse sequence and vice versa. So in a sense, each is a shifted version of the other; it's just that they're shifted infinitely much in a way that's difficult to define.

Comment author: AdeleneDawner 07 December 2009 12:02:27PM 0 points [-]

Just thought I'd mention this: as a child, I detested praise.

++MeToo;

Comment author: Yorick_Newsome 08 December 2009 04:39:32AM 0 points [-]

I spent much of my childhood obsessing over symmetry. At one point I wanted to be a millionaire solely so I could buy a mansion, because I had never seen a symmetrical suburban house.

Comment author: Vladimir_Nesov 08 December 2009 11:16:13PM *  3 points [-]

David Chalmers surveys the kinds of crazy believed by modern philosophers, as well as their own predictions of the results of the survey.

56% of target faculty responding favor (i.e. accept or lean toward) physicalism, while 27% favor nonphysicalism (for respondents as a whole, the figure is 54:29). A priori knowledge is favored by 71-18%, an analytic-synthetic distinction is favored by 65-27%, Millianism is favored over Fregeanism by 34-29%, while the view that zombies are conceivable but not metaphysically possible is favored over metaphysical possibility and inconceivability by 35-23-16% respectively.

Comment author: Eliezer_Yudkowsky 11 December 2009 07:55:24AM 4 points [-]

Tags now sort chronologically oldest-to-newest by default, making them much more useful for reading posts in order.

Comment author: Johnicholas 11 December 2009 08:16:05PM *  2 points [-]

Ivan Sutherland (inventor of Sketchpad - the first computer-aided drawing program) wrote about how "courage" feels, internally, when doing research or technological projects.

"[...] When I get bogged down in a project, the failure of my courage to go on never feels to me like a failure of courage, but always feels like something entirely dif- ferent. One such feeling is that my research isn't going anywhere anyhow, it isn't that important. Another feeling involves the urgency of something else. I have come to recognize these feelings as “who cares” and “the urgent drives out the important.” [...]"

Comment author: Cyan 13 December 2009 06:38:19PM *  2 points [-]

In two recent comments [1][2], it has been suggested that to combine ostensibly Bayesian probability assessments, it is appropriate to take the mean on the log-odds scale. But Bayes' Theorem already tells us how we should combine information. Given two probability assessments, we treat one as the prior, sort out the redundant information in the second, and update based on the likelihood of the non-redundant information. This is practically infeasible, so we have to do something else, but whatever else it is we choose to do, we need to justify it as an approximation to the infeasible but correct procedure. So, what is the justification for taking the mean on the log-odds scale? Is there a better but still feasible procedure?
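
For concreteness, a minimal sketch of the procedure under discussion (note that averaging on the log-odds scale is the same as taking the geometric mean of the odds):

    import math

    def logit(p):
        return math.log(p / (1 - p))

    def pool(probs):
        """Combine assessments by averaging on the log-odds scale."""
        z = sum(logit(p) for p in probs) / len(probs)
        return 1 / (1 + math.exp(-z))

    print(pool([0.9, 0.99]))  # ~0.968, i.e. odds of sqrt(9 * 99) : 1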

Comment author: Douglas_Knight 13 December 2009 07:43:25PM 1 point [-]

An independent piece of evidence moves the log-odds a constant additive amount regardless of the prior. Averaging log-odds amounts to moving 2/3 of that distance if 2/3 of the people have the particular piece of evidence. It may behave badly if the evidence is not independent, but if all you have are posteriors, I think it's the best you can do.
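
To spell out that first sentence: Bayes' theorem in odds form is O(H|E) = O(H) * P(E|H)/P(E|¬H), so on the log scale each independent piece of evidence adds the constant log[P(E|H)/P(E|¬H)] to whatever the prior log-odds were. Averaging n people's log-odds thus counts each person's private evidence at weight 1/n, which is also why the procedure misbehaves when their evidence overlaps.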

Comment author: whpearson 14 December 2009 10:59:19AM 0 points [-]

Are people interested in discussing bounded memory rationality? I see a fair number of people talking about Solomonoff-type systems, but not much about what a finite system should do.

Comment author: wedrifid 14 December 2009 11:15:05AM 0 points [-]

Are people interested in discussing bounded memory rationality?

Sure. What about it in particular? Care to post some insights?

Comment author: whpearson 14 December 2009 04:17:13PM 0 points [-]

I was thinking about starting with very simple agents. Things like 1 input, 1 output, with 1 bit of memory, and looking at them from a decision theory point of view. Asking questions like "Would we view it as having a goal/decision theory?" If not, what is the minimal agent that we would, and does it make any trade-offs for having a decision theory module in terms of the complexity of the function it can represent?
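
A sketch of how small that design space is, assuming deterministic agents with binary input and output (the indexing convention and the pick of tables[27] below are arbitrary):

    from itertools import product

    # A deterministic agent with 1-bit input, 1-bit output and 1 bit
    # of memory is a function (state, input) -> (next_state, output).
    # There are only 4^4 = 256 such functions, so the whole design
    # space can be enumerated outright.
    tables = list(product(product((0, 1), repeat=2), repeat=4))

    def run(table, inputs, state=0):
        outputs = []
        for x in inputs:
            state, out = table[2 * state + x]
            outputs.append(out)
        return outputs

    print(len(tables))                    # 256
    print(run(tables[27], [0, 1, 1, 0]))  # [0, 1, 1, 0] -- this one echoes its input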

Comment author: wedrifid 17 December 2009 12:41:56PM 0 points [-]

Things like 1 input, 1 output, with 1 bit of memory, and looking at them from a decision theory point of view. Asking questions like "Would we view it as having a goal/decision theory?"

I tend to let other people draw those lines up. It just seems like defining words and doesn't tend to spark my interest.

If not, what is the minimal agent that we would, and does it make any trade-offs for having a decision theory module in terms of the complexity of the function it can represent?

I would be interested to see where you went with your answer to that one.

Comment author: whpearson 17 December 2009 12:28:27PM 0 points [-]

Would my other reply to you be an interesting/valid way of thinking about the problem? If not, what were you looking for?

Comment author: wedrifid 17 December 2009 12:50:54PM 0 points [-]

Would my other reply to you be an interesting/valid way of thinking about the problem? If not, what were you looking for?

Pardon me. Missed the reply. Yes, I'd certainly engage with that subject if you fleshed it out a bit.

Comment author: Johnicholas 15 December 2009 04:44:35PM 1 point [-]

Repost from Bruce Schneier's CRYPTO-GRAM:

The Psychology of Being Scammed

This is a very interesting paper: "Understanding scam victims: seven principles for systems security," by Frank Stajano and Paul Wilson. Paul Wilson produces and stars in the British television show The Real Hustle, which does hidden camera demonstrations of con games. (There's no DVD of the show available, but there are bits of it on YouTube.) Frank Stajano is at the Computer Laboratory of the University of Cambridge.

The paper describes a dozen different con scenarios -- entertaining in itself -- and then lists and explains six general psychological principles that con artists use:

  1. The distraction principle. While you are distracted by what retains your interest, hustlers can do anything to you and you won't notice.

  2. The social compliance principle. Society trains people not to question authority. Hustlers exploit this "suspension of suspiciousness" to make you do what they want.

  3. The herd principle. Even suspicious marks will let their guard down when everyone next to them appears to share the same risks. Safety in numbers? Not if they're all conspiring against you.

  4. The dishonesty principle. Anything illegal you do will be used against you by the fraudster, making it harder for you to seek help once you realize you've been had.

  5. The deception principle. Things and people are not what they seem. Hustlers know how to manipulate you to make you believe that they are.

  6. The need and greed principle. Your needs and desires make you vulnerable. Once hustlers know what you really want, they can easily manipulate you.

It all makes for very good reading.

The paper
The Real Hustle

Comment author: Wei_Dai 17 December 2009 12:11:46PM 7 points [-]

What are the implications to FAI theory of Robin's claim that most of what we do is really status-seeking? If an FAI were to try to extract or extrapolate our values, would it mostly end up with "status" as the answer and see our detailed interests, such as charity or curiosity about decision theory, as mere instrumental values?

Comment author: EStokes 18 December 2009 02:43:54PM 3 points [-]

I think it's kinda like inclusive genetic fitness: it's the reason you do things, but you're (usually) not consciously striving for an increased amount of it. So I don't think it could be called a terminal value, as such...

Comment author: Wei_Dai 20 December 2009 08:49:13AM 3 points [-]

I had thought of that, but, if you consider a typical human mind as a whole instead of just the conscious part, it seems clear that it is striving for increased status. The same cannot be said for inclusive fitness, or at least the number of people who do not care about having higher status seems much lower than the number of people who do not care about having more offspring.

I think one of Robin's ideas is that unconscious preferences, not just conscious ones, should matter in ethical considerations. Even if you disagree with that, how do you tell an FAI how to distinguish between conscious preferences and unconscious ones?

Comment author: wedrifid 18 December 2009 02:59:15PM 3 points [-]

What are the implications to FAI theory of Robin's claim that most of what we do is really status-seeking? If an FAI were to try to extract or extrapolate our values, would it mostly end up with "status" as the answer and see our detailed interests, such as charity or curiosity about decision theory, as mere instrumental values?

If it went that far it would also go the next step. It would end up with "getting laid".

Comment author: MrHen 17 December 2009 05:26:09PM 1 point [-]

It has been awhile since I have been around, so please ignore if this has been brought up before.

I would appreciate it if offsite links were a different color. The main reason for this is the way I skim online articles. Links are generally more important text, and if I see a link for [interesting topic] it helps me to know at a glance whether there will be a good read with a LessWrong discussion at the end, as opposed to a link to Amazon where I get to see the cover of a book.

Comment author: Alicorn 17 December 2009 05:40:10PM 0 points [-]

Firefox (or maybe one of the million extensions that I've downloaded and forgotten about) has a feature where, if you mouseover a link, the URL linked to will appear in the lower bar of the window. A different color would be easier, though.

Comment author: MrHen 18 December 2009 02:37:56PM 2 points [-]

I am posting this in the open thread because I assume that somewhere in the depths of posts and comments there is an answer to the question:

If someone thought we lived in an internally consistent simulation that is undetectable and inescapable, is it even worth discussing? Wouldn't the practical implications of such a simulation imply the same things as the material world/reality/whatever you call it?

Would it matter if we dropped "undetectable" from the proposed simulation? At what point would it begin to matter?

Comment author: wedrifid 18 December 2009 02:41:54PM *  -3 points [-]

If someone thought we lived in an internally consistent simulation that is undetectable and inescapable, is it even worth discussing?

Not particularly.

Wouldn't the practical implications of such a simulation imply the same things as the material world/reality/whatever you call it?

Absolutely.

Would it matter if we dropped "undetectable" from the proposed simulation? At what point would it begin to matter?

The point where you find a way to hack it, escape from the simulation and take over (their) world.

Comment author: whpearson 19 December 2009 11:58:08AM 0 points [-]

Is the status obsession that Robin Hanson finds all around him partially due to the fact that we live in a part of the world where our immediate needs are easily met? So we have a lot of time and resources to devote to signaling compared to times past.

Comment author: wedrifid 19 December 2009 01:28:29PM 1 point [-]

The manner of status obsession that Robin Hanson finds all around him is definitely due to the fact that we live in a part of the world where our immediate needs are easily met. Particularly if you are considering signalling.

I think you are probably right in general too. Although a lot of the status obsession remains even in resource-scarce environments, it is less about signalling your ability to conspicuously consume or do irrational costly things. It's more being obsessed with having enough status that the other tribe members don't kill you to take your food (for example).

Comment author: timtyler 19 December 2009 09:38:46PM 1 point [-]
Comment author: Zack_M_Davis 20 December 2009 11:20:05AM 6 points [-]

Does anyone here think they're particularly good at introspection or modeling themselves, or have a method for training up these skills? It seems like it would be really useful to understand more about the true causes of my behavior, so I can figure out what conditions lead to me being good and what conditions lead to me behaving poorly, and then deliberately set up good conditions. But whenever I try to analyze my behavior, I just hit a brick wall---it all just feels like I chose to do what I did out of my magical free will. Which doesn't explain anything.

If you know what you want, and then you choose actions that will help you get it, then that's simple enough to analyze: you're just rational, that's all. But when you would swear with all your heart that you want some simple thing, but are continually breaking down and acting dysfunctionally---well, clearly something has gone horribly wrong with your brain, and you should figure out the problem and fix it. But if you can't tell what's wrong because your decision algorithm is utterly opaque, then what do you do?

Comment author: wedrifid 20 December 2009 02:29:26PM 5 points [-]

Does anyone here think they're particularly good at introspection or modeling themselves, or have a method for training up these skills? It seems like it would be really useful to understand more about the true causes of my behavior, so I can figure out what conditions lead to me being good and what conditions lead to me behaving poorly, and then deliberately set up good conditions. But whenever I try to analyze my behavior, I just hit a brick wall---it all just feels like I chose to do what I did out of my magical free will. Which doesn't explain anything.

My suggestion is focussing your introspection on working out what you really want. That is, keep investigating what you really want until such time as the phrases 'me behaving poorly' and 'being good' sound like something in a foreign language, that you can understand only by translating.

You may be thinking "clearly something has gone horribly wrong with my brain" but your brain is thinking "Something is clearly wrong with my consciousness. It is trying to make me do all this crazy shit. Like the sort of stuff we're supposed to pretend we want because that is what people 'Should' want. Consciousnesses are the kind of things that go around believing in God and sexual fidelity. That's why I'm in charge, not him. But now he's thinking he's clever and is going to find ways to manipulate me into compliance. F@#@ that s#!$. Who does he think he is?"

When trying to work effectively with people, empathy is critical. You need to be able to understand what they want and be able to work with each other for mutual benefit. Use the same principle with yourself. Once your brain believes you actually know what it (ie. you) wants and are on approximately the same page, it may well start trusting you and not feel obliged to thwart your influence. Then you can find a compromise that allows you to get that 'simple thing' you want without your instincts feeling that some other priority has been threatened.

Comment author: whpearson 20 December 2009 02:39:37PM *  3 points [-]

I tend to think of my brain as a thing with certain needs. Companionship, recognition, physical contact, novelty, etc. Activities that provide these tend to persist. Figure out what your dysfunctional actions provide you in terms of your needs. Then try and find activities that provide these but aren't so bad and try and replace the dysfunctional bits. Also change the situation you are in so that the dysfunctional default actions don't automatically trigger.

My dream is to find a group of like minded people that I can socialise and work with. SIAI is very tempting in that regard.

Comment author: Alicorn 20 December 2009 03:37:09PM 3 points [-]

People who watch me talking about myself sometimes say I'm good at introspection, but I think about half of what I do is making up superstitions so I have something doable to trick myself into making some other thing, previously undoable, doable. ("Clearly, the only reason I haven't written my paper is that I haven't had a glass of hot chocolate, when I'm cold and thirsty and want refined sugar." Then I go get a cup of cocoa. Then I write my paper. I have to wrap up the need for cocoa in a fair amount of pseudoscience for this to work.) This is very effective at mood maintenance for me - I was on antidepressants and in therapy for a decade as a child, and quit both cold turkey in favor of methods like this and am fine - but I don't know which (if, heck, any) of my conclusions that I come to this way are "really true" (that is, if the hot chocolate is a placebo or not). They're just things that pop into my head when I think about what my brain might need from me before it will give back in the form of behaving itself.

You have to take care of your brain for it to be able to take care of you. If it won't tell you what it wants, you have to guess. (Or have your iron levels checked :P)

Comment author: byrnema 22 December 2009 10:28:52PM *  1 point [-]

I think this is close to the question that has been lurking in my mind for some time: Why optimize our strategies to achieve what we happen to want, instead of just modifying what we want?

Suppose, for my next question, that it was trivial to modify what we want. Is there some objective meta-goal we really do need to pay attention to?

Comment author: Jack 26 December 2009 12:02:51AM 8 points [-]

Sharing my Christmas (totally non-supernatural) miracle:

My theist girlfriend on Christmas Eve: "For the first time ever I went to mass and thought it was ridiculous. I was just like, this is nuts. The priest was like 'oh, we have to convert the rest of the world, the baby Jesus spoke to us as an infant without speaking, etc.' I almost laughed."

Comment author: Eliezer_Yudkowsky 26 December 2009 12:12:19AM 2 points [-]

This made me say "Awwwwwwww..."

Did LW play a part, or was she just browsing the Internet?

Comment author: Jack 26 December 2009 12:45:23AM 2 points [-]

Well, I think I was the first vocal atheist she had ever met, so arguments with me, and my making fun of superstition while not being a bad person, were probably crucial. Some Less Wrong stuff probably got to her through me, though. I should find something to introduce the site to her, though I doubt she would ever spend a lot of time here.

Comment author: Psy-Kosh 27 December 2009 03:42:14PM 3 points [-]

If, say, I have a basic question, is it appropriate to post it to the open thread, to a top-level post, or what? I.e., say I'm working through Pearl's Causality and am having trouble deriving something... or say I've stared at the Wikipedia pages for ages and STILL don't get the difference between Minimum Description Length and Minimum Message Length... is LW an appropriate place to go "please help me understand this", and if so, should I request it in a top-level post or in an open thread or...

More generally: LW is about developing human rationality, but is it appropriate for questions about already solved aspects of rationality? like "please help me understand the math for this aspect of reasoning" or even "I'm currently facing this question in my life or such, help me reason through this please?"

Thanks.

Comment author: DanArmak 27 December 2009 03:46:51PM 2 points [-]

More generally: LW is about developing human rationality, but is it appropriate for questions about already solved aspects of rationality?

Most posts here are written by someone who understands an aspect of rationality, to explain it to those who don't. I see no reason not to ask questions in the open thread. I think they should be top-level posts only if you anticipate a productive discussion around them; most already-solved questions can be answered with a single comment and that would be that, so no need for a separate post.

Comment author: Kaj_Sotala 27 December 2009 03:57:56PM 1 point [-]

I think that kind of a question is fine in the Open Thread.