Open Thread, August 2010

4 Post author: NancyLebovitz 01 August 2010 01:27PM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (676)

Comment author: NancyLebovitz 01 August 2010 02:13:33PM 12 points [-]

Letting Go by Atul Gawande is a description of typical end of life care in the US, and how it can and should be done better.

Typical care defaults to taking drastic measures to extend life, even if the odds of success are low and the process is painful.

Hospice care, which focuses on quality of life, not only results in more comfort, but also either no loss of lifespan or a somewhat longer life, depending on the disease. And it's a lot cheaper.

The article also describes the long careful process needed to find out what people really want for the end of their life-- in particular, what the bottom line is for them to want to go on living.

This is of interest for Less Wrong, not just because Gawande is a solidly rationalist writer, but because a lot of the utilitarian talk here goes in the direction of restraining empathic impulses.

Here we have a case where empathy leads to big utilitarian wins, and where treating people as having a unified consciousness, if you give it a chance to operate, works out well.

As good as hospices sound, I'm concerned that if they get a better reputation, less competent organizations calling themselves hospices will spring up.

From a utilitarian angle, I wonder if those drastic methods of treatment sometimes lead to effective methods, and if so, whether the information could be gotten more humanely.

Comment author: Rain 01 August 2010 02:28:57PM 6 points [-]

End of life regulation is one reason cryonics is suffering, as well: without the ability to ensure preservation while the brain is still relatively healthy, the chances diminish significantly. I think it'd be interesting to see cryonics organizations put field offices in countries or states where assisted suicide is legal. Here's a Frontline special on suicide tourists.

Comment author: daedalus2u 01 August 2010 03:49:36PM 3 points [-]

The framing of the end of life issue as a gain or a loss as in the monkey token exchange probably makes a gigantic difference in the choices made.

http://lesswrong.com/lw/2d9/open_thread_june_2010_part_4/2cnn?c=1

When you feel you are in a desperate situation, you will do desperate things and clutch at straws, even when you know those choices are irrational. I think this is the mindset behind the clutching at straws that quacks exploit with CAM, as in the Gonzalez Protocol for pancreatic cancer.

http://www.sciencebasedmedicine.org/?p=1545

It is actually worse than doing nothing, worse than doing what mainstream medicine recommends, but because there is the promise of complete recovery (even if it is a false promise), that is what people choose, based on their irrational aversion to risk.

Comment author: kmeme 01 August 2010 06:28:10PM *  2 points [-]

I would like feedback on my recent blog post:

http://www.kmeme.com/2010/07/singularity-is-always-steep.html

It's simplistic for this crowd, but something that bothered me for a while. When I first saw Kurzweil speak in person (GDC 2008) he of course showed both linear and log scale plots. But I always thought the log scale plots were just a convenient way to fit more on the screen, that the "real" behavior was more like the linear scale plot, building to a dramatic steep slope in the coming years.

Instead I now believe in many cases the log plot is closer to "the real thing", or at least to how we perceive that thing. For example, in the post I talk about computational capacity. I believe the exponential increase in capacity translates into a perceived linear increase in utility. A computer twice as fast is only incrementally more useful, in terms of what applications can be run. This holds true today and will hold true in 2040 or any other year.

Therefore computational utility is incrementally increasing today and will be incrementally increasing in 2040 or any future date. It's not building to some dramatic peak.

None of this says anything against the possibility of a Singularity. If you pass the threshold where machine intelligence is possible, you pass it, whatever the perceived rate of progress at the time.
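The capacity/utility claim can be put as a toy model (assuming, purely for illustration, that perceived utility is logarithmic in capacity):

```python
import math

# Toy model: capacity doubles every year; perceived utility is assumed
# logarithmic in capacity, so each doubling adds a constant increment.
capacity = 1.0
for year in range(6):
    utility = math.log2(capacity)  # grows by exactly 1 per doubling
    print(year, capacity, utility)
    capacity *= 2.0
```

Each year the capacity step gets bigger in absolute terms, but the utility step stays the same size, so there is no dramatic peak to build toward.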

Comment author: JamesAndrix 01 August 2010 07:57:49PM 1 point [-]

This is easier to say when you're near the top of the current curve.

It doesn't affect me much that my computer can't handle hi-def youtube, because I'm just a couple of doubling times behind the state of the art.

But if you were using a computer ten doubling times back, you'd have trouble even just reading lesswrong. Even if you overcame the format and software issues, we'd be trading funny cat videos that are bigger than all your storage. You'd get nothing without a helper god to downsample them.

When the singularity approaches, the doubling time will decrease, for some people. Maybe not for all.

Maybe it will /feel/ like a linear increase in utility for the people whose abilities are being increased right along. For people who are 10 doublings behind and still falling, it will be obvious something is different.

Comment author: kmeme 01 August 2010 11:29:55PM 1 point [-]

Consider $/MIPS available in the mainstream open market. The doubling time of this can't go down "for some people", it can only go down globally. Will this doubling time decrease leading up to the Singularity? Or during it?

I always felt that's what the Singularity was, an acceleration of Moore's Law type progress. But I wrote the post because I think it's easy to see a linear plot of exponential growth and say "look there, it's shooting through the roof, that will be crazy!". But in fact it won't be any crazier than progress is today.

It will require a new growth term, machine intelligence kicking in for example, to actually feel like things are accelerating.

Comment author: JamesAndrix 02 August 2010 04:39:13AM 1 point [-]

It could if, for example, it were only available in large chunks. If you have $50 today you can't get the $/MIPS of a $5000 server. You could maybe rent the time, but that requires a high level of knowledge, existing internet access at some level, and an application that is still meaningful on a remote basis.

The first augmentation technology that requires surgery will impose a different kind of 'cost', and will spread unevenly even among people who have the money.

It's also important to note that a decrease in doubling time would show up as a /bend/ in a log scale graph, not a straight line.
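The straight-line-versus-bend distinction can be checked numerically. Both series below are made up for illustration: one has a constant doubling time, the other a shrinking one, and only the second has nonzero curvature on a log2 scale:

```python
import math

def second_diffs(series):
    # curvature of the series when plotted on a log2 scale
    logs = [math.log2(v) for v in series]
    slopes = [b - a for a, b in zip(logs, logs[1:])]
    return [b - a for a, b in zip(slopes, slopes[1:])]

years = range(8)
constant = [2.0 ** y for y in years]                      # fixed 1-year doubling time
accelerating = [2.0 ** (y + 0.1 * y * y) for y in years]  # doubling time shrinking

print(second_diffs(constant))      # all ~0: a straight line on the log plot
print(second_diffs(accelerating))  # all positive: the /bend/
```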

Comment author: kmeme 02 August 2010 12:37:42PM 2 points [-]

Yes Kurzweil does show a bend in the real data in several cases. I did not try to duplicate that in my plots, I just did straight doubling every year.

I think any bending in the log scale plot could be fairly called acceleration.

But just the doubling itself, while it leads to ever-increasing step sizes, is not acceleration. In the case of computer performance it seems clear exponential growth of power produces only linear growth in utility.

I feel this point is not made clear in all contexts. In presentations I felt some of the linear scale graphs were used to "hype" the idea that everything was speeding up dramatically. I think only the bend points to a "speeding up".

Comment author: Unknowns 02 August 2010 06:43:05AM 0 points [-]

I agree with your post, especially since I expect to win my bet with Eliezer.

Comment author: sketerpot 02 August 2010 07:13:29AM *  1 point [-]

I don't know what this bet is, and I don't see a link anywhere in your post.

Comment author: Unknowns 02 August 2010 07:36:17AM *  0 points [-]

http://wiki.lesswrong.com/wiki/Bets_registry

(I am the original Unknown but I had to change my name when we moved from Overcoming Bias to Less Wrong because I don't know how to access the other account.)

Comment author: gwern 02 August 2010 07:47:23AM *  1 point [-]

Any chance you and Eliezer could set a date on your bet? I'd like to import the 3 open bets to Prediction Book, but I need a specific date. (PB, rightly, doesn't do open-ended predictions.)

e.g. perhaps 2100, well after many Singularitarians expect some sort of AI, and also well after both of your actuarial death dates.

Comment author: Unknowns 02 August 2010 10:18:40AM 0 points [-]

If we agreed on that date, what would happen in the event that there was no AI by that time and both of us are still alive? (These conditions are surely very unlikely but there has to be some determinate answer anyway.)

Comment author: gwern 02 August 2010 11:13:57AM *  2 points [-]

You could either

  1. donate the money to charity under the view 'and you're both wrong, so there!'
  2. say that the prediction is implicitly a big AND - 'there will be an AI by 2100 AND said first AI will not have... etc.', and that the conditions allow 'short-circuiting' when any AI is created; with this change, reaching 2100 is a loss on your part.
  3. Like #2, but the loss is on Eliezer's part (the bet changes to 'I think there won't be an AI by 2100, but if there is, it won't be Friendly and etc.')

I like #2 better since I dislike implicit premises and this (while you two are still relatively young and healthy) is as good a time as any to clarify the terms. But #1 follows more the Long Bets formula.

Comment author: Unknowns 02 August 2010 07:58:55PM 1 point [-]

Eliezer and I are probably about equally confident that "there will not be AI by 2100, and both Eliezer and Unknown will still be alive" is incorrect. So it doesn't seem very fair to select either 2 or 3. So option 1 seems better.

Comment author: NihilCredo 02 August 2010 07:07:13PM 1 point [-]

Did you notice that, as phrased in the link, your bet is about the following event: "[at a certain point in time under a few conditions] it will be interesting to hear Eliezer's excuses"? Technically, all Eliezer will have to do to win the bet will be to write a boring excuse.

Comment author: Unknowns 02 August 2010 07:13:39PM 1 point [-]

Eliezer was the one who linked to that: the bet is about whether those conditions will be satisfied.

Anyway, he has already promised (more or less) not to make excuses if I win.

Comment author: humpolec 01 August 2010 07:04:50PM 2 points [-]

If you have many different (and conflicting, in that they demand undivided attention) interests: if it were possible, would copying yourself in order to pursue them more efficiently satisfy you?

One copy gets to learn drawing, another one immerses itself in mathematics & physics, etc. In time, they can grow very different.

(Is this scenario much different to you than simply having children?)

Comment author: Peter_de_Blanc 01 August 2010 07:22:51PM 4 points [-]

That sounds (to me) better than having children, but not as good as living longer.

Comment author: DanArmak 01 August 2010 07:44:11PM 1 point [-]

Copying has at best zero utility (as regards interests): each copy only indulges in one interest, and I anticipate being only one copy, even if I don't know in advance which one.

How is having children at all similar? 1) children would have different interests; 2) I cannot control (precommit) future children; 3) raising children would be for me a huge negative utility - both emotionally and resource-wise.

Comment author: Jordan 01 August 2010 08:54:04PM *  6 points [-]

Copying has at best zero utility (as regards interests)

This is not true for me. I care about my ideas beyond my own desire to implement them. If I knew there was a passionate and capable person willing to take over some of my ideas (which I'd otherwise not have time for), I'd jump on the opportunity.

Doubly so if the other person was a copy of me, in which case I'd not only have a guarantee on competence, but assurance that the person would be able to relate the story and product to me afterwards (and possibly share the profit).

Comment author: ShardPhoenix 02 August 2010 11:48:34AM *  0 points [-]

Doubly so if the other person was a copy of me, in which case I'd not only have a guarantee on competence, but assurance that the person would be able to relate the story and product to me afterwards (and possibly share the profit).

Interestingly, now that you bring this up, I'm not at all certain that I'd be able to communicate especially effectively with a copy of myself. Probably better than with a randomly selected person, but perhaps not as well as I might hope.

Comment author: JoshuaZ 02 August 2010 01:01:13PM 1 point [-]

Interestingly, now that you bring this up, I'm not at all certain that I'd be able to communicate especially effectively with a copy of myself. Probably better than with a randomly selected person, but perhaps not as well as I might hope.

What makes you reach that conclusion?

Comment author: Jordan 02 August 2010 07:16:41PM 0 points [-]

I think communication would start out good and become amazing over time. I don't communicate with myself completely in English; there are a lot of thoughts that go through unencoded. Having a copy of myself to talk to would force us to encode those raw thoughts as best as possible. This isn't necessarily easy, but I think the really difficult part would already be behind us, namely having the same core thoughts.

Comment author: humpolec 01 August 2010 09:40:31PM 2 points [-]

How is having children at all similar?

I think people can feel a sense of accomplishment when their child achieves something they wanted but never got around to.

Comment author: red75 01 August 2010 07:56:57PM 1 point [-]

Waste of processing power. Having dozens of focuses of attention and corresponding body/brain construction is more efficient.

Comment author: Nisan 01 August 2010 09:26:36PM 0 points [-]

What's the difference between a copy of yourself and an extra "body/brain construction"?

Comment author: humpolec 01 August 2010 09:38:22PM 0 points [-]

I think red75 meant rebuilding yourself into a more "multi-threaded" being. I'm not sure I would want to go in that direction, though - it's hard to imagine what the result would feel like, it probably couldn't even be called conscious in the human sense, but somehow multiply-conscious...

Comment author: red75 02 August 2010 02:10:03PM 0 points [-]

Yes, something like that. But I don't think the consciousness of such a being would be dramatically different, because it should still contain a "central executive" that coordinates the being's overall behavior and controls the direction and distribution of attention, however much more fine-grained than a human's.

Comment author: KrisC 01 August 2010 09:45:10PM *  0 points [-]

Waste of processing power.

Because basic functions are being repeated?

Comment author: red75 02 August 2010 01:03:12PM 0 points [-]

I'd rather say the higher-level functions are excessively redundant. Then there are coordination problems, competition for shared resources (e.g. money, a sexual partner), possible divergence of near- and far-term goals, relatively low in-group communication speed, and possibly fewer cross-domain insights.

Comment author: Leonhart 02 August 2010 08:58:45PM 1 point [-]

sexual partner

Surely you jest.

Comment author: [deleted] 01 August 2010 09:10:49PM 6 points [-]

I wouldn't have problems copying myself as long as I could merge the copies afterwards. However, it might not be possible to have a merge operation for human level systems that both preserves information and preserves sanity. E.g. if one copy started studying philosophy and radically changed its world views from the original, how do you merge this copy back into the original without losing information?

Comment author: NancyLebovitz 01 August 2010 09:29:15PM 1 point [-]

Tentatively-- there'd be a central uberperson which wouldn't be that much like a single human being.

If I had reason to think it was safe, I'd really like to live that way.

Comment author: humpolec 01 August 2010 09:32:23PM 2 points [-]

I agree, I don't think merge is possible in this scenario. I still see some gains, though (especially when communication is possible):

  • I (the copy that does X) am happy because I do what I wanted.
  • I (the other copies) am happy because I partly identify with the other copy (as I would be proud of my child/student?)
  • I (all copies) get results I wanted (research, creative, or even personal insights if the first copy is able to communicate them)

Comment author: [deleted] 01 August 2010 10:45:17PM 2 points [-]

If you don't have the ability to merge, would the copies get equal rights as the original? Or would the original control all the resources and the copies get treated as second class citizens? If the copies were second class citizens, I would probably not fork because this would result in slavery.

If the copies do get equal rights, how do you plan to allocate resources that you had before forking such as wealth and friends? If I split the wealth down the middle, I would probably be OK with the lack of merging. However, I'm not sure how I would divide up social relationships between the copy and the original. If both the original and the copy had to reduce their financial and social capital by half, this might have a net negative utility.

If the goal is just to learn a new skill such as drawing, a more efficient solution might involve uploading yourself without copying yourself and then running the upload faster than realtime. I.e. the upload thinks it has spent a year learning a new skill but only a day has gone by in the real world. However, this trick won't work if the goal involves interacting with others unless they are also willing to run faster than realtime.

Comment author: rwallace 01 August 2010 11:17:16PM 0 points [-]

Do what e.g. Mercurial does: report that the copies are too different for automatic merge, and punt the problem back to the user.

In other words, you are right that there is no solution in the general case, but that should not necessarily deter us from looking for a solution that works in 90% of cases.
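A minimal sketch of that Mercurial-style behavior — a toy three-way merge over key/value states, with all names invented for illustration rather than taken from Mercurial's actual API:

```python
def three_way_merge(base, mine, theirs):
    # Auto-merge wherever at most one side diverged from the common
    # ancestor; punt anything both sides changed back to the user.
    merged, conflicts = {}, []
    for key in sorted(set(base) | set(mine) | set(theirs)):
        b, m, t = base.get(key), mine.get(key), theirs.get(key)
        if m == t:            # both sides agree (or neither changed)
            merged[key] = m
        elif m == b:          # only theirs changed
            merged[key] = t
        elif t == b:          # only mine changed
            merged[key] = m
        else:                 # both changed differently: no general answer
            conflicts.append(key)
    return merged, conflicts

merged, conflicts = three_way_merge(
    {"a": 1, "b": 2}, {"a": 1, "b": 3}, {"a": 9, "b": 4})
print(merged, conflicts)  # {'a': 9} ['b']
```

The first three branches are the easy 90%; the last branch is the punt.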

Comment author: JenniferRM 02 August 2010 04:03:02AM 3 points [-]

David Brin's novel Kiln People has this "merging back" idea, with cheap copies, using clay for a lot of the material and running on a hydrogen based metabolism so they are very short lived (hours to weeks, depending on $$) and have to merge back relatively soon in order to keep continuity of consciousness through their long lived original. Lots of fascinating practical economic, ethical, social, military, and political details are explored while a noir detective story happens in the foreground.

I recommend it :-)

Comment author: KrisC 01 August 2010 09:44:26PM 2 points [-]

Sounds wonderful. Divide and conquer.

As this sounds like a computer assisted scenario, I would like the ability to append memories while sleeping. Wake up and have access to the memories of the copy. This would not necessarily include full proficiency as I suspect that muscle memory may not get copied.

Comment author: SilasBarta 01 August 2010 07:25:02PM *  6 points [-]

I thought I'd pose an informal poll, possibly to become a top-level, in preparation for my article about How to Explain.

The question: on all the topics you consider yourself an "expert" or "very knowledgeable about", do you believe you understand them at least at Level 2? That is, do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?

Or, to put it another way, do you think that, given enough time, but using only your present knowledge, you could teach a reasonably-intelligent layperson, one-on-one, to understand complex topics in your expertise, teaching them every intermediate topic necessary for grounding the hardest level?

Edit: Per DanArmak's query, anything you can re-derive or infer from your present knowledge counts as part of your present knowledge for purposes of answering this question.

I'll save my answer for later -- though I suspect many of you already know it!

Comment author: DanArmak 01 August 2010 07:48:51PM 3 points [-]

using only your present knowledge

This strikes me as an un-lifelike assumption. If I had to explain things in this way, I would expect to encounter some things that I don't explicitly know (and others that I knew and have forgotten), and to have to (re)derive them. But I expect that I would be able to rederive almost all of them.

Refining my own understanding is a natural part of building a complex explanation-story to tell to others, and will happen unless I've already built this precise story before and remember it.

Comment author: SilasBarta 01 August 2010 07:53:08PM 3 points [-]

For purposes of this question, things you can rederive from your present knowledge count as part of your present knowledge.

Comment author: KrisC 01 August 2010 07:56:40PM 0 points [-]

Can you teach a talented, untrained person a skill so that they exceed your own ability? Can you then identify why they are superior? If you have deep level knowledge of your area of expertise that you can impart to others, you ought to be able to evaluate and train a replacement based on "raw talent."

Considering that intellectual or artistic endeavors may have a variety of details hidden even from the expert, perhaps a clearer example may be found in sports coaches.

Comment author: pjeby 01 August 2010 08:16:15PM 3 points [-]

Perhaps a clearer example may be found in sports coaches.

The main reason that coaches are important (not just in sports) is because of blind spots - i.e., things that are outside of a person's direct perceptual awareness.

Think of the Dunning-Kruger effect: if you can't perceive it, you can't improve it.

(This is also why publications have editors; if a writer could perceive the errors in their work, they could fix them themselves.)

Comment author: fiddlemath 01 August 2010 07:57:27PM 2 points [-]

I think that the "teaching" benchmark you claim here is actually a bit weaker than a Level 2 understanding. To successfully teach a topic, you don't need to know lots of connections between your topic and everything else; you only need to know enough such connections to convey the idea. I really think this lies somewhere between Level 1 and Level 2.

I'll claim to have Level 2 understanding on the core topics of my graduate research, some mathematics, and some core algorithmic reasoning. I'm sure I don't have all of the connections between these things and the rest of my world model, but I do have many, and they pervade my understanding.

Comment author: SilasBarta 01 August 2010 08:05:03PM *  2 points [-]

I think that the "teaching" benchmark you claim here is actually a bit weaker than a Level 2 understanding. To successfully teach a topic, you don't need to know lots of connections between your topic and everything else; you only need to know enough such connections to convey the idea. I really think this lies somewhere between Level 1 and Level 2.

I agree in the sense that full completion of Level 2 isn't necessary to do what I've described, as that implies a very deeply-connected set of models, truly pervading everything you know about.

But at the same time, I don't think you appreciate some of the hurdles to the teaching task I described: remember, the only assumption is that the student has lay knowledge and is reasonably intelligent. Therefore, you do not get to assume that they find any particular chain of inference easy, or that they already know any particular domain above the lay level. This means you would have to be able to generate alternate inferential paths, and fall back to more basic levels "on the fly", which requires healthy progress into Level 2 in order to achieve -- enough that it's fair to say you "round to" Level 2.

I'll claim to have Level 2 understanding on the core topics of my graduate research, some mathematics, and some core algorithmic reasoning. I'm sure I don't have all of the connections between these things and the rest of my world model, but I do have many, and they pervade my understanding.

If so, I deeply respect you and find that you are the exception and not the rule. Do you find yourself critical of how people in the field (e.g. through textbooks) present it to newcomers (who have undergrad prerequisites), present it to laypeople, and use excessive or unintuitive jargon?

Comment author: fiddlemath 01 August 2010 08:20:15PM 3 points [-]

Therefore, you do not get to assume that they find any particular chain of inference easy, or that they already know any particular domain above the lay level. This means you would have to be able to generate alternate inferential paths, and fall back to more basic levels "on the fly", which requires healthy progress into Level 2 in order to achieve -- enough that it's fair to say you "round to" Level 2.

I agree that the teaching task does require a thick bundle of connections, and not just a single chain of inferences. So much so, actually, that I've found that teaching, and preparing to teach, is a pretty good way to learn new connections between my Level 1 knowledge and my world model. That this "rounds" to Level 2 depends, I suppose, on how intelligent you assume the student is.

If so, I deeply respect you and find that you are the exception and not the rule. Do you find yourself critical of how people in the field (e.g. through textbooks) present it to newcomers (who have undergrad prerequisites), present it to laypeople, and use excessive or unintuitive jargon?

Yes, constantly. Frequently, I'm frustrated by such presentations to the point of anger at the author's apparent disregard for the reader, even when I understand what they're saying.

Comment author: NancyLebovitz 01 August 2010 08:41:53PM 2 points [-]

I think I know a fair amount about doing calligraphy, but I'm dubious that someone could get a comparable level of knowledge without doing a good bit of calligraphy themselves.

If I were doing a serious job of teaching, I would be learning more about how to teach as I was doing it.

I consider myself to be a good but not expert explainer.

Possibly of interest: The 10-Minute Rejuvenation Plan: T5T: The Revolutionary Exercise Program That Restores Your Body and Mind: a book about an exercise system which involves 5 yoga moves. It's by a woman who'd taught 700 people how to do the system, and shows an extensive knowledge of the possible mistakes students can make and adaptations needed to make the moves feasible for a wide variety of people.

My point is that explanation isn't an abstract perfectible process existing simply in the mind of a teacher.

Comment author: KrisC 01 August 2010 09:35:47PM 3 points [-]

But in some limited areas explanation is completely adequate.

I taught a co-worker how to do sudoku puzzles. After teaching him the human-accessible algorithms and allowing time for practice, I was still consistently beating his time. I knew why, and he didn't. After I explained the difference in mental state I was using, he began beating my time on a regular basis. {Instead of checking the list of 1-9 for each box or line, allow your brain to subconsciously spot the missing number and then verify its absence.} He is more motivated and has more focus, while I do puzzles to kill time when waiting.
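The two checking strategies being contrasted can be sketched for a row with one blank, with the digit-sum shortcut standing in for the "spot the missing number" move:

```python
def missing_by_scan(row):
    # the slow way: test each candidate 1-9 for membership in the row
    for d in range(1, 10):
        if d not in row:
            return d

def missing_by_sum(row):
    # the direct way: 1+2+...+9 == 45, so the gap is 45 minus the rest
    return 45 - sum(row)

row = [5, 3, 4, 6, 7, 8, 9, 1]  # 2 is missing
print(missing_by_scan(row), missing_by_sum(row))
```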

In another job where I believe I had a thorough understanding of the subject, I was never able to teach any of my (~20) trainees to produce vector graphic maps with the speed and accuracy I obtained because I was unable to impart a mathematical intuition for the approximation of curves. I let them go home with full pay when they completed their work, so they definitely had motivation. But they also had editors who were highly detail oriented.

I mean to suggest that there is a continuum of subjective ability comparing different skills. Sudoku is highly procedural: once familiar, all that is required is concentration. Yoga, in the sense mentioned above, is also procedural, prescriptive; the joints allow a limited number of degrees of freedom. Calligraphy strives for an ideal, but depending on the tradition, there is a degree of interpretation allowed for aesthetic considerations. Mapping, particularly in vector graphics, has many ways to be adequate and no way to be perfect.

The number of acceptable outcomes and the degree of variation in useful paths determine the teach-ability of a skillset. The procedural skills can be taught more easily than the subjective, and practice is useful to accomplish mastery of procedural skills. Deeper understanding of a field allows more of the skill's domain to be expressed procedurally rather than subjectively.

Comment author: NancyLebovitz 01 August 2010 09:44:59PM 0 points [-]

I'm in general agreement, but I think you're underestimating yoga-- a big piece of it is improving access to your body's ability to self-organize.

I like "many ways to be adequate and no way to be perfect". I think most of life is like that, though I'll add "many ways to be excellent".

Comment author: KrisC 01 August 2010 09:50:19PM 0 points [-]

No slight to yoga intended. I only wanted to address the starting point of yoga. I know it is a quite comprehensive field.

Comment author: JRMayne 01 August 2010 11:07:11PM 1 point [-]

Criminal Law: Yes to Level 2. Yes to teaching a layperson. It would take a while, for sure, but it's doable. Some of the work requires an understanding of a different lifestyle; if you can't see the potential issues with prosecuting a robbery by a prostitute and her armed male friend, or you can't predict that a domestic violence victim will have a non-credible recantation, you'll need some other education.

I've done a lot of instruction in this field. It is common for instruction not to take until there's other experience in the field which helps things join up.

Bridge: Yes to Level 2. Possibly to teaching a layperson. The ability to play bridge well is correlated heavily to intelligence, but it also correlates to a certain zeal for winning. I have taught one person to play very well indeed, but that may not be replicable, and took years. (On an aside, I am very likely the world's foremost expert on online bridge cheating; teaching cheating prevention would require teaching bridge first.)

Teaching requires more than reasonable intelligence on the part of the teachee. Some people who are very intelligent are ineducable. (Many of these are violators of my 40% rule: You are allowed to think you are 40% smarter/faster/stronger/better than you are. After that, it's obnoxious.) Some people are not interested in learning a given subject. Some people will not overcome preset biases. Some people have high aptitudes in some areas and little aptitude in others (though intelligence strongly tends to spill over.)

Anyway, I'm interested in the article. My penultimate effort to explain something to many people - Bayes' Theorem to lawyers - was a moderate failure; my last effort to explain something less mathy to a crowd was a substantial success. (My last experience in explaining something, with assistance, to 12 people was a complete failure.)

--JRM

Comment author: DSimon 01 August 2010 11:16:54PM 2 points [-]

I'm curious, why did you choose 40% for your "40% rule"?

Comment author: JRMayne 02 August 2010 04:09:23AM 2 points [-]

It's non-arbitrary, but neither is it precise. 100% is clearly too high, and 10% is clearly too low.

And since I started calling it The 40% Rule fifteen years ago or thereabout, a number of my friends and acquaintances have embraced the rule in this incarnation. Obviously, some things are unquantifiable and the specific number has rather limited application. But people like it at this number. That counts for something - and it gets the message across in a way that other formulations don't.

Some are nonplussed by the rule, but the vigor of support by some supporters gives me some thought that I picked a number people like. Since I never tried another number, I could be wrong - but I don't think I am.

--JRM

Comment author: SilasBarta 02 August 2010 02:28:07PM 1 point [-]

Some of the work requires an understanding of a different lifestyle; if you can't see the potential issues with prosecuting a robbery by a prostitute and her armed male friend, or you can't predict that a domestic violence victim will have a non-credible recantation, you'll need some other education.

  • "The people who buy the services of a prostitute generally don't want to go on record saying so, which they would have to do at some point to prosecute such a robbery. This is either because they're married, or the shame associated with using one."

  • "Victims of domestic violence have a lot invested in the relationship, and, no matter how much they feel hurt by the abuse, they will not want to tear apart the family and cripple their spouse with a felony conviction. This inner conflict will be present when the victim tries to recant their testimony."

Did that really require passing the learner off for some other education? Or did I get the explanation wrong?

Anyway, I'm interested in the article. My penultimate effort to explain something to many people - Bayes' Theorem to lawyers - was a moderate failure; my last effort to explain something less mathy to a crowd was a substantial success. (My last experience in explaining something, with assistance, to 12 people was a complete failure.)

I actually tried teaching information theory to my mom a week ago, which involved starting with Bayes' Theorem (my preferred phrasing [1]). She's a professional engineer, and found it very interesting (to the point where she kept prodding me for the next lesson), saying that it made much more sense of statistics. In about 1.5-2 hours total, I covered the theorem, its application to a car alarm situation, aggregating independent pieces of evidence, the use of log-odds, and some material on Bayes nets and using dependent pieces of evidence.

[1] O(H|E) = O(H) * L(E|H) = O(H) * P(E|H) / P(E|~H) = "On observing evidence, amplify the odds you assign to a belief by the probability of seeing the evidence if the belief were true, relative to if it were false."
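This odds-form update is easy to sketch in a few lines of code (the car-alarm numbers below are made up purely for illustration):

```python
import math

def update_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Amplify the odds on H by the likelihood ratio P(E|H) / P(E|~H)."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# Hypothetical numbers: prior odds of a break-in are 1:1000; the alarm
# sounds during 95% of break-ins but also 1% of the time without one.
prior = 1 / 1000
posterior = update_odds(prior, 0.95, 0.01)       # odds of 95:1000

# Independent pieces of evidence simply multiply the odds...
posterior2 = update_odds(posterior, 0.50, 0.05)  # second clue, ratio 10

# ...so log-odds turn the repeated updates into additions.
log_odds = math.log(prior) + math.log(0.95 / 0.01) + math.log(0.50 / 0.05)
```

This is only the two-hypothesis (H versus ~H) form that the phrasing in [1] describes; with more hypotheses the normalization no longer cancels out.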

Comment author: NancyLebovitz 02 August 2010 03:38:01PM 2 points [-]

Expansion on the explanation about domestic violence victims-- the victim may also be afraid that the government will not protect them from the abuser, and the abuser will be angrier because of the attempt at prosecution.

Comment author: DSimon 01 August 2010 11:28:19PM *  0 points [-]

Computer programming: I'm not sure if I am at Level 2 or not on this.

In favor of being at Level 2: I regularly think about non-computer-related topics with a CS-like approach (e.g. using information-theory ideas when playing the inference game Zendo).

Also, I strongly associate my knowledge of "folk psychology" and "folk science" with computer science ideas, and these insights work in both directions. For example, the "learned helplessness" phenomenon, where inexperienced users become so uncomfortable with a system that they prefer to cling to their inexperienced status rather than risk failure in an attempt to understand the system better, appears in many areas of life having nothing directly to do with computers.

Evidence against being at Level 2: I do not have the necessary computer engineering knowledge to connect my understanding of computer programming to my understanding of physics. And, although I have not tried this very often, my experiments in attempting to teach computer programming to laypeople have been middling at best.

My assessment at this point is that I am probably near to Level 2 in computer programming, but not quite there yet.

Comment author: zero_call 02 August 2010 12:48:50AM *  2 points [-]

I will reply to this in the sense of

"do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?",

since I am not so familiar with the formalism of a "Level 2" understanding.

My uninteresting, simple answer is: yes.

My philosophical answer is that I find the entire question to be very interesting and strange. That is, the relationship between teaching and understanding is quite strange, IMO. There are many people who are poor teachers but who excel in their discipline. This seems like a contradiction, because high-level teaching skill seems to be a sufficient, and possibly necessary, condition for masterful understanding.

Personally, I resolve this contradiction in the following way. My own limitations force me to learn a subject by progressing at it in very simplistic strokes. By the time I have reached mastery, I feel very capable of teaching it to others, since I have been forced to understand it myself in the simplest way possible.

Other people, who are possibly quite brilliant, are able to master some subjects without having to transmute the information into a simpler level. Consequently, they are unable to make the sort of connections that you describe as being necessary for teaching.

Personally I feel that the latter category of people must be missing something, but I am unable to make a convincing argument for this point.

Comment author: SilasBarta 02 August 2010 01:15:29AM *  3 points [-]

A lot of the questions you pose, including the definition of the Level 2 formalism, are addressed in the article I linked (and wrote).

I classify those who can do something well but not explain or understand the connections from the inputs and outputs to the rest of the world, to be at a Level 1 understanding. It's certainly an accomplishment, but I agree with you that it's missing something: the ability to recognize where it fits in with the rest of reality (Level 2) and the command of a reliable truth-detecting procedure that can "repair" gaps in knowledge as they arise (Level 3).

"Level 1 savants" are certainly doing something very well, but that something is not a deep understanding. Rather, they are in the position of a computer that can transform inputs into the right outputs, but do nothing more with them. Or a cat, which can fall from great heights without injury, but not know why its method works.

(Yes, this comment seems a bit internally repetitive.)

Comment author: zero_call 02 August 2010 01:36:50AM *  1 point [-]

Ah, OK, I read your article. I think that's an admirable task, to try to classify or identify the levels of understanding. However, I'm not sure I am convinced by your categorization. It seems to me that many of these "Level 1 savants", as you call them, are quite capable of fitting their understanding in with the rest of reality. Actually, it seems like the claim of "Level 1 understanding" basically trivializes that understanding. Yet many of these people who are bad teachers have a very nontrivial understanding -- else I don't think this would be such a common phenomenon, for example, in academia. I would argue that these people have some further complications or issues which are not recognized in the 1-2-3 hierarchy.

That being said, you have to start somewhere, and the 0-1-2-3 hierarchy looks like a good place to start. I'd definitely be interested in hearing more about this analysis.

Comment author: SilasBarta 02 August 2010 03:09:06AM 2 points [-]

Thanks for reading it and giving me feedback. I'm interested in your claim:

It seems to me that many of these "Level 1 savants" as you call them are quite capable of fitting their understanding with the rest of reality.

Well, they can fit it in the sense that they (over a typical problem set) can match inputs with (what reality deems) the right outputs. But, as I've defined the level, they don't know how those inputs and outputs relate to more distantly-connected aspects of reality.

Yet many of these people who are bad teachers have a very nontrivial understanding -- else I don't think this would be such a common phenomena, for example, in academia.

I had a discussion with others about this point recently. My take is basically: if their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out? Why can't they switch to the infinite other paths that a Level 2 understanding enables them to see? If they can't, that would suggest a lack of depth to their understanding.

And regarding the archetypal "deep understanding, poor teacher" you have in mind, do you envision that they could, say, trace out all the assumptions that could account for an anomalous result, starting with the most tenuous, and continuing outside their subfield? If not, I would call that falling short of Level 2.

Comment author: zero_call 02 August 2010 05:07:43AM *  5 points [-]

My take is basically: if their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out? Why can't they switch to the infinite other paths that a Level 2 understanding enables them to see? If they can't, that would suggest a lack of depth to their understanding.

I would LOVE to agree with this statement, as it justifies my criticism of poor teachers who IMO are (not usually maliciously) putting their students through hell. However, I don't think it's obvious, or I think maybe you just have to take it as an axiom of your system. It seems there is some notion of individualism or personal difference which is missing from the system. If someone is just terrible at learning, can you really expect to succeed in explaining, for example? Realistically I think it's probably impossible to classify the massive concept of understanding by merely three levels, and these problems are just a symptom of that fact.

As another example, in order to understand something, it's clearly necessary to be able to explain it to yourself. In your system, you are additionally requiring that understanding means being able to explain things to other people. In order to explain things to others, you have to understand them, as has been discussed. Therefore you have to be able to explain other people to yourself. Why should an explanation of other individuals' behavior be necessary for understanding some random area of expertise, say, mathematics? It's not clear to me.

And regarding the archetypal "deep understanding, poor teacher" you have in mind, do you envision that they could, say, trace out all the assumptions that could account for an anomalous result, starting with the most tenuous, and continuing outside their subfield?

It certainly seems like someone with a deep understanding of their subject should be able to identify the validity or uncertainty in their assumptions about the subject. If they are a poor teacher, I think I would still believe this to be true.

Comment author: RobinZ 02 August 2010 01:43:53AM 1 point [-]

I have some trouble answering your question, chiefly because my definition of "expert" is approximately synonymous with your definition of "Level 2".

Or, to put it another way, do you think that, given enough time, but using only your present knowledge, you could teach a reasonably-intelligent layperson, one-on-one, to understand complex topics in your expertise, teaching them every intermediate topic necessary for grounding the hardest level?

"Enough time" would be quite a long period of time. One problem is that there are a lot of textbook results that I would have to use in intermediate steps that would take me a long time to derive. Another is that there are a lot of experimental parameters that I haven't memorized and would have to look up. But I think I could teach arithmetic, algebra, geometry, calculus, differential equations, and Newtonian physics enough that I could teach them proper engineering analysis.

Comment author: JanetK 02 August 2010 08:08:06AM 2 points [-]

I think I have a Level 2 understanding of many areas of biology, but of course not all of it. It is too large a field. But there are gray areas around my high points of understanding where I am not sure how deep my understanding would go unless it was put to the test. And around the gray areas surrounding the Level 2 areas there is a sea of superficial understanding. I have some small areas of computer science at Level 2, but they are fewer and smaller; ditto chemistry and geology.

I think your question overlooks the nature of teaching skills. I am pretty good at teaching (verbally, and one/few-to-one) and did it often for years. There is a real knack in finding the right place to start and the right analogies to use with a particular person. Someone could have more understanding than me and not be able to transfer that understanding to someone else. And others could have less understanding and transfer it better.

Finally, I like your use of the word 'understanding' rather than 'knowledge'. It implies the connectedness with other areas required to relate to lay people.

Comment author: Oscar_Cunningham 02 August 2010 03:10:47PM *  5 points [-]

I have a (I suspect unusual) tendency to look at basic concepts and try to see them in as many ways as possible. For example, here are seven equations, all of which could be referred to as Bayes' Theorem:

However, each one is different, and forces a different intuitive understanding of Bayes' Theorem. The fourth one down is my favourite, as it makes obvious that the update depends only on the ratio of likelihoods. It also gives us our motivation for taking odds, since this clears up the 1/(1+x)ness of the equation.

Because of this way of understanding things, I find explanations easy, because if one method isn't working, another one will.

ETA: I'd love to see more versions of Bayes' Theorem, if anyone has any more to post.
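For anyone collecting versions: here are a few textbook identities that commonly go under the name (standard forms, not necessarily the seven listed above), from the plain conditional-probability statement down to the log-odds form:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E \mid H)\, P(H) + P(E \mid \neg H)\, P(\neg H)}

\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}

\log \frac{P(H \mid E)}{P(\neg H \mid E)} = \log \frac{P(H)}{P(\neg H)} + \log \frac{P(E \mid H)}{P(E \mid \neg H)}
```

The second form makes the 1/(1+x) behavior explicit, and the last two are the odds and log-odds versions in which the likelihood ratio does all the work.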

Comment author: SilasBarta 02 August 2010 03:19:05PM *  0 points [-]

Very well said, and doubles as a reply to the last part of my comment here. (When I read your comment in my inbox, I thought it was actually a reply to that one! Needless to say, my favorite versions of the theorem are the last two you listed.)

Comment author: thomblake 02 August 2010 06:49:57PM 0 points [-]

Hmm... I'm not sure if I think of myself as an expert at anything, other than when people ask. But I'm pretty sure I have about the best understanding of logic I can hope to have, and could explain virtually all of it to an attentive small child given sufficient time.

And I might be an expert at some sort of computer programming, though for any given part of it I can think of people who are much better; at any rate, I am also confident I could teach that to anyone, or at least anyone who passes a basic test.

Comment author: XFrequentist 01 August 2010 07:46:57PM *  21 points [-]

I'm intrigued by the idea of trying to start something like a PUA community that is explicitly NOT focussed on securing romantic partners, but rather the deliberate practice of general social skills.

It seems like there's a fair bit of real knowledge in the PUA world, that some of it is quite a good example of applied rationality, and that much of it could be extremely useful for purposes unrelated to mating.

I'm wondering:

  • if this is an interesting idea to LWers?
  • if this is the right venue to talk about it?
  • does something similar already exist?

I'm aware that there was some previous conversation around similar topics and their appropriateness to LW, but if there was final consensus I missed it. Please let me know if these matters have been deemed inappropriate.

Comment author: cousin_it 01 August 2010 08:02:44PM *  4 points [-]

Toastmasters?

General social skills are needed in business, a lot of places teach them and they seem to be quite successful.

Comment author: SilasBarta 01 August 2010 08:08:59PM 5 points [-]

From my limited experience with Toastmasters, it's very PC and targeted at people of median intelligence -- not the thing people here would be looking for. "PUA"-like implies XFrequentist is considering something that is willing to teach the harsh, condemned truths.

Comment author: XFrequentist 01 August 2010 08:30:25PM *  5 points [-]

I went to a Toastmasters session, and was... underwhelmed. Even for public speaking skills, the program seemed kind of trite. It was more geared toward learning the formalities of meetings. You'd probably be a better committee chair after following their program, but I'm not sure you could give a great TED talk or wow potential investors.

Carnegie's program seems closer to what I had in mind, but I want to replicate both the community aspect and the focus on "field" practice of the PUAs, which I suspect is a big part of what makes them so formidable.

Comment author: NancyLebovitz 01 August 2010 08:49:11PM 1 point [-]

I've heard smart people speak well of Toastmasters. It may be a matter of local variation, or it may be that Toastmasters is very useful for getting past fear of public speaking and acquiring adequate skills.

Comment author: XFrequentist 01 August 2010 09:11:17PM *  2 points [-]

My impression could easily be off; I only went to one open house.

It wasn't all negative. They seemed to have a logical progression of speech complexity, and quite a standardized process for giving feedback. Some of the speakers were excellent. It was fully bilingual (English/French), which was nice.

I don't think it's what I'm looking for, but it's probably okay for some other goals.

Comment author: pjeby 02 August 2010 03:43:22AM 1 point [-]

I've heard smart people speak well of Toastmasters.

I've mostly heard them damn it with faint praise, as being great for polishing presentation skills, but not particularly useful for anything else.

Interestingly enough, of people I know who are actually professional speakers (in the sense of being paid to talk, either at their own events or other peoples'), exactly none of them recommend it. (Even amongst ones who do not sell any sort of speaker training of their own.)

OTOH, I have heard a couple of shout-outs for the Carnegie speaking course, but again, this is all just in the context of speaking... which has little relationship to general social skills AFAICT.

Comment author: XFrequentist 02 August 2010 02:00:39PM *  1 point [-]

Interesting, that jibes* pretty well with my impressions of Toastmasters.

There are other Carnegie courses than the speaking one. This is the one I was thinking of.

*See comment below for the distinction between "jives" and "jibes". It ain't cool beein' no jive turkey!

Comment author: NancyLebovitz 02 August 2010 02:41:26PM *  3 points [-]

Nitpick: "jibes" means "is consistent with".

"Jives" means "is talking nonsense" or (archaic) "dances".

{Tries looking it up} Wikipedia says "jives" can be a term for African American Vernacular English. The Urban Dictionary gives it a bunch of definitions, including both of mine, "jibe", and forms of African American speech which include a lot of slang, but not any sort of African American speech in general.

On the other hand, the language may have moved on-- I keep seeing that mistake (the Urban Dictionary implies it isn't a mistake), and maybe I should give up.

I still retain a fondness for people who get it right.

Comment author: JanetK 02 August 2010 07:28:20AM 1 point [-]

I belonged to TM for many years and I would still if there was a club near me. I found it great for many reasons. But I have to say that you get what you put in. And you get what you want to get. If you want friends and social graces - OK get them. If you want to lose fear of speaking - get that. Ignore what you don't want and take what you do.

Comment author: D_Alex 02 August 2010 01:33:54AM 2 points [-]

The clubs vary in their standard. I recommend you try a few in your area (big cities should have a bunch). For 2 years I used to commute 1 hour each way to attend Victoria Quay Toastmasters in Fremantle, it was that good. It was the 3rd club I tried after moving.

Comment author: XFrequentist 01 August 2010 08:32:13PM 1 point [-]

a lot of places teach them

I'd be interested in specifics...

Comment author: katydee 01 August 2010 08:32:51PM 1 point [-]

Extremely, yes, not to my knowledge.

Comment author: ianshakil 02 August 2010 05:33:44AM 1 point [-]

Would such "practice" require a physical venue? -- or would an online setting -- maybe even Skype -- be sufficient?

Comment author: XFrequentist 02 August 2010 01:49:14PM 0 points [-]

That's a good question. I don't know, but I suspect a purely online setting would be adequate for beginners yet insufficient for mastery.

What do you think?

Comment author: ianshakil 02 August 2010 03:20:15PM 0 points [-]

Generally, I agree. There's a time and a place for both online and offline venues.

Ideally, you'd want a very large number of participants such that, during sessions, most of your peers are new and the situation is somewhat anonymous/random. If your sessions are with the same old people, these people will become well known -- perhaps friends -- and the social simulation won't be very meaningful. Who knows... maybe there's a way to piggyback on the Chatroulette concept?!

Comment author: marc 02 August 2010 03:20:34PM 0 points [-]

I don't think you'd have much success mastering non-verbal communication through Skype.

Comment author: marc 02 August 2010 03:22:29PM 0 points [-]

I think you're probably correct in your presumptions. I find it an interesting idea and would certainly follow any further discussion.

Comment author: Violet 03 August 2010 06:34:15AM *  5 points [-]

If you want non-PC approaches, there are two communities you could look into: salespeople and con artists. The second one actually has most of the how-to-hack-people's-minds material. If you want a kinder version, look for it under the title "social engineering".

Comment author: Johnicholas 01 August 2010 07:52:17PM 12 points [-]

Cryonics Lottery.

Would it be easier to sign up for cryonics if there was a lottery system? A winner of the lottery could say "Well, I'm not a die-hard cryo-head, but I thought it was interesting so I bought a ticket (which was only $X) and I happened to win, and it's pretty valuable, so I might as well use it."

It's a sort of "plausible deniability" that might reduce the social barriers to cryo. The lottery structure might also be able to reduce the conscientiousness barriers - once you've won, the lottery administrators (possibly volunteers, possibly funded by a fraction of the lottery) walk you through a "greased path".

Comment author: gwern 02 August 2010 04:31:39AM 4 points [-]

I doubt it. Signing up for a lottery for cryonics is still suspicious. There is only one payoff, and it is the suspicious thing itself. No one objects to the end of lotteries, because we all like money; what is objected to is the lottery as an efficient means of obtaining money (or entertainment).

Suppose that the object were something you and I regard with the same revulsion that many feel toward cryonics. Child molestation, perhaps. Would you really regard someone buying a ticket as not being quite evil, as not condoning and supporting the eventual rape?

Comment author: AlexM 02 August 2010 10:23:00AM 5 points [-]

Who regards cryonics as evil like child molestation? The general public sees cryonics as fraud - something like buying real estate on the moon or waiting for the mothership - and someone paying for it as a gullible fool.

For example, look at the discussion when Britney Spears wanted to be frozen: http://www.freerepublic.com/focus/f-chat/2520762/posts

Lots of derision, no hatred.

Comment author: gwern 02 August 2010 11:16:36AM 0 points [-]

Does the fact that my specific example may not be perfect refute my point that mere indirection and chance do not eliminate all criticism, and that this can be understood by merely introspecting on one's intuitions?

Comment author: NihilCredo 02 August 2010 07:00:41PM 2 points [-]

Bad example. People want to make fun of celebrities (especially a community as caustic and "anti-elitist" as the Freepers). She could have announced that she was enrolling in college, or something else similarly common-sensible, and you would still have got a threadful of nothing but cheap jokes.

A discussion about "My neighbour / brother-in-law / old friend from high school told me he has decided to get frozen" would be more enlightening.

Comment author: Johnicholas 02 August 2010 11:01:48AM 0 points [-]

Rather than using an undiluted negative as an example, suppose that there was something more arguable, that might have some positive aspects - sex segregation of schools, for example.

Assuming that my overall judgement of sex segregation is negative, if someone pursued sex segregation fiercely and dedicatedly, then my overall negative valuation of their goal would color my judgement of them. If they can plausibly claim to have supported it momentarily on a whim, while thinking about the positive aspects, then there is some insulation between my judgement of the goal and my judgement of the person.

Comment author: NihilCredo 02 August 2010 07:14:24PM 7 points [-]

On a completely serious, if not totally related, note: it would be a lot easier to convince people to sign up for cryonics if the Cryonics Institute's and/or KrioRus's websites looked more professional.

Comment author: Alicorn 02 August 2010 08:43:19PM 6 points [-]

I'm not sure if it would help get uninterested people interested; but I think it would help get interested people signed up if there were a really clear set of individually actionable instructions - perhaps a flowchart so they can depend on individual circumstances - that were all found in one place.

Comment author: katydee 02 August 2010 09:01:18PM 2 points [-]

And Rudi Hoffman's page.

Comment author: andreas 01 August 2010 10:35:55PM 4 points [-]
Comment author: ciphergoth 02 August 2010 03:16:34PM 0 points [-]

Thanks to the two people who pointed this out to me in DM. I've commented, though Cyan has already linked to the essays on my blog I'd link to first.

Comment author: ciphergoth 03 August 2010 06:09:14AM 0 points [-]
Comment author: zero_call 02 August 2010 01:03:32AM *  1 point [-]

Suppose that inventing a recursively self improving AI is tantamount to solving a grand mathematical problem, similar in difficulty to the Riemann hypothesis, etc. Let's call it the RSI theorem.

This theorem would then constitute the primary obstacle in the development of a "true" strong AI. Other AI systems could be developed, for example, by simulating a human brain at 10,000x speed, but these sorts of systems would not capture the spirit (or capability) of a truly recursively self-improving super intelligence.

Do you disagree? Or, how likely is this scenario, and what are the consequences? How hard would the "RSI theorem" be?

Comment author: JoshuaZ 02 August 2010 01:11:27AM *  2 points [-]

This seems like a bad analogy. If you could simulate a group of smart humans going at 10,000 times normal speed, say copies of Steven Chu or of Terry Tao, I'd expect that they'd be able to figure out how to self-improve pretty quickly. In about 2 months they would have had about 5000 years' worth of time to think about things. The human brain isn't a great structure for recursive self-improvement (while some aspects are highly modular, other aspects are very much not so), but given enough time one could work on improving that architecture.

Comment author: Tiiba 02 August 2010 05:44:06AM 1 point [-]

I heard in a few places that a real neuron is nothing like a threshold unit, but more like a complete miniature computer. None of those places expanded on that, though. Could you?

Comment author: JanetK 02 August 2010 08:53:01AM 7 points [-]

I am not sure that I understand the exact difference between a threshold unit and a miniature computer that you want to shine a light on. Below are some aspects that may be of use to you:

  1. The whole surface of a neuron is not at the same potential (relative to the firing potential). Synapses are many (there can be thousands), along branching dendrites, of different types and strengths, so that the patterns of input that will cause firing from the dendrite end of the neuron are varied and numerous. This architecture is plastic, with synapses being strengthened and weakened by use and by events far away from the neuron. At the axon root, synapses are fewer and their effect more individual. If the neuron fires, it delivers its signal to many synapses on many other neurons. Again, this is plastic.

  2. The potential of the neuron surface is affected by many other things besides other neurons through the synapses. It is affected by electrical and magnetic fields generated by the whole brain (the famous waves etc.). It is affected by the chemical environment such as the concentration of calcium ions. These factors are very changeable depending on the activity of the whole brain, hormone levels, general metabolism etc.

  3. It is fairly easy to draw possible configurations of 1 or 2 neurons that mimic all the logic gates, discriminators (like the long-tailed pairs in old-fashioned electronics), delay lines, simple memory, and so on. But it is unlikely that this is the route to understanding how neurons act. Networks, parallel feedback loops, glial communities, modules, the architecture of different parts of the brain, and the like are probably a better level of investigation.
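For contrast, the classical "threshold unit" that real neurons are being compared against is simple enough to sketch in a few lines (a textbook idealization, not a biological model):

```python
def threshold_unit(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With suitable weights a single unit acts as a logic gate, e.g. AND:
and_outputs = [threshold_unit((a, b), (1.0, 1.0), 1.5)
               for a in (0, 1) for b in (0, 1)]
```

Everything a unit like this "knows" is in a handful of fixed numbers, which is what the points above argue makes it a poor stand-in for a real neuron.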

I hope this is of some help.

Comment author: Yoreth 02 August 2010 06:33:00AM 4 points [-]

Suppose you know from good sources that there is going to be a huge catastrophe in the very near future, which will result in the near-extermination of humanity (but the natural environment will recover more easily). You and a small group of ordinary men and women will have to restart from scratch.

You have a limited time to compile a compendium of knowledge to preserve for the new era. What is the most important knowledge to preserve?

I am humbled by how poorly my own personal knowledge would fare.

Comment author: [deleted] 02 August 2010 06:40:46AM 2 points [-]

A dead tree copy of Wikipedia. A history book about ancient handmade tools and techniques from prehistory to now. A bunch of K-12 school books about math and science. Also as many various undergraduate and postgraduate level textbooks as possible.

Comment author: sketerpot 02 August 2010 07:18:19AM 2 points [-]

A dead-tree copy of Wikipedia has been estimated at around 1,420 volumes. Here's an illustration, with a human for scale. It's big. You might as well go for broke and hole up in a library when the Big Catastrophe happens.

Comment author: mstevens 02 August 2010 11:03:25AM 2 points [-]

One of these http://thewikireader.com/ with rechargeable batteries and a solar charger could work.

Comment author: NihilCredo 02 August 2010 06:52:01PM 3 points [-]

Until some critical part oxidizes or otherwise breaks - which will likely happen long before the new society is able to build a replacement.

Comment author: JanetK 02 August 2010 11:30:32AM 4 points [-]

Wikipedia is a great answer because we know that most but not all of the information is good. Some is nonsense. This will force the future generations to question and maybe develop their own 'science' rather than worship the great authority of 'the old and holy books'.

Comment author: JoshuaZ 02 August 2010 12:56:00PM 2 points [-]

The knowledge about science issues generally tracks our current understanding very well. And historical knowledge that is wrong will be extremely difficult for people to check after an apocalyptic event, and even then it is largely correct. In fact, if Wikipedia's science content really were bad enough to matter, it would be an awful thing to bring into this situation, since having correct knowledge or not could alter whether or not humanity survives at all.

Comment author: Oscar_Cunningham 02 August 2010 11:43:05AM 3 points [-]

Wikipedia would also contain a lot of info about current people and places, which would no longer be remotely useful.

Comment author: NancyLebovitz 02 August 2010 03:21:13PM 1 point [-]

And a lot of popular culture which would no longer be available.

Comment author: mstevens 02 August 2010 11:07:44AM 1 point [-]

I'm tempted to say "a university library" as the short answer. More specifically, whatever I could get from the science and engineering departments. Pick the classic works in each field if you have someone to filter them. Look for stuff that's more universal than specific to the way we've done things - in computing terms, you want The Art of Computer Programming and not The C Programming Language.

In the short term, anything you can find on farming and primitive medicine - all the stuff the better class of survivalist would have on their bookshelf.

Comment author: RobinZ 02 August 2010 11:47:30AM *  3 points [-]

In rough order of addition to the corpus of knowledge:

  1. The scientific method.

  2. Basic survival skills (e.g. navigation).

  3. Edit: Basic agriculture (e.g. animal husbandry, crop cultivation).

  4. Calculus.

  5. Classical mechanics.

  6. Basic chemistry.

  7. Basic medicine.

  8. Basic political science.

Comment author: NancyLebovitz 02 August 2010 03:20:25PM 6 points [-]

Basic sanitation!

Comment author: RobinZ 02 August 2010 03:36:05PM 1 point [-]

Yes! Insert sanitation between 3 and 4, and insert construction (e.g. whittling, carpentry, metal casting) between sanitation and 3.

Comment author: JoshuaZ 02 August 2010 12:52:46PM *  8 points [-]

I suspect that people are overestimating in their replies how much could be done with Wikipedia. People in general underestimate a) how much technology requires bootstrapping (metallurgy is a great example of this) and b) how much many technologies, even primitive ones, require large populations so that specialization, locational advantages and comparative advantage can kick in. (People even in not very technologically advanced cultures have had tech levels regress when they settled large islands or when their locations got cut off from the mainland. Tasmania is the classic example: the inability to trade with the mainland caused large drops in tech level.) So while Wikipedia makes sense, it would also be helpful to have a lot of details on do-it-yourself projects that could use pre-existing remnants of existing technology. There are a lot of websites and books devoted to that topic, so that shouldn't be too hard.

If we are reducing to a small population, we may need also to focus on getting through the first one or two generations with an intact population. That means that a handful of practical books on field surgery, midwifing, and similar basic medical issues may become very necessary.

Also, when you specify "ordinary men and women" do you mean people who all speak the same language? And do you mean by "ordinary" roughly developed-world countries? That's what many people seem to mean when questions like this are posed. These details could alter things considerably. For example, if it really is a random sample, then inter-language dictionaries will be very important. But if the sample includes some people from the developing world, they are more likely to have some of the knowledge base for working in a less technologically advanced situation that people in the developed world will lack (though even this may only be true to a very limited extent, because the tech level of the developing world is in many respects very high compared to the tech level of humans for most of human history. Many countries described as developing-world are in better shape than, for example, much of Europe in the Middle Ages).

Comment author: arundelo 02 August 2010 02:55:02PM 3 points [-]

how much technology requires bootstrapping (metallurgy is a great example of this)

I would love to see a reality TV show about a metallurgy expert making a knife or other metal tool from scratch. The expert would be provided food and shelter but would have no equipment or materials for making metal, and so would have to find and dig up the ore themselves, build their own oven, and whatever else you would have to do to make metal if you were transported to the stone age.

Comment author: RobinZ 02 August 2010 05:42:12PM 2 points [-]

One problem you would face with such a show is if the easily-available ore is gone.

Comment author: JoshuaZ 03 August 2010 12:51:33AM *  1 point [-]

Yes, this is in fact connected to a general problem that Nick Bostrom has pointed out: each time you try to go back from stone-age tech to modern tech, you use up resources that you won't have the next time. However, for purposes of actually getting back to high levels of technology, rather than having a fun reality show, we've got a few advantages. One can use the remaining metal in all the left-over objects from modern civilization (cars being one common easy source of a number of metals). And some metals are actually very difficult to extract from ore (aluminum is the primary example: until the technologies for extraction were developed, it was expensive and had almost no uses), whereas the ruins of civilization will hold those metals in near-pure forms, if one knows where to look.

Comment author: xamdam 02 August 2010 04:00:55PM 0 points [-]

Depends what level you want to achieve post-catastrophe; some, if not most, of your resources and knowledge will be needed to deal with specific effects. In short, your suitcase will be full of survivalist and medical material.

In a thought experiment where you freeze yourself until the ecosystem is restored, you can probably use an algorithm of taking the best library materials from each century, corrected for errors, to achieve the level of that century.

Both Robinson Crusoe and Jules Verne's "Mysterious Island" explore similar bootstrapping scenarios; interestingly, both use some "outside injections".

Comment author: ianshakil 02 August 2010 05:55:34PM *  0 points [-]

I only need one item:

The Holy Bible

(kidding)

Comment author: Eneasz 02 August 2010 06:05:51PM 1 point [-]

How to start a fire only using sticks.

How to make a cutting blade from rocks.

How to create a bow, and make arrows.

Basic sanitation.

Comment author: NancyLebovitz 02 August 2010 06:09:42PM 2 points [-]

That seems like advice for living in the woods-- not a bad idea, but it probably needs to be adjusted for different environments (find water in dry land, staying warm in extreme cold, etc.) and especially for scavenging from ruins.

Any thoughts about people skills you'd need after the big disaster?

Comment author: Eneasz 02 August 2010 06:45:58PM 0 points [-]

I thought about those a bit, but came to a few conclusions that made sense to me.

Being in a very dry land is simply a bad idea; best to move. Any group of survivors that is more than three days from fresh water won't be survivors, and once they've made it to a fresh water source there won't be many reasons to stray far from it for at least a couple of generations, so water-finding skills will probably not be useful and will be quickly lost.

Staying warm in extreme cold would be covered both by the fire-starting skills and the bow-making skills.

I wanted to put something about people skills, but I don't have any myself and didn't know what I could possibly say that would be remotely useful. Hopefully someone with more experience on that subject will survive as well. :)

Comment author: [deleted] 02 August 2010 06:14:38PM 1 point [-]

Let's examine the problem in more detail: Different disaster scenarios would require different pieces of information, so it would help if you knew exactly what kind of catastrophe. However, if you can preserve a very large compendium of knowledge, then you can create a catalogue of necessary information for almost every type of doomsday scenario (nuclear war, environmental catastrophe, etc.) so that you will be prepared for almost anything. If the amount of information you can save is more limited, then you should save the pieces of information that are the most likely to be useful in any given scenario in "catastrophe-space." Now we have to go about determining what these pieces of information are. We can start by looking at the most likely doomsday scenarios--Yoreth, since you started the thread, what do you think the most likely ones are?

Comment author: jimrandomh 02 August 2010 06:14:48PM 2 points [-]

Presupposing that only a limited amount of knowledge could be saved seems wrong. You could bury petabytes of data in digital form, then print out a few books' worth of hints for getting back to the technology level necessary to read it.

Comment author: NancyLebovitz 02 August 2010 06:32:12PM 1 point [-]

If the resources for printing are still handy. I don't feel comfortable counting on that at present levels of technology.

Comment author: KrisC 02 August 2010 08:23:33PM 4 points [-]

Maps.

Locations of pre-disaster settlements to be used as supply caches. Locations of structures to be used for defense. Locations of physical resources for ongoing exploitation: water, fisheries, quarries. Locations of no-travel zones, to avoid pathogens.

Comment author: sketerpot 02 August 2010 08:09:44AM 7 points [-]

I've been on a Wikipedia binge, reading about people pushing various New Age silliness. The tragic part is that a lot of these guys actually do sound fairly smart, and they don't seem to be afflicted with biological forms of mental illness. They just happen to be memetically crazy in a profound and crippling way.

Take Ervin Laszlo, for instance. He has a theory of everything, which involves saying the word "quantum" a lot and talking about a mystical "Akashic Field" which I would describe in more detail except that none of the explanations of it really say much. Here's a representative snippet from Wikipedia:

László describes how such an informational field can explain why our universe appears to be fine-tuned as to form galaxies and conscious lifeforms; and why evolution is an informed, not random, process. He believes that the hypothesis solves several problems that emerge from quantum physics, especially nonlocality and quantum entanglement.

Then we have pages like this one, talking more about the Akashic Records (because apparently it's a quantum field thingy and also an infinite library or something). The very first sentence sums it up: "The Akashic Records refer to the frequency gird programs that create our reality." Okay, actually that didn't sum up crap; but it sounded cool, didn't it? That page is full of references to the works of various people, cited very nicely, and the spelling and grammar suggest someone with education. There are a lot of pages like this floating around. The thing they all have in common is that they don't seem to consider evidence to be important. It's not even on their radar.

Scholarly writings from New Age people is a pretty breathtaking example of dark side epistemology, if anybody wants a case study in exactly what not to do. It's pretty intense.

Comment author: [deleted] 02 August 2010 10:32:40AM *  4 points [-]

I’m not yet good enough at writing posts to actually post something properly, but I hoped that if I wrote something here, people might be able to help me improve. So obviously people can comment however they normally would, but it would be great if people were willing to give me the sort of advice that would help me write a better post next time. I know that normal comments do this to some extent, but I’m also just looking for the basics – is this a good enough topic to write a post on but not well enough executed (so I should work on my writing)? Is it not a good enough topic? Why not? Is it not in-depth enough? And so on.

Is your graph complete?

The red gnomes are known to be the best arguers in the world. If you asked them whether the only creature that lived in the Graph Mountains was a Dwongle, they would say, “No, because Dwongles never live in mountains.”

And this is true, Dwongles never live in mountains.

But if you want to know the truth, you don’t talk to the red gnomes, you talk to the green gnomes who are the second best arguers in the world.

And they would say. “No, because Dwongles never live in mountains.”

But then they would say, “Both we and the red gnomes are so good at arguing that we can convince people that false things are true. Even worse though, we’re so good that we can convince ourselves that false things are true. So we always ask if we can argue for the opposite side just as convincingly.”

And then, after thinking, they would say, “We were wrong, they must be Dwongles, for only Dwongles ever live in places where no other creatures live. So we have a paradox and paradoxes can never be resolved by giving counter examples to one or the other claim. Instead of countering, you must invalidate one of the arguments.”

Eventually, they would say, “Ah. My magical fairy mushroom has informed me that Graph Mountain is in fact a hill, ironically named, and Dwongles often live in hills. So yes, the creature is a Dwongle.”

The point of all of that is best discussed after introducing a method of diagramming the reasoning made by the green gnomes. The following series of diagrams should be reasonably self-explanatory. A is a proposition whose truth we want to know (the creature in the Graph Mountains is a Dwongle) and not-A is its negation (the creature in the Graph Mountains is not a Dwongle). If a path is drawn between a proposition and the “Truth” box, then the proposition is true. Paths are not direct but go through a proof (here P1 stands in for “Dwongles never live in mountains” and P2 for “Only Dwongles live in a place where no other creatures live”). The diagrams connect to the argument made above by the green gnomes. First, we have the argument that it mustn’t be a Dwongle, because of P1. The second diagram shows the green gnomes realising that they also have an argument that it must be a Dwongle, due to P2. This middle type of diagram could be called a “Paradox Diagram.”

Figure 1

Figure 1. The green gnomes’ process of argument.

In his book Good and Real, Gary Drescher notes that paradoxes can’t be resolved by making more counterarguments (the approach shown in figure 2, which, considered graphically, is obviously no help: both propositions are still shown to be true) but rather by invalidating one of the arguments. That’s what the green gnomes did when they realised that Graph Mountain was actually a hill, and the final diagram in figure 1 shows the result (when you remove a vertex, like P1, you remove all the lines connected to it as well).

Figure 2

Figure 2. Attempting to resolve a paradox via counter arguments rather than invalidation.

The interesting thing in all of this is that the first and third diagrams in figure 1 look very similar. In fact, they’re the same, just with different propositions proven. And this raises a problem: it can be very hard to tell the difference between an incomplete paradox diagram and a completed proof diagram. The difference between the two is whether you’ve tried to find an argument for the opposite of the proposition proven and, if you did find one, whether you’ve managed to invalidate that argument.

What this means is, if you’re not confident that your proof for a proposition is true, you can’t be sure that you’ve taken all of the appropriate steps to establish its truth until you’ve asked: Is my graph complete?
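For anyone who wants to play with this, here’s a rough sketch of the bookkeeping in code. The representation is my own invention, not anything from Drescher: it just records which proofs connect a proposition to “Truth”, and invalidating a proof removes that vertex along with its lines.

```python
# Sketch of the diagram bookkeeping: propositions are proven when some
# surviving proof vertex links them to Truth; a paradox is both A and
# not-A being proven at once. Names (P1, P2, Dwongle) follow the story.

class ArgumentGraph:
    def __init__(self):
        self.proofs = {}  # proof name -> proposition it links to Truth

    def add_proof(self, name, proposition):
        self.proofs[name] = proposition

    def invalidate(self, name):
        # Removing a vertex removes all the lines connected to it as well.
        self.proofs.pop(name, None)

    def proven(self, proposition):
        return any(p == proposition for p in self.proofs.values())

    def is_paradox(self, a, not_a):
        # An incomplete diagram: both a claim and its negation reach Truth.
        return self.proven(a) and self.proven(not_a)

g = ArgumentGraph()
g.add_proof("P1", "not-Dwongle")  # "Dwongles never live in mountains"
g.add_proof("P2", "Dwongle")      # "Only Dwongles live where nothing else does"
assert g.is_paradox("Dwongle", "not-Dwongle")

g.invalidate("P1")                # Graph Mountain is actually a hill
assert g.proven("Dwongle") and not g.is_paradox("Dwongle", "not-Dwongle")
```

Asking whether your graph is complete then amounts to asking whether you searched hard enough for proofs of not-A before trusting proven(A).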

Comment author: gwern 03 August 2010 06:18:45AM 0 points [-]

I like this, but in Good and Real, Drescher's paradigm works because he then supplies a few examples where he invalidates a paradox-causing argument, and then goes on to apply this general approach. Aside from your hypothetical gnome example, where do you actually check that your graph is complete?

Comment author: gwern 02 August 2010 11:11:39AM 2 points [-]

From the Long Now department: "He Took a Polaroid Every Day, Until the Day He Died"

My comment on the Hacker News page describes my little webcam script to use with cron and (again) links to my Prediction Book page.

Comment author: hegemonicon 02 August 2010 12:44:21PM 20 points [-]

The game of Moral High Ground (reproduced completely below):

At last it is time to reveal to an unwitting world the great game of Moral High Ground. Moral High Ground is a long-playing game for two players. The following original rules are for one M and one F, but feel free to modify them to suit your player setup:

  1. The object of Moral High Ground is to win.

  2. Players proceed towards victory by scoring MHGPs (Moral High Ground Points). MHGPs are scored by taking the conspicuously and/or passive-aggressively virtuous course of action in any situation where culpability is in dispute.

(For example, if player M arrives late for a date with player F and player F sweetly accepts player M's apology and says no more about it, player F receives the MHGPs. If player F gets angry and player M bears it humbly, player M receives the MHGPs.)

  3. Point values are not fixed, vary from situation to situation and are usually set by the person claiming them. So, in the above example, forgiving player F might collect +20 MHGPs, whereas penitent player M might collect only +10.

  4. Men's MHG scores reset every night at midnight; women's roll over every day for all time. Therefore, it is statistically highly improbable that a man can ever beat a woman at MHG, as the game ends only when the relationship does.

  5. Having a baby gives a woman +10,000 MHG points over the man involved and both parents +5,000 MHG points over anyone without children.

My ex-bf and I developed Moral High Ground during our relationship, and it has given us years of hilarity. Straight coupledom involves so much petty point-scoring anyway that we both found we were already experts.

By making a private joke out of incredibly destructive gender programming, MHG releases a great deal of relationship stress and encourages good behavior in otherwise trying situations, as when he once cycled all the way home and back to retrieve some forgotten concert tickets "because I couldn't let you have the Moral High Ground points". We are still the best of friends.

Play and enjoy!

From Metafilter

Comment author: NancyLebovitz 02 August 2010 03:19:17PM 4 points [-]

The whole thread is about relationship hacks-- it's fascinating.

Comment author: sketerpot 02 August 2010 06:59:35PM 4 points [-]

One of the first comments is something I've been saying for a while, about how to admit that you were wrong about something, instead of clinging to a broken opinion out of stubborn pride:

Try to make it a personal policy to prove yourself WRONG on occasion. And get excited about it. Realizing you've been wrong about something is a sure sign of growth, and growth is exciting.

The key is to actually enjoy becoming less wrong, and to take pride in admitting mistakes. That way it doesn't take willpower, which makes everything so much easier.

Comment author: zaph 02 August 2010 01:03:09PM 3 points [-]

I came across a blurb on Ars Technica about "quantum memory" with the headline proclaiming that it may "topple Heisenberg's uncertainty principle". Here's the link: http://arstechnica.com/science/news/2010/08/quantum-memory-may-topple-heisenbergs-uncertainty-principle.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss

They didn't source the specific article, but it seems to be this one, published in Nature Physics. Here's that link: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys1734.html

This is all well above my pay grade. Is this all conceptual? Are the scientists involved anywhere near an experiment to verify any of this? In a word: huh?

Comment author: Vladimir_Nesov 02 August 2010 01:59:04PM -1 points [-]

I don't want items of this kind discussed on LW. It's either off-topic or crackpottery, irrelevant whichever is the case.

Comment author: zaph 02 August 2010 02:53:24PM 4 points [-]

Considering the source was Nature, I doubt your analysis is correct. The researchers are from Ludwig-Maximilians-University and ETH Zürich, which appear to be respectable institutions. I found a write-up at Science Daily (http://www.sciencedaily.com/releases/2010/07/100727082652.htm) that provides some more details on the research. From that link:

"The teams at LMU and the ETH Zurich have now shown that the result of a measurement on a quantum particle can be predicted with greater accuracy if information about the particle is available in a quantum memory. Atoms or ions can form the basis for such a quantum memory.

The researchers have, for the first time, derived a formula for Heisenberg's Principle, which takes account of the effect of a quantum memory. In the case of so-called entangled particles, whose states are very highly correlated (i.e. to a degree that is greater than that allowed by the laws of classical physics), the uncertainty can disappear.

According to Christandl, this can be roughly understood as follows "One might say that the disorder or uncertainty in the state of a particle depends on the information stored in the quantum memory. Imagine having a pile of papers on a table. Often these will appear to be completely disordered -- except to the person who put them there in the first place."

This is one of the very few places online that I've seen thoughtful discussion on the implications of quantum mechanics, so I felt research that could impact quantum theory would be relevant.
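If I'm reading the write-ups correctly, the result is an entropic version of the uncertainty principle with an extra term for the quantum memory. This is my own reconstruction from the coverage, not a formula quoted in either article, so take the exact symbols with a grain of salt:

```latex
% Hedged sketch: R and S are two incompatible measurements on particle A,
% B is the quantum memory, H(. | B) is conditional von Neumann entropy,
% and c is the maximal overlap between the two measurement bases.
\[
  H(R \mid B) + H(S \mid B) \;\geq\; \log_2 \frac{1}{c} + H(A \mid B),
  \qquad c = \max_{r,s} \bigl| \langle \psi_r \mid \phi_s \rangle \bigr|^2
\]
```

The last term is what the quote is getting at: with entanglement between A and the memory B, H(A|B) can go negative, so for highly entangled particles the lower bound can drop to zero and "the uncertainty can disappear."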

Comment author: RobinZ 02 August 2010 05:16:17PM 2 points [-]

This is one of the very few places online that I've seen thoughtful discussion on the implications of quantum mechanics, so I felt research that could impact quantum theory would be relevant.

The discussion of quantum mechanics Eliezer Yudkowsky did was not because quantum mechanics is relevant to the interests of this community, but because the counterintuitive nature of quantum mechanics offered good case studies to use in discussing rationality.

Comment author: NancyLebovitz 02 August 2010 04:07:49PM 1 point [-]

Open source textbooks

I'm not sure if they're exactly open source-- what's in them is centrally controlled. However, they're at least free online.

Comment author: sketerpot 02 August 2010 06:54:28PM 0 points [-]

So many of the problems with typical education systems can be solved by moving to a really good computer-based education system. Lectures given by marginally qualified teachers could be replaced by videos of really good lectures from excellent teachers. We could avoid crappy textbooks. The system could adapt to the pace of each student, so that if they don't understand something, they can take extra time to learn it properly, and if they do understand something, they can go on to the next thing instead of waiting for the rest of the class to catch up. Human teachers could help students out, and do more interactive teaching because they would no longer have their time filled with lecturing and grading and miscellaneous crap work. Ideally. I wouldn't trust any existing education system to actually implement this in a sane way, of course.

(By the way, Curriki doesn't give links, but some of the course materials on there really are open source. For example, you can make a fork of Free High School Science Textbooks by going to their web site and snagging the LaTeX source code.)

Comment author: NancyLebovitz 02 August 2010 07:04:32PM 0 points [-]

I think the system would need more in the way of study groups than you're envisioning-- maybe even study groups that meet in person. And while multiple choice and short answer tests could be graded by computers, papers shouldn't be.

Other than that, I agree with what you've said.

Comment author: sketerpot 02 August 2010 08:31:52PM 1 point [-]

We don't actually disagree; I was envisioning lots of study groups (hopefully including many that meet in person), and you're obviously correct that computers wouldn't be able to grade anything too complicated. I just didn't communicate this effectively, since I was pressed for time.

I think it's important, if you're doing in-person study groups, that each student should have to answer questions in front of the rest of the class -- put them on the spot, both to wake them up and as an incentive to study so they don't look bad.

Here's a sketch of how a college professor teaching Intro to Newtonian Physics could revamp the class to get it half-way to educational heaven:

  1. The lectures are online, taught by someone who's really good at lecturing. There must be a way to play the videos at high speed. The 1.4x speed on BloggingHeads is about right, I think.

  2. Each week, students are assigned a set of topics to cover. These topics have associated lectures, readings, and (non-graded) homework problems. There may be some online short-answer quizzes to force people to keep up a reasonable pace.

  3. There are once- or twice-weekly discussion sections, where two things happen. Students ask any questions they've been wondering about; and they have to do problems. One way that works in this particular class is to put students in groups of one or two, give them sections of blackboard, and ask them to solve particular problems from the book. If they didn't watch the lectures and study, they will embarrass themselves in public. This also gives the opportunity for teachers to see what the problems are and help out.

  4. The class has got to have a discussion forum, preferably something relatively painless, like the Reddit code with LaTeX math support, or a phpBB forum. And it's part of the teachers' job to participate. When I took physics, the forum was painful crap, but even then it got used to great effect. People actually had voluntary discussions of physics! And the teachers' help on the homework problems was nice.

  5. One hard homework problem per week, graded by hand. This should take two or three hours for decent students, and really force them to think.

Notice how much less time the teachers spend preparing lectures, and how much more convenient this is for everyone, since there are only one or two scheduled class times per week, and neither of those is an enormous faceless lecture section. This also offers a third option for students who would otherwise choose between coming to lectures and maybe falling asleep, or staying home and sleeping.

Now, this isn't the whole of my grand vision. It doesn't have any provision for students to proceed at different speeds. It doesn't necessarily allow for students to choose between different sets of lectures, or different textbooks, though that would be straightforward to add. The lectures don't necessarily come with interesting notes and Wikipedia links, though they could.

But I think this would be a big improvement on the current system, which is hella clunky and unpleasant.

Comment author: NancyLebovitz 02 August 2010 08:45:52PM 1 point [-]

I think it's important, if you're doing in-person study groups, that each student should have to answer questions in front of the rest of the class -- put them on the spot, both to wake them up and as an incentive to study so they don't look bad.

What's a good level of challenge for some would lead to paralyzing anxiety for others. One advantage to a mostly online system is that students can choose classes with policies that suit the way they learn.

Comment author: Matt_Simpson 02 August 2010 05:02:37PM *  8 points [-]

Was Kant implicitly using UDT?

Consider Kant's categorical imperative. It says, roughly, that you should act such that you could will your action as a universal law without undermining the intent of the action. For example, suppose you want to obtain a loan for a new car and never pay it back - you want to break a promise. In a world where everyone broke promises, the social practice of promise keeping wouldn't exist and thus neither would the practice of giving out loans. So you would undermine your own ends and thus, according to the categorical imperative, you shouldn't get a loan without the intent to pay it back.

Another way to put Kant's position would be that you should choose such that you are choosing for all other rational agents. What does UDT tell you to do? It says (among other things) that you should choose such that you are choosing for every agent running the same decision algorithm as yourself. It wouldn't be a stretch to call UDT agents rational. So Kant thinks we should be using UDT! Of course, Kant can't draw the conclusions he wants to draw because no human is actually using UDT. But that doesn't change the decision algorithm Kant is endorsing.

Except... Kant isn't a consequentialist. If the categorical imperative demands something, it demands it no matter the circumstances. Kant famously argued that lying is wrong, period. Even if the fate of the world depends on it.

So Kant isn't really endorsing UDT, but I thought the surface similarity was pretty funny.
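As a toy illustration of the parallel (the payoffs and names are entirely invented, just to make the "choose as if choosing for all rational agents" step concrete):

```python
# Sketch of the universalizing step shared by the categorical imperative
# and UDT: evaluate each action as if it were the output of *every* agent
# running this same algorithm, then pick the best universalized outcome.

def payoff_if_universal(action):
    # Hypothetical payoffs for the loan example: if every agent breaks
    # promises, lending collapses and the promise-breaker gains nothing.
    payoffs = {"break_promise": 0, "keep_promise": 10}
    return payoffs[action]

def choose(actions):
    # Choose as though choosing for all agents running this algorithm.
    return max(actions, key=payoff_if_universal)

print(choose(["break_promise", "keep_promise"]))  # keep_promise
```

The consequentialist flavor lives in the max step; Kant, by contrast, would forbid promise-breaking no matter what numbers you put in the payoff table.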

Comment author: SilasBarta 02 August 2010 05:26:23PM *  2 points [-]

Drescher has some important things to say about this distinction in Good and Real. What I got out of it is that the CI is justifiable on consequentialist or self-serving grounds, so long as you relax the constraint that you can only consider the causal consequences (or "means-end links") of your decisions, i.e., things that happen "futureward" of your decision.

Drescher argues that specifically ethical behavior is distinguished by its recognition of these "acausal means-end links", in which you act for the sake of what would be the case if-counterfactually you would make that decision, even though you may already know the result. (Though I may be butchering it -- it's tough to get my head around the arguments.)

And I saw a parallel between Drescher's reasoning and UDT, as the former argues that your decisions set the output of all similar processes to the extent that they are similar.

Comment author: Yvain 03 August 2010 07:39:32AM 1 point [-]

I thought Kant sounded a lot more like TDT than UDT. Or was that what you meant?

Comment author: Eneasz 02 August 2010 05:21:48PM *  8 points [-]

George Thompson, an ex-English professor and ex-cop, now teaches a method he calls "Verbal Judo". Very reminiscent of Eliezer's Bayesian Dojo, this is a primer on rationalist communication techniques, focusing on defensive & redirection tactics. http://fora.tv/2009/04/10/Verbal_Judo_Diffusing_Conflict_Through_Conversation

Comment author: JenniferRM 02 August 2010 10:51:23PM 4 points [-]

Thanks. That was a compact and helpful 90 minutes. The first 30 minutes were OK, but the 2nd 30 were better, and the 3rd was the best. Towards the end I got the impression that he was explaining lessons that were the kind of thing people spend 5 years learning the hard way and that lots of people never learn for various reasons.

Comment author: Blueberry 02 August 2010 11:31:06PM 2 points [-]

That sounds really interesting. I wish there were a transcript available!

Comment author: EStokes 02 August 2010 05:31:47PM 3 points [-]

Are there any posts people would like to see reposted? For example, Where Are We seems like it maybe should be redone, or at least linked in About... Or so I thought, but I just checked About and the page for introductions wasn't linked, either. Huh.

Comment author: thomblake 02 August 2010 06:29:30PM 3 points [-]

It would be nice if we had profile pages with machine-readable information and an interface for simple queries so posts such as that one would be redundant.

Comment author: timtyler 02 August 2010 06:16:31PM *  3 points [-]

I made some comments on the recently-deleted threads that got orphaned when the whole topic was banned and the associated posts were taken down. Currently no-one can reply to the comments. They don't relate directly to the banned subject matter - and some of my messages survive despite the context being lost.

Some of the comments were SIAI-critical - and it didn't seem quite right to me at the time for the moderator to crush any discussion about them. So, I am reposting some of them as children of this comment in an attempt to rectify things - so I can refer back to them, and so others can comment - if they feel so inclined:

Comment author: timtyler 02 August 2010 06:16:58PM *  3 points [-]

They used to have a "commitment" that:

"Technology developed by SIAI will not be used to harm human life."

...on their web site. I probably missed the memo about that being taken down.

Comment author: timtyler 02 August 2010 06:17:15PM 6 points [-]

[In the context of SIAI folks thinking an unpleasant AI was likely]

The SIAI derives its funding from convincing people that the end is probably nigh - and that they are working on a potential solution. This is not the type of organisation you should trust to be objective on such an issue - they have obvious vested interests.

Comment author: Johnicholas 02 August 2010 07:42:03PM 2 points [-]

I've noticed this structural vulnerability to bias too - Can you think of any structural changes that might reduce or eliminate this bias?

Maybe SIAI ought to be offering a prize for substantially justified criticism of some important positional documents, as judged by some disinterested agent?

Comment author: timtyler 02 August 2010 08:20:25PM *  3 points [-]

They are already getting some critical feedback.

I think I made much the same points in my DOOM! video. DOOM mongers:

  • tend to do things like write books about THE END OF THE WORLD - which gives them a stake in promoting the topic ...and...

  • are a self-selected sample of those who think DOOM is very important (and so, often, highly likely) - so naturally they hold extreme views - and represent a sample from the far end of the spectrum;

  • clump together, cite each others papers, and enjoy a sense of community based around their unusual views.

It seems tricky for the SIAI to avoid the criticism that they have a stake in promoting the idea of DOOM - while they are funded the way they are.

Similarly, I don't see an easy way of avoiding the criticism that they are a self-selected sample from the extreme end of a spectrum of DOOM beliefs either.

If we could independently establish p(DOOM), that would help - but measuring it seems pretty challenging.

IMO, a prize wouldn't help much - but I don't know for sure. Many people behave irrationally around prizes - so it is hard to be very confident here.

I gather they are working on publishing some positional documents. It seems to be a not-unreasonable move. If there is something concrete to criticise, critics will have something to get their teeth into.

Comment author: NihilCredo 03 August 2010 03:15:25AM 0 points [-]

For the curious: DOOM!

Comment author: timtyler 02 August 2010 06:17:25PM *  2 points [-]

[In the context of SIAI folks thinking an unpleasant AI was likely]

Re: "The justification is that uFAI is a lot easier to make."

That seems like naive reasoning. It is a lot easier to make a random mess of ASCII that crashes or loops - and yet software companies still manage to ship working products.

Comment author: WrongBot 02 August 2010 06:26:03PM 3 points [-]

Those software companies test their products for crashes and loops. There is a word for testing an AI of unknown Friendliness and that word is "suicide".

Comment author: timtyler 02 August 2010 06:39:14PM *  4 points [-]

That just seems to be another confusion to me :-(

The argument - to the extent that I can make sense of it - is that you can't restrain an super-intelligent machine - since it will simply use its superior brainpower to escape from the constraints.

We successfully restrain intelligent agents all the time - in prisons. The prisoners may be smarter than the guards, and they often outnumber them - and yet still the restraints are usually successful.

Some of the key observations to my mind are:

  • You can often restrain one agent with many stupider agents;
  • The restraining agents do not need to be humans - they can be other machines;
  • You can often restrain one agent with a totally dumb cage;
  • Complex systems can often be tested in small pieces (unit testing);
  • Large systems can often be tested on a smaller scale before deployment;
  • Systems can often be tested in virtual environments, reducing the cost of failure.

Discarding the standard testing-based methodology would be very silly, IMO.

Indeed, it would sabotage your project to the point that it would almost inevitably be beaten - and there is very little point in aiming to lose.

Comment author: WrongBot 02 August 2010 07:15:42PM 1 point [-]

Are you familiar with the AI-Box experiment? We can restrain human-intelligence level agents in prisons, most of the time. But the question to ask is: how effective was the first prison? Because that's the equivalent case.

None of the safety measures you propose are safe enough. You're underestimating the power of a recursively self-improving AI by a factor I can't begin to estimate--which is kind of the point.

Comment author: Vladimir_Nesov 02 August 2010 07:32:16PM 4 points [-]

A much stronger argument than all-powerful AIs suddenly escaping (which is still not without merit) is that AI will have an incentive to behave as we expect it to behave, until at some point we no longer control it. It'll try its best to pass all tests.

Comment author: WrongBot 02 August 2010 07:53:01PM 2 points [-]

I suppose I was mentally classifying that kind of behavior as an escape; you're right that it should be called out as a separate point of failure.

Comment author: Vladimir_Nesov 02 August 2010 08:08:21PM *  6 points [-]

My point is that "ai box experiment" communicates orders of magnitude less evidence about the danger of escaping AIs than people like to imply, and there are lots of stronger and simpler self-contained arguments such as the one I gave. (The overall danger is much greater than even that, because these are specific plots with an obvious villain, while reality is more subtle.)

Comment author: WrongBot 02 August 2010 08:18:24PM 1 point [-]

Ahhh, I see what you're getting at. Agreed.

Comment author: NihilCredo 03 August 2010 03:13:56AM 0 points [-]

For that matter, calling it an "experiment" is quite misleading.

Comment author: timtyler 02 August 2010 08:08:14PM *  2 points [-]

So: while it believes it is under evaluation it does its very best to behave itself?

Can we wire that belief in as a prior with p=1.0?

Comment author: timtyler 02 August 2010 08:04:49PM *  3 points [-]

It won't be the first prison - or anything like it.

If we have powerful intelligence that needs testing, then we can have powerful guards too.

The AI-Box experiment has human guards. Consequently, it has very low relevance to the actual problem. Programmers don't build their test harnesses out of human beings.

Safety is usually an economic trade off. You can usually have a lot of it - if you are prepared to pay for it.

Comment author: Morendil 02 August 2010 09:45:57PM 2 points [-]

It is a lot easier to make a random mess of ASCII that crashes or loops - and yet software companies still manage to ship working products.

Still, a lot of these "working products" are the output of a filtering process which starts from a random mess of ASCII that crashes or loops, and tweaks it until it's less obviously broken. (Most of the job of testing being, typically, left to the end user.)

Comment author: timtyler 02 August 2010 09:57:19PM *  1 point [-]

Sure. The point is that - to conclude that a target will be missed - it is not sufficient to observe how small it is. Programmers routinely hit minuscule targets in search spaces. To make the case, you would also need to argue that those aiming at the target are not good marksmen.

Comment author: JGWeissman 02 August 2010 09:55:45PM *  2 points [-]

software companies still manage to ship working products.

Software companies manage to ship products that do sort of what they want, that they can patch to more closely do what they want. This is generally after rounds of internal testing, in which they try to figure out if it does what they want by running it and observing the result.

But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.

Comment author: timtyler 02 August 2010 10:23:06PM *  1 point [-]

Most programmers are supervised. So, this claim is hard to parse.

Machine intelligence has been under development for decades - and there have been plenty of patches so far.

One way of thinking about the process is in terms of increasing the "level" of programming languages. Computers already write most machine code today. Eventually humans will be able to tell machines what they want in ordinary English - and then a "patch" will just be some new instructions.

Comment author: Apprentice 02 August 2010 10:36:56PM 0 points [-]

A computer which understands human languages without problems will have achieved general intelligence. We won't necessarily be able to give it "some new instructions", or at least it might not be inclined to follow them.

Comment author: timtyler 03 August 2010 05:16:46AM 1 point [-]

Well, sure - but if we build them appropriately, they will. We should be well motivated to do that - people are not going to want to buy bad robots, or machine assistants that don't do what we tell them. Consumers buying potentially-dangerous machines will be looking for safety features - STOP buttons and the like. The "bad" projects are less likely to get funding or mindshare - and so have less chance of getting off the ground.

Comment author: WrongBot 03 August 2010 05:24:50AM 2 points [-]

Well, sure - but if we build them appropriately, they will.

You are assuming the very thing that is being claimed to be astonishingly difficult. You also don't seem to accept the consequences of recursive self-improvement. May I ask why?

Comment author: timtyler 03 August 2010 05:43:01AM *  2 points [-]

I was not "assuming" - I said "if"!

The issue needs evidence - and the idea that an unpleasant machine intelligence is easy to build is not - in itself - good quality evidence.

It is easier to build many things that don't work properly. A pile of scrap metal is easier to build than a working car - but that doesn't imply that automotive engineers produce piles of scrap.

The first manned moon rocket had many safety features - and in fact worked successfully the very first time - and then only a tiny handful of lives were at stake. If the claim is that safety features are likely to be seriously neglected, then one has to ask what reasoning supports that.

The fact that nice agents are a small point in the search space is extremely feeble evidence on the issue.

"The consequences of recursive self-improvement" seems too vague and nebulous to respond to. Which consequences.

I have written a fair bit about self-improving systems. You can see some of my views on: http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

Comment author: WrongBot 03 August 2010 07:12:54AM -1 points [-]

As Vladimir Nesov pointed out, the first manned moon rocket wasn't a superintelligence trying to deceive us. All AGIs look Friendly until it's too late.

Comment author: timtyler 03 August 2010 07:25:04AM 0 points [-]

It is a good job we will be able to scan their brains, then, and see what they are thinking. We can build them with noses that grow longer whenever they lie if we like.

Comment author: NancyLebovitz 03 August 2010 07:14:22AM 0 points [-]

It's an interesting problem-- you might want a robot which will do what you tell it, or you might want a robot which will at least question orders which would be likely to get you into trouble.

Comment author: JGWeissman 02 August 2010 10:42:09PM 3 points [-]

Most programmers are supervised.

By other humans. If we program an AGI, then it will supervise all future programming.

Machine intelligence has been under development for decades - and there have been plenty of patches so far.

Machine intelligence does not yet approach human intelligence. We are talking about applying patches on a superintelligence.

and then a "patch" will just be some new instructions.

The difficulty is not in specifying the patch, but in applying it to a powerful superintelligence that does not want it.

Comment author: timtyler 03 August 2010 05:23:13AM *  0 points [-]

All computer programming will be performed and supervised by engineered agents eventually. But so what? That is right, natural and desirable.

It seems as though you are presuming a superintelligence which doesn't want to do what humans tell it to. I am sure that will be true for some humans - not everyone can apply patches to Google today. However, for other humans, the superintelligence will probably be keen to do whatever they ask of it - since it will have been built to do just that.

Comment author: rwallace 02 August 2010 10:54:04PM 2 points [-]

But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.

In fiction, yes. Fictional technology appears overnight, works the first time without requiring continuing human effort for debugging and maintenance, and can do all sorts of wondrous things.

In real life, the picture is very different. Real life technology has a small fraction of the capabilities of its fictional counterpart, and is developed incrementally, decade by painfully slow decade. If intelligent machines ever actually come into existence, not only will there be plenty of time to issue patches, but patching will be precisely the process by which they are developed in the first place.

Comment author: JoshuaZ 03 August 2010 02:40:43AM 3 points [-]

I agree somewhat with this as a set of conclusions, but your argument deserves to get downvoted because you've made statements that are highly controversial. The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there's an AI that's at all capable of such improvement, the AI will rapidly move outside our control. There are arguments against such a possibility being likely, but this is not a trivial matter. Moreover, comparing the situation to fiction is unhelpful; just because something is common in fiction, that's not an argument that such a situation can't actually happen in practice. Reversed stupidity is not intelligence.

Comment author: NihilCredo 03 August 2010 03:09:04AM *  2 points [-]

your argument deserves to get downvoted because you've made statements that are highly controversial

Did you accidentally pick the wrong adjective, or did you seriously mean that controversy is unwelcome in LW comment threads?

Comment author: ata 03 August 2010 03:18:33AM *  4 points [-]

I read the subtext as "...you've made statements that are highly controversial without attempting to support them". Suggesting that there will be plenty of time to debug, maintain, and manually improve anything that actually fits the definition of "AGI" is a very significant disagreement with some fairly standard LW conclusions, and it may certainly be stated, but not as a casual assumption or a fact; it should be accompanied by an accordingly serious attempt to justify it.

Comment author: xamdam 02 August 2010 07:11:07PM 2 points [-]

Wei Dai has cast some doubts on the AI-based approach

Assuming that it is unlikely we will obtain fully satisfactory answers to all of the questions before the Singularity occurs, does it really make sense to pursue an AI-based approach?

I am curious if he has "another approach" he wrote about; I am not brushed up on sl4/ob/lw prehistory.

Personally I have some interest in increasing intelligence capability on individual level via "tools of thought" kind of approach, BCI in the limit. There is not much discussion of it here.

Comment author: Wei_Dai 03 August 2010 01:53:16AM 4 points [-]

No, I haven't written in any detail about any other approach. I think when I wrote that post I was mainly worried that Eliezer/SIAI wasn't thinking enough about what other approaches might be more likely to succeed than FAI. After my visit to SIAI a few months ago, I became much less worried because I saw evidence that plenty of SIAI people were thinking seriously about this question.

Comment author: xamdam 03 August 2010 03:27:36AM 0 points [-]

I haven't seen any other approaches mentioned here specifically; it would be interesting to hear what those thoughts are, if they are publishable.

I think there is a lot of room for improving on Engelbart's approach with modern tools. It may also be viewed as a booster to the FAI rocket, if it increases productivity enough.

Comment author: taw 03 August 2010 12:42:50AM 1 point [-]

I've heard many times here that Gargoyles involved some interesting multilevel plots, but the first few episodes had nothing like it, just standard Disneyishness. Any recommendations for which episodes are the best of the series, so I can check them out without sitting through the boring parts?

Comment author: Pavitra 03 August 2010 05:28:02AM 1 point [-]

Has there ever been a practical proof-of-concept system, even a toy one, for futarchy? Not just a "bare" prediction market, but actually tying the thing directly to policy.

If not, I suggest a programming nomic (aka codenomic) for this purpose.

If you're not familiar with the concept of nomic, it's a little tricky to explain, but there's a live one here in ECMAScript/Javascript, and an old copy of the PerlNomic codebase here. (There's also a scholarly article [PDF] on PerlNomic, for those interested.)

Also, if you're not familiar with the concept of nomic, you don't read enough Hofstadter.

Comment author: Blueberry 03 August 2010 06:35:30AM 0 points [-]

I would love to see a LessWrong Nomic game.

Comment author: Douglas_Knight 03 August 2010 06:47:59AM 0 points [-]

Nomic is way too complicated for a toy futarchy. RH suggests that a test system should be a single decision tied to a single conditional market. In particular, he suggests a fire-the-CEO market. You might call "futarchy" any conditional prediction market that is sponsored by (or even just known to) the decision makers. I am not aware of any such examples, but I think most prediction markets are fairly secret, so I would not be terribly surprised if some exist.
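The fire-the-CEO test described above could be sketched as a toy program: traders bet in two conditional markets ("company does well given CEO fired" vs. "company does well given CEO kept"), the decision follows whichever branch forecasts the better outcome, and in a real market bets in the branch not taken would be refunded. The sketch below is a minimal illustration of that decision rule only, with hypothetical names and a crude stake-weighted price in place of a real market mechanism:

```python
# Toy conditional prediction market for a single decision ("fire the CEO?").
# Each branch collects bets of the form (trader, stake, probability estimate);
# the "price" is a stake-weighted average of the estimates, and the decision
# goes to whichever conditional branch prices the good outcome higher.

from dataclasses import dataclass, field

@dataclass
class ConditionalMarket:
    condition: str
    bets: list = field(default_factory=list)  # (trader, stake, prob)

    def place_bet(self, trader, stake, prob):
        self.bets.append((trader, stake, prob))

    def price(self):
        # Stake-weighted average of traders' probability estimates.
        total = sum(stake for _, stake, _ in self.bets)
        if total == 0:
            return 0.5  # uninformative prior when nobody has traded
        return sum(stake * prob for _, stake, prob in self.bets) / total

def decide(fire_market, keep_market):
    """Take the action whose conditional market forecasts the better outcome."""
    return "fire" if fire_market.price() > keep_market.price() else "keep"

fire = ConditionalMarket("company does well | CEO fired")
keep = ConditionalMarket("company does well | CEO kept")
fire.place_bet("alice", stake=100, prob=0.7)
keep.place_bet("bob", stake=100, prob=0.4)
print(decide(fire, keep))  # "fire"
```

A real implementation would also need the refund step for the untaken branch and an actual market-maker for pricing; this only shows why tying one decision to one conditional market is a much smaller test than a full nomic.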

Comment author: NancyLebovitz 03 August 2010 07:23:32AM *  1 point [-]

As an alternative to trying to figure out what you'd want if civilization fell apart, are there ways to improve how civilization deals with disasters?

If a first world country were swatted hard by a tsunami or comparable disaster, what kind of prep, tech, or social structures might help more than what we've got now if they were there in advance?