Open Thread: October 2009
Hear ye, hear ye: commence the discussion of things which have not been discussed.
As usual, if a discussion gets particularly good, spin it off into a posting.
(For this Open Thread, I'm going to try something new: priming the pump with a few things I'd like to see discussed.)
This is just a comment I can edit to let people elsewhere on the Net know that I am the real Eliezer Yudkowsky.
10/30/09: Ari N. Schulman: You are not being hoaxed.
I'm Spartacus!
I apologize if this is blunt or already addressed, but it seems to me that the voting system here has a large user-base problem: the karma system has become nothing more than a popularity indicator.
Many here seem to vote up or down based on gut-level agreement or disagreement with the comment or post. For example, it is very troubling that some single-line comments of agreement, which in my opinion should have 0 karma, end up with massive amounts, while comments that oppose the popular beliefs here are voted down despite being important to the pursuit of rationality.
It was my understanding that karma should be an indicator of importance and a way of eliminating useless information, not just a way of indicating that a post is popular. The popularity of a post is nearly meaningless when you have such a range of experience and inexperience on a blog such as this.
Just a thought; feel free to disagree...
I think you're on to something - many commenters (myself included) probably vote based more on agreement or disagreement than on anything else, and this necessarily reinforces the groupthink. If we wanted to fix it, the way to go would be to define standard rules for upvoting and downvoting that reduced the impact of opinion. Opinion cannot be eliminated entirely - if someone says something stupid, for example, saying it should not be rewarded - but a set of clear guidelines could change the karma meter from a popularity score into a filter that sorts out the material worth paying attention to.
I think a well-thought-out proposal of such a method could make a reasonable top-level post.
So, there's this set, called W. The non-emptiness of W would imply that many significant and falsifiable conjectures, which we have not yet falsified, are false. What's the probability that W is empty?
(Yep, it's a bead jar guess. Show me your priors. I will not offer clarification unless I find that there's something I meant to be clearer about but wasn't.)
How many is "many"?
I say 0.9.
Movie: Cloudy with a Chance of Meatballs - I took the kids to see it this weekend, and it struck me as a fun illustration of the Unfriendly AI problem.
On reflection, I'm actually going to start spelling my first name again.
Hence this new account.
ADDENDUM: I mean, unless we have some name-change feature that I just couldn't find.
SECOND ADDENDUM: To anyone reading this on my userpage, you might be interested in my older comments.
Why? (If I may ask.)
I'll PM you.
I've been wishing we had one for a while -- I replicated my Reddit login without really thinking.
I guess you could implement one!
Regrettably my meager Python skills are not yet up to the task.
A welcome occasion to learn more?
Henry Markram's recent TED talk on cortical column simulation. Features philosophical drivel of appalling incoherence.
True, but the Blue Brain project is still very interesting, and it has provided - and hopefully will continue to provide - interesting results. Whether you agree with his theory or not, the technical side of what they are doing is impressive.
Yes - this talk is truly appalling.
This comment tests how much exposure comments posted "late" to open threads get. If you are reading this, then please either comment or upvote. Please don't do both, and don't downvote. When the next open thread comes, I'll post another test comment as soon as possible with the same instructions. Then I'll compare the scores.
If the difference is insignificant, a LW forum is not warranted, and open threads are entirely sufficient.
PS: If you don't see a test comment in the next open thread (e.g. I've gone missing), please do post one in my stead. Thank you.
Edit: Remember that if you don't think I deserve the karma, but still don't want to comment, you can upvote this comment and downvote any one or more of my other comments.
I am replying to this because I saw Nick Tarleton's comment in the recent comments panel, which Nick made because he saw ThomBlake's comment.
Of course, that sort of thing can in fact happen to a normal open thread comment, so it may still be a reasonable test.
I saw thomblake's comment, not this one.
It seems to me that forums provide a better experience for very long-running threads, and better indexing. (In both cases, such discussions would often warrant top-level posts, and being able to re-root a comment thread under a new post would be a nice feature.)
(FWIW, I tried to establish an unofficial forum for OB in 2008; a maximum of about five people ever used it.)
I wonder if anyone else is reading this...
You should probably make an explicit karma balance post for this.
Wow, there are a lot of people watching the "Recent Comments".
I saw the above quoted request (today, two weeks after it was made) because I saw RobinZ's reply to it (which was made today) at lesswrong.com/comments, got curious about the context of RobinZ's comment, then clicked on its "Parent" link.
Parenthetically, I do not like the idea of running part of this community on "web forum" software (e.g., phpBB) and will not participate unless I have to participate to continue to be part of the community.
I just crossed rhollerith's comment.
I read this comment. You may like to note that it was the first comment I saw (I always have my sort set to "Top") and it was quoted in the Google result for this thread, so I couldn't help but do so.
Check
I saw this comment show up in the Recent Comments bar.
I not only read it, I spotted a typo. I am the most awesome person ever.
I don't think this is true. One reason to want a forum is to maximize the total views of more narrowly focused posts. If I post a comment in an open thread and it is only of interest to a handful of people here, they might never see it. But if I post in a forum, where the post stays on the page longer and is indexed in a place where people with my interests can find it, there is a greater likelihood that someone will respond. The proper comparison is between the views a forum post gets and the views an open thread comment gets - not between two open thread comments at different times of the month. Plus, some people would like a space where they can post less complete ideas without worrying about getting hit with downvotes.
The way to decide this issue is really simple. Start a forum and see what happens.
(Edit: Also, this is my notice that I read the comment)
One reason against a forum that I can think of is that we'd rather not say low-quality things at all. Maybe we want to force ourselves to put our karma on the line at all times. Maybe we want to deny all opportunity for chatting. Enforce high standards. Discipline ourselves.
Ack.
I'm reluctant to upvote you for making this test without a karma-equalizing mechanism in place. At the same time, I don't want to mess up your test by failing to reply at all when I did see this comment. So I'm writing this. I feel a little like my good nature has been abused.
Downvote one or more random comments of mine to balance things out.
I read the comments feed (and am annoyed that it regularly overflows the only-20-comments limit between checks).
There is a "Next" button. Also, this counts as my comment.
There is no "Next" button on the comments feed; while there is IIRC an RFC for a formalized "Next page" function, it is not widely implemented.
I do check 'recent comments'. Is this supposed to be creating a feedback loop?
We could see this as the upper bound on comments posted to an old open thread; it's possible for a really good comment to be posted that invites replies, so logically you'd need to take into account the feedback loop it might cause (if you want to make any generalization about open threads).
I noticed Vlad's comment in the recent comments sidebar, and was curious. Make of that what you will.
Or I could ignore this, for obscurity of purpose.
I'm commenting only because I saw the comment in the sidebar and wondered who would be posting to a nigh-dead open thread.
Saw it, only because I happened to look at recent comments at the time.
For them's what are following LW comments but not current OB activity, Eliezer and Robin are getting into it about the necessity of Friendliness in future agents of superhuman intelligence right now.
Morpheus is fighting Neo!
Eliezer Yudkowsky and Andrew Gelman on Bloggingheads: Percontations: The Nature of Probability
I haven't watched it yet, but the set-up suggests it could focus a discussion, so should probably be given a top-level post.
I watched it, and it ends abruptly, so maybe Eliezer is trying to fix that. One interesting thing in the discussion was the Netflix challenge; unfortunately, they didn't get much into it. Would a simpler method be able to solve it more efficiently?
We need a snappy name like "analysis paralysis" that is focused on people who spend all their time studying rather than doing. They (we) intend to do, but never feel like they know enough to start.
I came up with the following while pondering the various probability puzzles of recent weeks, and I found it clarified some of my confusion about the issues, so I thought I'd post it here to see if anyone else liked it:
Consider an experiment in which we toss a coin to choose whether a person is placed into a one-room hotel or duplicated and placed into a two-room hotel. For each resulting instance of the person, we repeat the procedure, and so forth. The graph of this would be a tree in which the persons were edges and the hotels nodes. Each layer of the tree (each generation) would have equal numbers of 1-nodes and 2-nodes (on average, when numerous). So each layer would have 1.5 times as many outgoing edges as incoming, with 2/3 of the outgoing being from 2-nodes. If we pick a path away from the root, representing the person's future, in each layer we have an even chance of arriving at a 1- or 2-node, so our future will contain equal numbers of 1- and 2-hotels. If we pick a path towards the root, representing the person's past, in each layer we have a 2/3 chance of arriving at a 2-node, meaning that our past contained twice as many 2-hotels as 1-hotels.
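As a sanity check on the 2/3 figure, here is a minimal Monte Carlo sketch (the function name and parameters are my own, not part of the puzzle): simulate the branching process, keep the population bounded by uniform subsampling, and measure what fraction of persons in a late generation just came out of a 2-room hotel.

```python
import random

def simulate(pop_size=100_000, generations=20, seed=0):
    rng = random.Random(seed)
    # Each person is tagged with the kind of hotel they just left (their "past").
    people = [None] * pop_size
    for _ in range(generations):
        children = []
        for _ in people:
            if rng.random() < 0.5:
                children.append(1)        # 1-room hotel: one continuing person
            else:
                children.extend([2, 2])   # 2-room hotel: the person is duplicated
        # Uniform subsampling keeps the population bounded while preserving
        # the proportion of persons descending from 1-nodes vs. 2-nodes.
        people = rng.sample(children, min(pop_size, len(children)))
    return sum(1 for p in people if p == 2) / len(people)

print(f"fraction with a 2-hotel in their immediate past: {simulate():.3f} (theory: 2/3)")
```

The forward direction needs no simulation: each step is a fair coin flip, so the future contains 1- and 2-hotels in equal proportion.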
The Other Presumptuous Philosopher:
It begins pretty much as described here:
...except the simple experiment won't quite falsify one of the theories. You see, the experiment has a trillion different possible outcomes. If T1 is true, the outcome will be a specific possibility that scientists have already calculated. If T2 is true, the outcome will be a random one, distributed uniformly among all possibilities.
Well, the experiment is performed, and the result is the one that's consistent with both theories. For whatever reason, anthropic reasoning is pretty standard in this hypothetical universe, so now, not before but after the experiment, the two theories are considered to be pretty much on par with each other. Enter the Other Presumptuous Philosopher: "Hey guys, we can stop experimenting now, because I can already show to you now, using non-anthropic reasoning, that T1 is about a trillion times more likely to be true than T2!"
My point: the Presumptuous Philosopher argument, though a good argument against certainty of either anthropic or non-anthropic reasoning, isn't a good argument against anything else. It's about as good an argument as "If you think that's true, why don't you bet your life on it?"
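The update the Other Presumptuous Philosopher appeals to is just Bayes' rule; here is a sketch with an assumed 50/50 prior (the numbers are illustrative, not from the original thought experiment):

```python
N = 10**12                  # possible experimental outcomes
prior_T1 = prior_T2 = 0.5   # assumed equal priors, for illustration

# T1 predicted the observed outcome exactly; T2 spreads its probability
# uniformly over all N outcomes.
likelihood_T1 = 1.0
likelihood_T2 = 1.0 / N

posterior_odds = (prior_T1 * likelihood_T1) / (prior_T2 * likelihood_T2)
print(posterior_odds)       # on the order of 1e12: T1 favored a trillion to one
```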
I recently realized that I don't remember seeing any LW posts questioning if it's ever rational to give up on getting better at rationality, or at least on one aspect of rationality that a person is just having too much trouble with.
There have been posts questioning the value of x-rationality, and posts examining the possibility of deliberately being irrational, but I don't remember seeing any posts examining if it's ever best to just give up and stop trying to learn a particular skill of rationality.
For example, someone who is extremely risk-averse, and experiences severe psychological discomfort in situations involving risk, and who has spent years trying to overcome this problem with no success. Should this person keep trying to overcome the risk aversion, or just give up and never leave their comfort zone, focusing instead on strategies for avoiding situations involving risk?
Yes, the "someone" I mention above is myself.
And yes, I am asking this hoping that the answer gives me an excuse to be lazy.
I'm surprised that no one has given the obvious answer yet, which is:
If overcoming the problem really is hopeless, then give up and focus on more productive things, otherwise keep trying.
If it isn't obvious whether it's hopeless or not, then do a more detailed cost/benefit analysis.
Still, I don't remember seeing any LW post that even mentioned that sometimes giving up is an acceptable option. Or maybe I just forgot, or didn't notice.
It's been hinted at a few times, usually in terms of "how to pick goals" rather than "when to give up on goals". AFAIK, never a top-level post of "maybe you should give up and do something easier and/or more productive toward other goals". I think it'd be valuable.
This is random, and for all sorts of reasons possibly a bad idea - but have you ever thought about anti-anxiety medication? It might have side effects that turn you off of it, but it could help you deal with high-risk situations.
(I should disclaim: I'm not a doctor, my knowledge doesn't extend past personal experience and a cog sci minor. Obviously, not medical advice, etc.)
I personally didn't suggest it because it seemed like it's obvious to you, so the only interesting response would be to deny it for some good reason.
I would note that you shouldn't give up permanently. Maybe wait a year or a few, then see if you've grown in other ways that would make a new attempt more fruitful.
Upvoted. Good advice. Thanks.
http://lesswrong.com/lw/gx/just_lose_hope_already/ ?
Yes, that link is relevant and helpful, thanks.
It's not specifically about giving up on overcoming a particular irrational behaviour, but I guess the same advice applies.
Well, given that I don't know what you've actually tried, it's hard to say whether I think you've exhausted your options (though it sounds like this sort of thing might be best served by professional therapy). But sure, if the situation is really that bleak (assuming you have outside confirmation of this), then yeah, give it up. Work on something else. Does your psychological discomfort come with any risk? Or just when particular kinds of things are at risk?
Relatedly, has anyone thought about the relationship between rationality and psychotherapy? It just occurred to me that there might be a lot there.
It puts the 'R' in REBT.
Huh? You mean, like, psychotherapists are unusually irrational people? Or maybe that no rationalist would give any significant credence to any of the clinical psychology theory? Or maybe that a good rationalist will rarely need psychotherapy because their deduction skills are much better than most therapists? Please explain.
To be less snide, I find it quite unlikely that therapy would help PI significantly. (Of course, I know little of his/her specific circumstances.) I think a more fruitful course of action, if PI does want to overcome the problem*, would be to keep trying to overcome it directly, and meanwhile continue to form new, free relationships with a variety of trusted people and see if they can help at all by providing insight or emotional support. Social networks are better than the Yellow Pages at finding people with relevant insights. And good friends are better than good therapists at emotional support.
* Which isn't to say that PI should keep trying.
It is possible that therapy isn't usually cost-effective, but I don't know of any study which suggests the therapist market is uniquely distorted. People pay a lot of money for a good therapist, and therapists build their practices by way of referrals. I don't think I have to endorse Freudian psychoanalysis in order to think that talking to an experienced stranger about your problems might be helpful in ways that talking to friends wouldn't be. I don't know the details of Peer's problem (and sorry, Peer, for hijacking this), but his risk aversion might extend to fear of losing social capital and being embarrassed. If that's the case, telling him to go make more friends and tell them about his problems seems to miss the point.
What I meant by a relationship between rationality and psychotherapy is that therapy often involves getting people to be happier by having them behave more rationally. It seems to me that some of the methods and ideas discussed and used here could bear on therapeutic practice. Frankly, better than talking to friends for free (therapy from people you have other relationships with is always going to be more complicated, since there are all sorts of signaling and status issues that will get in the way of an honest dialog) would be talking to rationalist strangers for free. I imagine the Bayesian cult leaders of Eliezer's fiction could charge a nice fee for talking to people, helping them make life decisions free from bias, and overcoming akrasia. We've all recognized that a lot of the material that gets discussed here looks like less-useless self-help. To me, that means this material might also be less-useless other-help.
I sort of doubt it - but it would be great to know whether there are any practicing therapists or social workers who read Less Wrong.
Certainly. I didn't get the impression that that was the case from his comment, but perhaps it is.
My main beef with therapy is that it's ineffective at this. (Not in all cases, but more likely in the case of LW members.) It's certainly a noble goal.
I think you're saying here that you don't have to endorse any particular methodology in order to think etc. I agree with the conditional, but I somewhat disagree with the consequent.
I write about my personal experience with therapy on my blog, which certainly informs my writings here.
I more or less agree with this. I was smarter than my therapist too but it was still helpful for three reasons. First, it forced me to recite my motives, reasons and feelings out loud which made me more conscious of them so that I could actually analyze and evaluate them. Second, the questions she asked prompted new thoughts that I wouldn't have had. Even if the premise of her questions was silly (she wasn't a Freudian but had a tendency to bring up my mother at inopportune times) it still brought forth helpful thoughts. Third, while she was behind me in IQ she had enough experience and knowledge of patterns of behavior to call me on my bullshit. In my experience (and as I understand it, in studies) intelligent people are especially good at rationalizing away behavior and channeling emotional reactions in weird, unhelpful directions.
Anyway, that's what I got out of it. Eventually I think I reached a point of diminishing returns (once I could recognize patterns in my behavior, paying money to have someone else do it did seem useless). I still have a problem with putting my conclusions about my own unhealthy, irrational behavior to good use, but that doesn't seem like the kind of thing anyone else will be able to help me with.
You're definitely right that therapy is overall too ineffective - which is why I think it could benefit from the insights of this site. I actually think I could get a fair amount out of therapy with an extreme rationalist - and reading your blog, it seems like your problem with therapists is that they're not enough like your average Less Wrong poster.
Hmm. Maybe I was born unusually introspective, because my therapists never deepened my analysis or called me on bullshit. My experience may be more atypical than I thought.
I haven't heard of those studies. I'd be interested in any references you have. I'm familiar with the correlation between intelligence and kookiness, but this sounds a bit different, though probably related.
Heh. Well, sort of. That and, maybe, that I'm just not cut out for therapy.
This doesn't look like a hijack to me. I haven't suggested therapy to Peer, probably because I'm pretty strongly biased against doing so, but now that I think about it, it may be useful to at least consider it.
Carry on. :)
I agree. There are things that are part of you, but that you pretty much have to treat as external facts. Some of those are qualities of your utility function, such as risk aversion. I would not even try to change those.
Others are about abilities, like emotional behaviour, or akrasia of various kinds. Those you can try to change, but sometimes that is not possible, or would cost more than it is worth, and then you just accept them and concentrate on other things.
I was hoping this would get more of a response - Peer and I have spent a considerable bit of time talking about this, and it's gotten to the point where other perspectives would be useful.
My opinion is that it is, at a minimum, appropriate for someone in Peer's situation to accept the fact that they are nearly guaranteed to be overwhelmed by emotion, to the point of becoming dangerously irrational, in certain situations, and to take that fact into account in deciding what problems to try to tackle. And, I see it as irrational to feel guilty or panicky about not being able to do more.
Part of the problem, though, is that the risky situations Peer mentioned are SIAI-related, and he seems to see doing anything less than his theoretical best (without taking psychological issues into account) in that context as not just lazy but immoral in some sense.
Peer's comment is too vague and general for any meaningful response, and your comment doesn't add clarity ("Risky situations Peer mentioned are SIAI-related"?).
"Risk aversion"? In one interpretation it's a perfectly valid aspect of preference, not something that needs overcoming. For example, one can value sure-thing $1000 more than 11% probability at $10000.
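A worked version of that example: the gamble has the higher expected dollar value ($1,100 vs. $1,000), yet any concave utility function - the square root below is just an arbitrary illustration, not a claim about anyone's actual preferences - can rank the sure thing higher.

```python
import math

def utility(dollars):
    # A hypothetical concave utility curve (diminishing returns on money).
    return math.sqrt(dollars)

expected_dollars_gamble = 0.11 * 10_000          # $1100, more than the sure $1000
sure_thing = utility(1000)                       # ~31.6 utility units
gamble = 0.11 * utility(10_000)                  # 0.11 * 100 = 11 utility units
print(expected_dollars_gamble > 1000)            # True
print(sure_thing > gamble)                       # True: risk-averse but coherent
```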
I'm trying not to say anything here that's more Peer's business than mine, so I don't want to use real examples, and I'm not certain enough that I know the details of what's going on in Peer's head to make up examples, but it doesn't appear to be risk-aversion by that definition that's the problem. It's that when he's in what appears to him to be a high-stakes situation (and 'what appears to him to be' is very relevant there - this isn't a calculated response as far as I can tell, and being told by, for example, Michael Vassar that the risk in some situation is worth the reward is nearly useless), he panics, and winds up doing things that make the issue worse in some way - usually in the form of wasting a lot of energy by going around in circles and then eventually backing out of dealing with the situation at all.
Is this what's referred to as "choking under pressure"?
Yes, that seems like a reasonably accurate summary.
Everything Adelene has said so far is accurate.
Sorry, but I still haven't thought of a good example that wouldn't take too long to explain.
Another topic that Ade and I have been discussing is the difference between my idealized utility function (in which a major component is "maximize the probability that the Singularity turns out okay"), and whatever it is that actually controls my decisions (in which a major component is "avoid situations where my actions have a significant probability of making things worse").
(I think there was at least one LW post on the topic of the difference between these two utility functions, but I didn't find them after a quick search.)
So to answer Vladimir's question, in my idealized utility function, certainty is not inherently valuable, and I know that when faced with a choice between certainty and uncertainty, I should shut up and multiply. However, my actual utility function has a paralyzing inability to deal with uncertainty.
Other relevant details are:
*severe underconfidence
*lack of experience, common sense, and general sanity
*fear of responsibility
*an inability to deal with (what appear to be) high-stakes situations. A risk of losing $1000 is already enough to qualify as "paralyzingly high stakes".
Hmmm... Yeah, anxiety sucks.
You know, physiologically, fear and excitement are very similar. My Psychology 101 textbook mentioned an experiment in which experimental subjects who met a young woman in a situation where the environment was scary (a narrow bridge over a deep chasm) reported her as being more attractive than subjects who met her in a neutral setting. Many people are afraid of public speaking or otherwise performing before an audience. I'm something of an exception, because I find it exciting instead of scary. Maybe some practice at turning fear into excitement could help? I don't know exactly how to do that, but you could try watching scary movies, or riding roller coasters, or playing games competitively, or something like that.
Also, perhaps another possible way to deal is to not care as much about the outcome? Always look on the bright side of life, and all that. Maybe I've just read too much fiction and played too many video games, but it seems like things usually do tend to work out okay. After all, humanity did survive the Cold War without blowing itself up. I don't know how to do this, but if you think you could try to take a more abstract and less personal perspective on whatever is scaring you, it might help.
<tangent> CronoDAS, are you enjoying being useful in this context? Is it more fun than video games? If so, that's important information. Note it. </tangent>
Well, one way I do nothing is by reading LessWrong and other blogs, and posting comments. I tend to be hesitant to give authoritative advice about dealing with personal issues, as I'm probably more screwed up than average, but I can still make suggestions. I find it hard to imagine myself as a counselor of any kind, though.
As for "better than video games", sometimes yes, sometimes no. It depends a lot on the particular video game.
I feel it's a curiosity stopper to think of browsing the Internet as "doing nothing". You learn, you communicate, you help, you signal your expertise. Find better understanding of the gist of your motivation and turn it into a sustainable plan for driving your day-to-day activity (in particular for making some money).
It's not so much "doing nothing" as "something I do for no other reason than it's become part of my standard routine". I think I've become very much driven by habit; I have a tendency to keep playing a video game even after I've decided I don't like it very much and have plenty of others I could be playing.
To quote a friend of mine, 'it's pointless to doubt yourself. It only reduces what you can do.'
My meta-suggestion is to find things that you enjoy or care about (not the same thing) enough to put effort into handling them better. Giving advice in general doesn't seem to fall into that category - I don't remember seeing you do it regularly, which is the only measure I really have access to - but you seemed pretty engaged in this case, so there may be an aspect of this situation that you care about more than you would care about a run-of-the-mill situation. If there is, and if you can figure out what it is, you can use that information to find more things of that type, which is likely to be useful - you run into that 'having something to protect' effect.
Dual n-back is a game that's supposed to increase your IQ by up to 40%. http://en.wikipedia.org/wiki/Dual_n_back#Dual_n-back
Some think the effect is temporary; long-term studies are underway. Still, I wouldn't mind having to practice periodically. I've been at it for a few days, and might retry the Mensa test in a while. (I washed out at 113 a few years ago.) Download link: http://brainworkshop.sourceforge.net/
It seems to make sense. Instead of getting a faster CPU, a cheap and easy fix is to get more RAM. In the brain analogy, I've often thought of the "magic number seven" - isn't there any way to up that number and have more working memory? Nicholas Negroponte said something like "Perspective is worth 50 IQ points." I think that's a scope fail, but good perspective - being able to hold more of the problem in your head - might be worth about 30 IQ points.
So it's been almost 2 years. Have you taken any IQ tests after practicing?
Sorry, hiatus. No, I haven't been tested recently, and I've slacked off on the DNB; it starts to feel monotonous and frustrating, and I couldn't break through D3B. I'll try to pick it up again when I figure out how to get it to work on Ubuntu.
Any progress since? (It seems to work fine for me on Debian.)
Took a crack at it again; just now worked out how to change directories in a terminal.
To shill my DNB FAQ: http://www.gwern.net/N-back%20FAQ
As to temporary: if it's temporary, it's a very long temporary. From personal experience it takes months for my scores to begin to decay more than a few percent, and other people have reported scores unaffected by breaks of weeks or months as well.
The more serious concern for people who want big boosts is that looking over the multiple IQ before-after reports I've collated, I have 2 general impressions: that DNB helps you think quicker, but not better, and that the benefit is limited to around +10-15 points max.
(On a personal note, ZoneSeek, if after a few weeks or months of N-backing you've risen at least 4 levels and you retake the Mensa test, I would be quite interested to know what your new score is.)
Bug alert: this comment has many children, but doesn't currently have a "view children" link when viewing this entire thread.
I've only been reading Open Threads recently, so forgive me if it's been discussed before.
A band called The Protomen just recently came out with their second rock opera of a planned trilogy of rock operas based on (and we're talking based on) the Megaman video game. The first is The Protomen: Hope Rides Alone; the second is Act II: The Father of Death.
The first album tells the story of a people who have given up and focuses on the idea of heroism. The second album is more about the creation of the robots and the moral struggles that occur. I suggest you start with "The Good Doctor": http://www.youtube.com/watch?v=HP2NePWJ2pQ
Mini heuristic that seems useful but not big enough for a post.
To combat ingroup bias: before deciding which experts to believe, first mentally sort the list of experts by topical qualifications. Allow autodidact skills to count if they have been recognized by peers (publication, citing, collaboration, etc).
I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day to day basis.
If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?
Ostensibly, the goal is to make the correct decision. Yet there is nearly no discussion of what constitutes a correct decision. I see lots of contributors talking about calculating utilons, so that demonstrates that most contributors are hedonistic consequentialist utilitarians.
Am I correct then to assume that the implicit goal of the AI for the majority in the community is to aid in the maximization of human happiness?
If so I think there are serious problems that would be encountered and I think that the goal of maximizing happiness would not be accomplished.
The topic of what the goals of the AI should be has been discussed an awful lot.
I think the combination of moral philosopher and machine intelligence expert must be appealing to some types of personality.
Maybe I'm just dense, but I have been around a while and searched, yet I haven't stumbled upon a top-level post or anything of the like, here or at FHI, SIAI (other than ramblings about what AI could theoretically give us), OB, or elsewhere, which either breaks it down or gives a general consensus.
Can you point me to what you are talking about?
Probably the majority of such discussions took place on http://www.sl4.org/
Machines will probably do what they are told to do - and what they are told to do will probably depend a lot on who owns them and on who built them. Apart from that, I am not sure there is much of a consensus.
We have some books on the topic:
Moral Machines: Teaching Robots Right from Wrong - Wendell Wallach
Beyond AI: Creating The Conscience Of The Machine - J. Storrs Hall
...and probably hundreds of threads - perhaps search for "friendly" or "volition".
"Utilons" are a stand-in for "whatever it is you actually value". The psychological state of happiness is one that people value, but not the only thing. So, yes, we tend to support decision making based on consequentialist utilitarianism, but not hedonistic consequentialist utilitarianism.
See also: Coherent Extrapolated Volition
Upon reading that link (which I imagine is now fairly outdated?), his theory falls apart under the weight of its coercive nature, as the questioner points out.
It is understood that the impact of an AI will be on all of humanity, regardless of its implementation, if it is used for decision making. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link discusses, which implies that the decisions the AI would make would favor a "utility" calculation (spare me the argument about utilons; as an economist I have previously been neck deep in Bentham).
The discussion simultaneously dismisses and reinforces the importance of the debate itself, which seems contradictory. I personally think this is a much more important topic than is commonly thought, and I have yet to see a compelling argument otherwise.
From the people (researchers) I have talked to about this specifically, the responses I have gotten are: "I'm not interested in that, I want to know how intelligence works" or "I just want to make it work, I'm interested in the science behind it." And I think this attitude is pervasive. It is ignoring the subject.
Of course - which makes them useless as a metric.
Since you seem to speak for everyone in this category - how did you come to the conclusion that this is the optimal philosophy?
Thanks for the link.
Bayesian reasoning spotted in the wild at Language Log
More specifically, the Kullback-Leibler divergence, which is even awesomer.
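For anyone who hasn't met it before, the KL divergence is easy to compute directly. Here's a minimal sketch (my own illustration; the distributions are made up, not taken from the linked post):

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for discrete distributions given as lists of probabilities.
    Measures the information lost when Q is used to approximate P."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A fair coin vs. a heavily biased coin:
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # positive: the distributions differ
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0: identical distributions
```

Note that D_KL(P || Q) is zero exactly when the two distributions agree and grows as Q becomes a worse stand-in for P; it is not symmetric, so it isn't a distance metric.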
Mind-killer warning.
What is the opinion of everyone here on this? It's an essay of sorts (adapted from a speech) making a case for a guaranteed minimum income.
There's a difference between activities that are inherently desirable to do, just because they are fun/interesting/challenging, and activities that people can become accustomed to and eventually even like. I imagine farming is one of the latter. While I can envision a good deal of farmers continuing on farming without the economic incentive to do so, I doubt the replacement rate would be high enough to continue feeding the world.
I also imagine that, even if you abolish money, people would just recreate it, or at least an elaborate bartering system. I know I would personally. Note that there would be just as much desire from the 'consumer' as the 'producer' to recreate currency. Consider, for example, a hypothetical bridge building group, that just likes going around and building bridges for the sake of it. They're the best, and are in high demand. The group is happy to just build bridges as they work their way across the country, until suddenly a city not on their short list contacts them saying, "We desperately need a bridge! We'll do anything! You could live like kings here for months if you just build us a bridge!" It's one thing to want to do something for the joy of it, without remuneration, it's entirely another to actively reject payment. Thus, the cycle starts over again.
The author addresses this. He's not particularly opposed to paying people to do things; he's opposed to people having to do paid work or starve. The existence of a GMI should make people less willing to do unpleasant jobs for relatively low wages, effectively reducing the supply of unskilled labor. If you can't automate away a job that most people don't like doing, then just pay people the new, higher market rate.
I'm in favor of providing food and health care to anyone that needs it. However, a GMI that rivals minimum wage would probably have much larger consequences, which I'm not convinced anyone could predict.
Awesome link, thanks! I'm not sure about a GMI in the form of money per se, but if there's a way to make it represent (as he suggests) "real wealth", instead of a potentially slow-to-adjust numerical value, then it could work.
Eliezer and Robin argue passionately for cryonics. Whatever you might think of the chances of some future civilization having the technical ability, the wealth, and the desire to revive each of us -- and how that compares to the current cost of signing up -- one thing that needs to be considered is whether your head will actually make it to that future time.
Ted Williams seems to be having a tough time of it.
Alcor has posted a response to Larry Johnson's allegations.
I'm not sure what to think of Larry Johnson. Some of his claims are normal parts of Alcor's cryopreservation process, but dressed up to sound bad to the layperson. Other parts just seem so outrageous. A monkey wrench? An empty tuna can? Really? He claims that conditions were terrible, which is also unlikely. Alcor is a business and gets inspected by OSHA, the fire department, etc. They even offer free tours to the public. If conditions were so terrible, you'd think they'd have some environmental or safety violations. At the very least, some people who toured the facility would speak up.
The article also claims that Ted Williams was cryopreserved against his will, which is almost certainly not true. Alcor requires that you sign and notarize a last will and testament with two witnesses who are not relatives.
My thought of the day: An 'Infinite Improbability Drive' is slightly less implausible than a faster than light engine.
A link you might find interesting:
The Neural Correlates of Religious and Nonreligious Belief
Summary:
Religious thinking is more associated with brain regions that govern emotion, self-representation, and cognitive conflict, while thinking about ordinary facts is more reliant upon memory retrieval networks, scientists at UCLA and other universities have found. They used fMRI to measure signal changes in the brains of committed Christians and nonbelievers as they evaluated the truth and falsity of religious and nonreligious propositions. For both groups, belief (judgments of "true" vs "false") was associated with greater signal in the ventromedial prefrontal cortex, an area important for self-representation, emotional associations, reward, and goal-driven behavior. "While religious and nonreligious thinking differentially engage broad regions of the frontal, parietal, and medial temporal lobes, the difference between belief and disbelief appears to be content-independent," the study concluded. "Our study compares religious thinking with ordinary cognition and, as such, constitutes a step toward developing a neuropsychology of religion. However, these findings may also further our understanding of how the brain accepts statements of all kinds to be valid descriptions of the world."
Is there a complete guide anywhere to comment/post formatting? If so, it should probably be linked on the "About" page or something. I can't figure out how to do HTML entities; is that possible?
There is a comment formatting page on the Wiki. The syntax description says that you can just write HTML entities in the comments directly, but apparently it doesn't work here: ©
On the other hand, a simple copy-paste from an entity list page works: ©
XKCD visits human enhancement.
What's the best way to follow the new comments on a thread you've already read through? How do you keep up with which ones are new? It'd be nice if there were a non-threaded view. RSS feed?
Kaj's suggestion (http://lesswrong.com/comments/) is your best bet, but there is another option that might merit consideration: if you happen to know that a new relevant comment is likely to be authored by JohnJones or Sally_Smith, keep an eye on the following 2 non-threaded views:
http://lesswrong.com/user/JohnJones
http://lesswrong.com/user/Sally_Smith
Pages like those 2 have RSS feeds associated with them, BTW. But, yeah, it would be nice if there were more options.
Scanning through the new comments page is probably your best bet, though I wish there was a better solution.
Any ideas for a better solution? The devs are busy, but they're listening (and if the devs don't have time, the code is open).
My thought would be a "recent posts in your subscribed threads" kind of a feature, as they have on forums. In other words, an ability to add specific posts to a personal watchlist, and then have a page like the "new comments page" that only shows comments to posts on your watchlist.
Something like the playback-feature on Google Wave would rock. Some neat way to specify that you only want to see(or highlight) comments that were made after some specific time would also be nice.
Yes, that would rock. Unfortunately it's not a small feature.
My idea would be to just have a link to a article-specific "recent comments" page on each article.
(But if they're going to work on anything, they might want to work first on the bug I posted about elsewhere in this thread.)
Hmm… raised as Issue 194
I'll make my more wrong confession here in this thread: I'm a multiple worlds skeptic. Or at least I'm deeply skeptical of Egan's law. I won't pretend I'm arguing from any sort of deep QM understanding; I just mean in my sci-fi, what-if thinking about what the implications would be. I truly believe there would be more wacky outcomes in an MWI setting than we see. And I don't mean violations of physical laws; I'm hung up on having to give up the idea of cause and effect in psychology. In MWI, I don't see how it's possible to think there would be cause and effect behind conversations, personal identity, etc. Literally every word, every vocalization, is determined solely by quantum interactions, unless I'm deeply misunderstanding something. This goes against the determinism I hold to be true. I don't see how my next words won't be French, Arabic, Klingon, etc., and I don't see how what I consider normal isn't vanishingly unlikely to continue for an indefinite period of time.
I'll admit that work's been busy, so I haven't worked through EY's latest posts, so if there's been some resolution of this in the anthropic threads, I'd appreciate a quick summary. Sorry if this is more of a question than an answer; it's for that reason that I second a forum. I like blogs for articles, but they don't work for discussion as well as forums do, and forums better allow people to post questions.
It's true that MWI doesn't absolutely rule out the possibility that your next words might be in another language, but neither does any other QM interpretation. They all predict just the amount of wackiness that we see.
The other interpretations allow for the possibility, but MWI seems to argue for it to definitely occur, in some universe branch.
I think it's the "wacky but not TOO wacky" world that I find pretty fascinating in QM. I just haven't seen a description that just seemed to nail it for me. Obviously, YMMV.
Your claim is that MWI predicts things we don't see. If this is true then it is a really big deal- you'd be able to show that MWI was not just falsifiable (which is still a contentious issue) but already falsified. Suffice to say someone would have noticed this.
Anyway it is true that MWI does entail that there is some non-zero possibility that your next words will be in Klingon. But the possibility is so small that the universe is likely to end many, many times over before it ever happens. Unfortunately, this does suggest you have to give up your notion of robust, metaphysical causation since (1) shit ain't determined and (2) there are no objects (the usual units of causation) just overlapping fields. There are some efforts to maintain serious causal stories despite this but since no one really knew what was meant by causation before quantum mechanics this doesn't seem like that big a loss.
In any case, these sacrifices are purely philosophical, MWI changes nothing about what experiences you should expect (except possibly in regards to anthropic issues) and makes no new predictions about run of the mill everyday physics.
Hi Jack,
*Anyway it is true that MWI does entail that there is some non-zero possibility that your next words will be in Klingon. But the possibility is so small that the universe is likely to end many, many times over before it ever happens.*
This all could just be an issue of me being massively off on the probabilities, but aren't there a greater number of possibilities that my next words will not be in English than in English, and therefore a greater probability that what I would say would not be in English? And in this particular example, there are a number of universes that have branched off in which I would have spoken Klingon. I'm not understanding the limitation that would demonstrate that there are more universes where I spoke English instead (i.e., why would there be a bell-curve distribution with English sentences as the most frequent outcome?)
And I do want to more clearly re-iterate that I'm not talking about Everett's formal proof, but the purely philosophical ramifications you mention (and also, I haven't got some earth shattering thesis waiting in the wings, I'm just describing my confusion). QM is fact, and MWI is a way of interpreting it. For whatever reason, I'm interested in that interpretation. So chalk it up to me thinking through a dumb question. I don't believe I've falsified a mainstream QM theory. I do feel I've demonstrated to my satisfaction that I don't fully understand the metaphysical implications of MWI. It sounds easier to just chalk it up to "it's the equations", but I do find the potential implications interesting.
Not all "possibilities", as you describe them, are equally likely. If I enter 2+2 into my calculator, and MWI is correct, there would be some worlds in which some transistors don't behave normally (because of thermal noise, cosmic rays, or whatever), bits flip themselves, and the calculator ends up displaying some number that isn't "4". The calculator can display lots of different numbers, and 4 is only one of them, but in order for any other number to appear, something weird had to have happened - and by weird, I mean "eggs unscrambling themselves" kind of weird. (Transistors are much smaller than chicken eggs, so flipped bits in a calculator are more like a microscopic egg unscrambling itself, but you get the idea.)
MWI basically says that, yes, someone will win the quantum lottery, but it won't be you.
This and the other probability discussions above have greatly helped me to understand what MWI was getting at. I wasn't fully grasping what the limitations were, that MWI wasn't describing limitless possibilities happening infinitely.
No. So QM says that at time t every subatomic particle in your brain has a superposition: a field which gives the probability that the particle will be found at a given location. There is no end to the field, but only a very small region will have a non-negligible probability magnitude. Now scale up to the atomic level. Atoms similarly have superpositions, dictated by the superpositions of the subatomic particles which make up the atom. You can keep scaling up. The larger the scale, the lower the chance of anything crazy happening, because for an entire atom to be discovered on the other side of the room, every particle it is made of would have to have tunneled ten feet at the same time to the same place. This is true for the molecules that make up the entire brain mass. Whatever molecular and structural conditions make you an English speaker at time t are very likely to remain in place at time t2, since their superposition is just a composite of the superpositions of their parts (well, not really; my understanding is that it is way more complicated than that; suffice to say that the chance of many particles being found away from the peaks of their wavefunctions is much lower than the chance of finding a single electron outside the peak of its wavefunction).
For our purposes, many worlds just says all of the possible outcomes happen. The chance you should assign to experiencing any one of these possibilities is just the chance you should assign to finding yourself in the world in which that possibility happens. Since in nearly all Everett branches you will still be speaking English (nearly all of the particles will have remained in approximately the same place), you should predict that you will never experience a world where people speak Spanish for no reason!
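The scaling argument above is just multiplication of independent probabilities: even if each particle had a wildly generous chance of being found somewhere strange, the chance that many particles do so together falls off exponentially. A toy sketch (the per-particle numbers are invented for illustration; real tunneling probabilities are vastly smaller):

```python
def all_stray(p_single, n):
    """Chance that n independent particles each end up far from the peak of
    their wavefunction at the same moment, given per-particle chance p_single."""
    return p_single ** n

# Even at a cartoonishly generous per-particle chance of 1%, coordinated
# strangeness vanishes fast as the particle count grows:
print(all_stray(0.01, 10))   # ~1e-20 for ten particles
print(all_stray(0.01, 100))  # ~1e-200 for a hundred
```

A macroscopic object has on the order of 10^23 particles, so the realistic exponent is beyond anything a float can even represent.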
Heh. Right now, I'm pretty sure the QM does preclude robust, folk understandings of causation. But tell me, what is it that causation gives you that you want so badly?
Thanks again, this is the type of explanation that helps me to much better understand the possibilities MWI was addressing. And causation just gives me the reasonable expectation that physics models and biology theories do adequately model our world, without worrying about spooky action throwing too big of a monkey wrench into things.
Sure. And don't worry about causation; you can make inferences and predictions just fine without it.
I don't quite understand what you're confused about. Why would MWI make you start talking in anything but English?
If you flip a hypothetical fair random coin 1000 times, you'll almost certainly get something around 500 heads and 500 tails. Getting anything like 995 heads would be rare.
The coin can be entirely nondeterministic in how it flips, and still be reliable in this regard.
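The coin intuition can be checked with a few lines of arithmetic; here's a quick sketch using the binomial distribution (my own illustration, not part of the original comment):

```python
from math import comb

def prob_exactly(heads, flips=1000, p=0.5):
    """Probability of getting exactly `heads` heads in `flips` tosses of a
    coin that lands heads with probability `p`."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

# Outcomes near 500 heads dominate; 995 heads is astronomically rare.
print(prob_exactly(500))  # about 0.025
print(prob_exactly(995))  # on the order of 10^-288
```

Summing prob_exactly over a band like 450 to 550 heads gives a probability very close to 1, which is the sense in which a nondeterministic coin is still reliable.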
Well, there's no physical limitation against me speaking something other than my birth language. Using the coin analogy, my tongue position, lip position, and airflow out of my throat are the variables. Those variables, across all distributions, can produce any human word. Across infinity, there will be worlds where I'm speaking my birth language for my next statement, and others where I'm not. MWI seems to me to eliminate the prior state's influence on the next state of my language machine. If all probabilities do occur in MWI, I see the probability of me continuing to speak English as the 995-heads case (which is still possible, I just see it as unlikely). I don't think MWI "makes" me do anything; I just think the implication is that all possible worlds become reality. It really comes down to the prior state's apparent lack of influence; that's what confuses me. Once that's gone, I just see causality in human actions going out the window.
You're confused about probability, causality in QM, and anthropics. (Note in particular that your objection can't be particular to MWI, since even in a collapse theory, the wacky things could happen).
The current state of your brain corresponds to a particular (small neighborhood of) configurations, and most of the wavefunction-mass that is in this neighborhood flows to a relatively small subset of configurations (i.e. ones where your next sentence is in English, or gibberish, rather than in perfect Klingon); this, precisely, is what causality actually means.
Yes, there is some probability that quantum fluctuations will cause your throat cells to enunciate a Klingon speech, without being prompted by a patterned command from your brain. But that probability is on the order of 10^-100 at most.
And there is some probability, given the structure of your brain, that your nerves would send precisely the commands to make that happen; but given that you don't actually know the Klingon speech, that probability too is on the order of 10^-100.
The upshot of MWI in this regard is that very few of your future selves will see wacky incredibly-improbably-ordered events happen, and so you recover your intuition that you will not, in fact, see wacky things. It's just that an infinitesimal fraction of your future selves will be surprised.
Thanks, this really helps to clarify the picture for me.
This is a confusion about free will, not many-worlds.
I would describe my view on the free will question as basically being Dennett's in Elbow Room and Freedom Evolves. But that seems to be confounded by what I expect to be the utter randomness that would emerge from the MWI. I don't worry about having free will; I am concerned about having some sort of causal chain in my actions. I don't disavow that I'm confused, I just don't think I'm confused over free will.
There is a deep similarity, which I expected to carry over: in both cases, you have some subjective feeling, and in both cases the nature of the physical substrate in which you exist doesn't matter the slightest for the explanation of why you have that feeling. The feeling has a cognitive explanation that screens off the physical explanation. Thus, you can be confused about the physical explanation, but not confused about your question, since you have a cognitive explanation.
I'm not sure I quite follow. So I have the feeling of confusion, which I attribute to not understanding the ramifications of the physical explanation of quantum effects that the MWI provides. What's the cognitive explanation for this?
I would like to throw out some suggested reading: John Barnes's Thousand Cultures and Meme Wars series. The former deals with the social consequences of smarter-than-human AI, uploading, and what sorts of pills we ought to want to take. The latter deals with nonhuman, non-friendly FOOMs. Both are very good, smart science fiction quite apart from having themes often discussed here.
I have read "A Million Open Doors" and "A World Made of Glass" and don't remember ANY AI at all in them. And only limited uploading. And are there any Meme Wars novels other than "Kaleidoscope Century" and "Candle"? They were decent but not great stories, but the "memetic virus" background required a serious "suspension of disbelief". Barnes's least unrealistic uploading and FOOM novel was the space-farers in "Mother of Storms".
Thousand Cultures: The technology develops through the series. In "The Merchants of Souls" the uploading is the main McGuffin, and in "The Armies of Memory" the AIs are, with the uploads as a good second.
Meme Wars: You are missing "Orbital Resonance" and "The Sky So Big and Black", although the memes as such are background, not the main story element, in both.
Open threads should not be promoted, because.
Promoted articles as they are also serve a purpose: they screen low-value articles from a "feed for a busy reader". What you describe is also a good suggestion, but instead of redefining "promoted", a better way to implement it is to add a subcategory of promoted self-sufficient entry-level articles, and place them on the front page.
So, I'm reading A Fire Upon The Deep. It features books that instruct you how to speedrun your technological progress all the way from sticks and stones to interstellar space flight. Does anything like that exist in reality? If not, it's high time we start a project to make one.
Edit (10 October 2009): This is encouraging.
So, does anyone want to write out a very preliminary table of contents? Other ideas about how such a book would be organized?
You have to handle two issues first:
You also need to choose a catchy title. I recommend From Sticks and Stones to Atom Bombs: How to Build Your Own World-Destroying Civilization In Only 30 Days!!!
Right, I'm thinking the first chapter will have to teach numbers, logical connectors, and a basic English vocabulary. Additional vocabulary can be added throughout the book. We'll just have to hope that the reader can understand more or less universal symbols: arrows to point directions, circles to indicate groupings, proximity to indicate labels, etc. Also, a section on anatomy will be less helpful the more they've mutated.
I think arithmetic can probably be taught with reference to dots. So:
"* * * *" = * * * *
"* * * *" = 4
"* * + * *" = 4, etc.
Geometry shouldn't be a problem either. The whole thing would have to be heavily illustrated anyway.
Maybe the first couple pages should just depict really happy people using technology paired with stone agers looking miserable.
This, along with building an AI that can self-improve by reading instructional material intended for humans, was a cherished childhood fantasy of mine.
Now, I implement machine learning algorithms to be used in dumb statistical NLP systems.
There's a time-traveler's cheat sheet that covers a lot of the basics. (Credit goes to Ryan North.)
TAKE THE CREDIT
I'm going to go back in time and take credit for that cheat sheet.
http://www.amazon.com/Caveman-Chemist-Circumstances-Achievements-Publication/dp/0841217874
http://www.amazon.com/Caveman-Chemistry-Projects-Creation-Production/dp/1581125666/ref=pd_bxgy_b_img_b
This reminds me of an episode of Mythbusters where the crew set up a bunch of MacGyver puzzles for the two hosts: pick a lock with a lightbulb filament, develop film with common household chemicals, and signal a helicopter with a tent and camping supplies.
In all seriousness though, Philosophical Materialism and the Scientific Method are probably the most important things; three years ago I bought my first car for a pack of cigarettes and a $20 Haynes manual. At the time I didn't even know what an alternator was; three months later I'd diagnosed a major electrical problem and performed an engine swap. The manual helped (obviously), but for the most part it was the knowledge that any mechanical device can be reduced to simple causal patterns which allowed me to do this (incidentally, this is a hobby that I strongly recommend to other LW members: you get to put the scientific method into practice in a hands-on manner, and at the end of it you get a car which is slightly less crappy).
I tend to think that the mere knowledge that flying machines are possible will allow the survivors of WWIII to redevelop the prewar tech within a century.
Could you explain Philosophical Materialism and the Scientific Method without first having the reader do science? I agree that these might be the most important things, but it isn't clear to me how they can be explained to a civilization that lacks a general scientific vocabulary or the context to interpret things like falsifiability, hypotheses, ontologically fundamental mental entities, etc. Does the most important lesson have to be toward the end of the book?
I tried this with one of my first cars back in the early 90s. It turns out that there are a very large number of things that can go wrong with essentially every step of repairing a car, and I didn't have the money or time to continue replacing parts I'd destroyed or troubleshooting problems I'd caused while trying to fix another problem.
I like programming because it has the same features of tracking down problems, but almost entirely without the autocommit feature of physical reality, as long as you choose to back up and test.
Also, even in the 90s, a computer was far cheaper than a good set of tools.
Does the same principle apply to motorcycle maintenance? :-)
A book I was reading that suggested doing your own minor auto repairs, warned strongly against doing motorcycle repairs for anything after the late 1970s. He claimed that newer cycles were so tightly integrated and the tools for working on them so specialized, that you were too likely to get something taken apart that you literally could not reassemble.
I'd say that's true for modern supersports and superbikes, but a beginner bike like a Kawasaki Ninja EX-250 has very little in the way of electrics or other tightly-integrated mechanisms. Just as an anecdote: I do regular maintenance on my 2006 SV-650/S, but anything more complicated than oil changes on my 1972 Honda CB350 is done by a mechanic. While newer bikes have complicated parts like ECUs and fuel injection, those are usually the most reliable parts. Repairing older motorcycles typically involves scrounging e-bay for parts that are no longer manufactured.
The thing I like most about motorcycles is that they are simple, so it's pretty easy to diagnose any problems. It only takes a minute to tell if you're running lean or rich. Simply starting, hearing, and smelling an engine can tell you whether you just need new piston rings or if you've damaged the crankshaft journal bearings.
If you really want the most mechanically simple vehicle, I'd suggest an old scooter such as a Honda Cub. The set of failure modes for an air-cooled single-cylinder engine is quite small.
What for? There aren't any stick-and-stones cultures around.
Do you assign significant probability to the need for such a book in humanity's future? I don't. It would require that:
But also that:
Actually, all you would need for serious problems is for the relatively few people who know the essential details of a critical piece of support technology not to survive, or at least not to survive in your group or anywhere you have access to. Since, if that happens, you can't know ahead of time what bits of information you might lose, having references to everything possible only makes good sense, especially given how relatively inexpensive references are now. Cheap insurance against a very unlikely result (and of course, they can also be helpful day-to-day).
There's a mixup of two different scenarios here.
What you seem to be talking about is a group of people a few years to a few decades post-collapse, who want to operate or rebuild preexisting tech and need a reference work. If they had a copy of Wikipedia plus a good technical and reference library, it would probably answer most of their needs. A special book isn't essential.
What I was talking about is a group of people completely lacking pre-collapse knowledge and experience. You can't give them instructions for building a radio because they tend to ask questions like "what's a screwdriver?" and "how can I avoid being burnt as a witch?" That's what a real stones-and-sticks to high-tech guide book needs to address.
You might think of "my book" as a subset of yours. My book would be more likely to be useful (though hopefully not) and could be expanded to add the material necessary for yours. And your book would be a library in itself, there is no possible way that such a "book" would not span many volumes.
A single long "book" would have high quality cross links, well ordered reading sequences, a uniform style, no internal contradictions, etc. In that sense it's a book as opposed to a library collection.
Just saying "black swan" isn't enough to give higher probability. If you think I can't assign any meaningful probability at all to this scenario, why?
You have to assign probabilities anyway. See the amended article:
That's meaningless. You can't assign a value in dollars to the continued existence of our civilization. Dollars are only useful for pricing things inside that civilization. (Some people argue for using utilons to price the civilization's existence.)
The amount you're willing to pay is a fact about you, not about the book's usefulness. You're saying you estimate its probability of usefulness at 10^-14. But why?
Clearly the market for civilization creation books is efficient.
Nice point. Maybe we should instead talk about scenarios where humanity (including us) no longer suffers aging but a collapse still occurs.
Incidentally, I wonder what the market price for writing a civilization-destroying book might be?
I don't believe anyone can assign meaningful very small or very large probabilities in most situations. It is one of my long-running disagreements with people here and on OB.
There are indeed many known human biases of this kind, plus general inability to predict small differences in probability.
But we can't treat every low probability scenario as being e.g. of p=0.1 or some other constant! What do you suggest then?
I don't know of a unified way of handling extremely small risks, but two things can help. First, as Marc Stiegler suggests in "David's Sling", simply recognize explicitly that they are possible; that way, if one does occur, you can get on with dealing with the problem without also having to fight disbelief that it could have happened at all. Second, different people have different perspectives and interests and will treat different low-probability events differently; this dispersion of views and preparation helps ensure that someone is at least somewhat prepared. As I said, neither of these is really enough, but I simply can't see any better options.
Scenarios like the following don't seem impossible to me.
90% of the human population dies from a plague or meteor, along with the knowledge and the sufficient numbers of people needed to maintain things like power plants, steel mills, and the other trappings of modern life. Those left with the knowledge have to spend all their time subsistence farming just to survive.
A few generations later, when the population has increased a bit and subsistence farming yields have improved with experience, people want to recreate technology with just the knowledge passed down by word of mouth.
There's a huge difference between having the raw knowledge available and having simple step-by-step instructions.
A book created for this express purpose would be an order of magnitude more useful than any number of encyclopedias or even entire libraries. A big challenge would be even knowing what to research--if you don't have the next technology, you may not even know what it will be.
The biggest obstacle is really distribution. What you'd need is a government, church, or NGO to put a copy in every branch or something.
Maybe you could donate a copy to every prison library. Prisons would actually be a really defensible location to stay post-societal collapse . . .
We can imagine a handbook that is written to be useful for a broad spectrum of possible disastrous situations.
The handbook could be written for post-disaster survivors finding themselves in many possible situations. For example, your first bullet "No technological human societies survive" could be expanded to "(No|Few|Distant|Hostile) technological human societies survive". Indeed, uncertainty about which of the aforementioned possibilities actually hold might be quite probable, given both a civilization-destroying disaster and some survivors.
To some extent, the Long Now's Rosetta project (to build sturdy discs inscribed with examples of many languages) is an example of this sort of handbook.
http://rosettaproject.org/
I agree a knowledge repository would be very useful for survivors right after the disaster. But I don't think any scenario is probable that involves a society with a reasonably stable level of technology and food production existing and profiting from such a book.
BTW, the Rosetta project seems to be purely about describing languages so future people can understand them.
If a few distant technological societies survive, even just one with some reasonable shipping & industry, then I expect they will quickly establish contact with most of the world, if only to exploit natural resources & farming. Most or all tech. economies today rely on many imports of minerals, food, etc. And knowledge and technology would be dispersed quicker with the assistance of this society than by means of such a book.
If a 'hostile' society survives - well, hostile towards whom? Towards all other, non-high-tech survivors? I don't see this as the default attitude of a surviving society that's the most powerful country left on Earth, so without knowing more I hesitate to try to empower whoever they're hostile towards. What did you have in mind here?
Your first point is that the handbook is not likely to be useful for the purpose of helping reconstruction after a disaster, because the chance of a disaster being total enough to destroy technology, but not total enough to destroy humanity, is small. I agree completely - you have a very strong argument there.
However, you go on to argue that IF a technology-destroying, humanity-sparing disaster occurred, THEN technological societies would quickly establish contact, disperse knowledge, et cetera. In this after-the-disaster reasoning, you're using our present notions of what is likely and unlikely to happen.
Reasoning like this beyond the very very unlikely occurrence seems fraught with danger. In order for such an unlikely occurrence to occur, we must have something significantly wrong with our current understanding of the world. If something like that happened, we would revise our understanding, not continue to use it. Anyone writing the handbook would have to plan for a wild array of possibilities.
Instead of focusing on the fact that the handbook is not likely to be used for its intended purpose, consider:
If we assume that there is "something significantly wrong with our current understanding of the world" but don't know anything more specific, we can't come to any useful conclusions. There's a huge number of things we could do that we think aren't likely to be useful but where we might be wrong.
So is writing this book something we should do (as the original comment seemed to suggest)? No. But I agree it's something we could do, is very unlikely to be harmful, and is neat and fun into the bargain.
With that said, I'm going back to working on my cool, neat, fun, non-humanity-saving project :-)
http://www.kk.org/thetechnium/archives/2006/02/the_forever_boo.php ?
A lot of stick and stones civilizations that can read, are there?
Agree that it is a cool idea though, does Vinge give more details?
It strikes me that the most crucial aspects of such a book would probably be mechanical engineering (wheels, mills, ship construction, levers and pulleys) and chemical identification (where to find and how to identify lodestones, peat, saltpeter, tungsten) -- things no one here is going to have much experience with.
What I'd like to know is what the ideal order of scientific discoveries would be. Like what would have been possible earlier in retrospect, and which later inventions could have been invented earlier and sped up subsequent innovation the most. Could you teach a sticks-and-stones civilization calculus? What is the earliest you could build a computer? Many countries skipped building phone infrastructure and went straight to cellular. Which technologies were necessary intermediate steps, and which could be skipped?
Any hypotheses for these questions?
In the book it's chemicals (gunpowder) and radios. The application of radios by Vinge's version of non-anthropomorphic intelligences is especially interesting.
What about a "Mote In God's Eye" -style technology bunker? Would having a set of raw materials, instructions, and tomes of information be the ideal setup? Perhaps something along the lines of the Svalbard Seed Vault. What are the most useful artifacts that can survive A) the catastrophe and B) the length of time it takes for the artifacts to be recovered? Such a timeframe could be short or many, many generations long (even geologic time?). Do we want this to potentially survive until the next intelligent being evolves, in the case of total destruction of mankind? What sealing mechanism would still be noticeable and breach-able by a low-tech civilization?
Or do we want to assume there is NO remaining technology and we're attempting to bootstrap from pure knowledge? Either way, I think it would be an interesting problem to solve.
Basic electrics are possible as soon as you have decent metalworking. Dynamos are just a bunch of spools of copper wires and magnets. Add some graphite, and you have telephones. ~~Greeks could have made them.~~
A printing press should be easier to make...
Wire making is easy if you have copper. The real problem is insulating the wire, especially with something flexible enough for winding coils. This is part of the infrastructure problem: very few people know enough to even start working on a serious rebuilding effort, for example after a dinosaur-killer impact. I know more than anyone else I have ever met, especially in the areas of food (agriculture and cooking) and shelter (design, concrete, masonry, carpentry, plumbing, wiring, etc.), and even I barely know enough to get started. For example, I don't know of any way to make insulation for wires without an already existing chemical industry, except natural rubber, which would most likely not be available.
I've seen cloth-wrapped appliance cords - never tried it, but it might be feasible.
If you look closer, you'll probably find only the outer protective layers are cloth; I've seen that on a lot of older wiring, but the ones I have seen all had a thin inner layer of rubber right on the copper. Tarred cloth probably would work, as long as the voltages were low enough and there were multiple layers; paper might be even better though. (Most of the old wiring I have seen was a thin layer of rubber next to the copper, then paper wrapping protecting the rubber and separating the individual wires, and the whole bundle protected with fabric.)
Wind each layer sparsely so that wires don't touch and pack insulator (dried leaves) between the layers. Makes for a woefully inefficient spool, but still.
Gotta try this out with scrap metal.
That would probably work; the only problem is that you would have to know in advance what you were doing. It isn't something an experimenter trying to figure things out from scratch would be likely to try.
Well, did the Greeks have the ability to make decent enough wire in sufficient quantities?
I think they could. Remember the Antikythera mechanism's high quality of fabrication. And fine metal wire was useful for jewelry and art:
I don't know, but even if they could do it, they had no reason to. So we can't really tell.
The real question is - if they really really wanted to and had a book of helpful tips, could they have made decent enough wire? (And could they get copper in sufficient quantities? By Roman times they certainly could.)
Could they make it thin enough (even with insulation) to be able to fit large amounts of windings?
ie, assuming they had reason to try, could they do it based on what we know of their capabilities at the time?
Incidentally, a radio would be much cheaper to make and almost certainly within their capabilities.
Not yet.
Is the likelihood that future sticks and stones civilizations will know how to read such that the first chapter doesn't need to be teaching them how to read the rest of the book? It seems to me that the probability a collapsed civilization is mostly illiterate is high enough to justify some kind of lexical key.
I plan to develop this into a top-level post; it expands on my ideas in this comment, this comment, and the end of this comment. I'm interested in what LWers have to say about it.
Basically, I think the concept of intelligence is somewhere between a category error and a fallacy of compression. For example, Marcus Hutter's AIXI purports to identify the inferences a maximally intelligent being would make, yet it (and efficient approximations of it) has no practical application. The reason, I think, is that it works by finding the shortest hypothesis that fits any data given to it. This means it makes the best inference, on average, over all conceivable worlds it could be placed in. But the No Free Lunch theorems suggest that this makes it suboptimal compared to any algorithm tailored to a specific world. At the very least, having to be optimal for all of the random and anti-inductive worlds should imply poor performance in this world.
The point is that I think "intelligence" can refer to two useful but very distinct attributes: 1) the ability to find the shortest hypothesis fitting the available data, and 2) having beliefs (a prior probability distribution) about one's world that are closest to (have the smallest KL divergence from) that world. (These attributes roughly correspond to what we intuit as "book smarts" and "street smarts" respectively.) A being can "win" if it does well on 2) even if it's not good at 1), since a good prior already points you to the right hypothesis without your having to search for it.
Making something intelligent means optimizing the combination of each that it has, given your resources. What's more, no one algorithm can be generally optimal for finding the current world's probability distribution, because that would also violate the NFL theorems.
Organisms on earth have high intelligence in the second sense. Over their evolutionary history they had to make use of whatever regularity they could find in their environment, and the ability to exploit this regularity became "built in". So the history of evolution shows the result of one approach to finding the environment's distribution (ETC), and making an intelligent being means improving upon this method, and programming it to "springboard" from that prior with intelligence in the first sense.
Thoughts?