Open Thread: February 2010, part 2
The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Comments (857)
Geek rapture naysaying:
"Jaron Lanier: Alan Turing and the Tech World's New Religion"
Can't watch video from here, and in any case given the much greater investment of time required I'd want to know more about it to start watching. Anyone who's seen it care to say if there are any new or good arguments in there?
In the short version, mainly saying the singularity is a nutty concept and making strange comments about Turing. It does not encourage me to watch the longer version.
I've found the audio for the longer version. Which I may listen to at some point.
It's the usual 'Rapture of the Nerds' spiel.
Thanks to you and whpearson for taking the time to find out so the rest of us don't have to. Voted timtyler down for wasting your and our time.
Edit: removed downvote, see below.
Er, this is pretty relevant and on-topic material, IMHO!
Jaron Lanier is a fruitcake - but I figure most participants here already knew that.
You may not personally be interested in what famous geek critics have to say about "the Tech World's New Religion" - but it seems bad to assume that everyone here is like that.
Hmm, I didn't see it that way. Removed downvote. But videos are a pain; you could do us a favour next time by saying a few more words about whether you're recommending it or just posting FYI. Otherwise there's a Gricean implication that you judge it worth our time, I think.
Yes, I don't like "teaser" links much either. I did give the author, title and a three word synopsis - but more would probably have helped. On the other hand, I didn't want to prejudice watchers too much by giving my own opinion up front.
I get that. Can we encourage a norm of writing FYI when we want to avoid the implication that it's a recommendation?
There should be a policy, or strong norm, of "No summary, no link" when starting a thread with a suggested link. That summary should tell the key insights gained, and what about it you found unique.
I hate having to read a long article -- or worse, listen to a long recording -- and find out it's not much different from what I've heard a thousand times before. (That happens more than I would expect here.) Of course, you shouldn't withhold a link just because Silas (or anyone else) already read something similar ... but it tremendously helps to know in advance that it is something similar.
Listening to the longer version isn't so bad. The snippet was definitely the most objectionable.
It appears that Lanier thinks AI is suffering from the puppet problem brought on by taking the Turing test too seriously. The puppet problem is that computers can be used to implement puppets: things that fake being intelligent. Imagine Omega makes a program for the Turing Test that looks intelligent by predicting you and having the program output intelligent-sounding responses at different times, so that you (and only you!) think it is intelligent but you are really talking to the advanced equivalent of an answerphone*. So he thinks that AIs are going to be puppets. Which is a semi-reasonable opinion to come to if you just look at chatbots.
However Lanier doesn't, but should, argue that computers can only be puppets.
Edited: For clarity.
*I think Eliezer said something like if you see intelligent behaviour you should guess that there is an intelligence somewhere, it may just not be in the system that appears intelligent. I'm not organised enough to keep a quote file. Anyone?
"GAZP vs. GLUT":
Pointing people to Lanier as a naysayer isn't playing fair; it just makes the opposition look crazy.
Alas, Turing's Nazi fascism and "death denial" doesn't seem to appeal much to people around here. I figured that the residents would enjoy watching this sort of material.
I can't speak for anyone else, but it's Jaron Lanier who doesn't appeal much to me. I barely read to the end of the sentence after seeing his name, and I certainly wasn't going to click the link and subject myself to his inane punditry, so I have no opinion on the specific content.
Here's something interesting on gender relations in ancient Greece and Rome.
Why did ancient Greek writers think women were like children? Because they married children - the average woman had her first marriage between the ages of twelve and fifteen, and her husband would usually be in his thirties.
Interesting read, thanks.
The reason ancient Greek writers thought women were like children is the same reason men in all cultures think women are like children: There are significant incentives to do so. Men who treat women as children reap very large rewards compared to those men who treat women as equals.
EDIT: If someone thinks this is an invalid point, please explain in a reply. If the downvote(s) is just "I really dislike anyone believing what he's saying is true, even if a lot of evidence supports it" (regardless of whether or not evidence currently supports it) then please leave a comment stating that.
EDIT 2: Supporting evidence or retraction will be posted tonight.
EDIT 3: As I can find no peer-reviewed articles suggesting this phenomenon, I retract this statement.
Why do so many people here believe that? It strongly contradicts my experience.
I guess because our experience contradicts your experience.
What do you mean by "many people here believe that"? Believe what? And what tells you they do believe it?
Your experience is atypical because you're atypical.
Good answer. I keep privately asking the same question about these sorts of things, and getting the same answer from others.
Man, I've barely looked at that page since I wrote it four years ago. I live with Jess now, across the road from the other two. I can heartily recommend my brand of atypicality :-)
This conversation has been hacked.
The parent comment points to an article presenting a hypothesis. The reply flatly drops an assertion which will predictably derail conversation away from any discussion of the article.
If you're going to make a comment like that, and if you prefix it with something along the lines of "The hypothesis in the article seems superfluous to me; men in all cultures treat women like children because...", and you point to sources for this claim, then I would confidently predict no downvotes will result.
(ETA: well, in this case the downvote is mine, which makes prediction a little too easy - but the point stands.)
Thanks! I won't be able to do the work required on this right now, but will later tonight.
Wow, that's a great link.
Is that true? What are the incentives and rewards? Are there circumstances under which this is a bad idea - for example, do relative ages or relative social position matter? (For example, what if the woman in question is your mother, teacher/professor, employer, or some other authority figure with power over you?) Are there also incentives for men to treat other men as children, or for women to treat men or other women as children?
I wonder if adults treat children like children merely because of the benefits they reap by doing so.
Sometimes that's definitely the case. At other times it really does appear to be for real and concrete neutral reasons.
I'm pretty sure he's trying to say basically the same thing as this OB post (specifically the part from "Suppose that middle-class American men are told..." on).
I've been wondering what the existence of gene networks tells us about recursively self-improving systems (RSISs). Edit: Not that self-modifying gene networks are RSISs, but the question is "Why aren't they?" In the same way that failed attempts at flying machines tell us something, but not much, about what flying machines are not. End Edit
They are the equivalent of logic gates and have the potential for self-modification and reflection, what with DNA's ability to make enzymes that chop it up, and to do so selectively.
So you can possibly use them as evidence that low-complexity, low-memory systems are unlikely to RSI. How complex they get and how much memory they have, I am not sure.
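The logic-gate analogy can be made concrete with a toy boolean network: each gene is simply on or off, and its next state is a logic function of its regulators. The genes and wiring below are invented purely for illustration, not taken from any real organism.

```python
# A toy gene regulatory network modeled as a boolean network.
# Genes are on/off; each gene's next state is a logic function
# of its regulators (synchronous update). The wiring is made up.

def step(state):
    """Advance the network one time step."""
    return {
        # B and C together activate A; A represses B; A or B activates C
        "A": state["B"] and state["C"],
        "B": not state["A"],
        "C": state["A"] or state["B"],
    }

state = {"A": False, "B": True, "C": False}
for _ in range(5):
    state = step(state)
    print(state)
```

This particular three-gene network returns to its starting state after five steps (a limit cycle): the gates are fixed at "compile time", and there is no way to grow new ones at runtime, which is the low-memory point above.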
It seems like in gene networks, every logic gate has to evolve separately, and those restriction enzymes you mention barely do anything but destroy foreign DNA. That's less self-modification potential than the human brain.
The inability to create new logic gates is what I meant by the systems having low memory. In this case low memory to store programs.
Restriction enzymes also have a role in the insertion of plasmids into genes.
An interesting question is: if I told you about a computer model of evolution with things like plasmids and controlled mutation, would you expect it to be potentially dangerous?
I'm asking this to try to improve our thinking about what is and isn't dangerous, and to improve upon the kneejerk "everything we don't understand is dangerous" opinion that you have seen.
Well, I'm not familiar enough with controlled mutation to be able to say anything useful about it.
I wonder if the distinction between self-modification and recursive self-improvement is one of those things that requires a magic gear to get, and otherwise can't be explained by any amount of effort.
Such things probably happen because effort spent on explaining quickly hits diminishing returns if the other person spends no effort on understanding.
I understand there is a distinction. Would you agree that RSI systems are conceptually a subset of self-modifying (SM) systems? One where we don't understand what exact properties make an SM system one that will RSI. Could you theoretically say why EURISKO didn't RSI?
I was interested in how big a subset. The bigger it is the more dangerous, the more easily we will find it.
It seems to me that for SM to become RSI, the SM has to be able to improve all the parts of the system that are used for SM, without leaving any "weak links" to slow things down. Then the question is (slightly) narrowed to what exactly is required to have SM that can improve all the needed parts.
Possibly, but if you could link to your best efforts to explain it I'd be interested. I tried Google...
EDIT: D'oh! Thanks Cyan!
Shoulda tried the Google custom search bar: Recursive self-improvement.
You're just lucky there's no such thing as LMG(CSB)TFY. ;-)
Does my edit make more sense now?
Sure, but the answer is very simple. Gene regulatory networks are not RSI because they are not optimization processes.
Intrinsically they aren't optimization processes, but they seem computationally expressive enough for an optimization process to be implemented on them (in the same way that x86-arch computers aren't optimization processes). And if you are a bacterium, it seems like something that would be evolutionarily beneficial, so I wouldn't be surprised to find some optimization going on at the gene network level. Whether it's enough to be considered a full optimization process I don't know, but if not, why not?
Someone once told me that the reason they don't read Less Wrong is that the articles and the comments don't match. The articles have one tone, and then the comments on that article have a completely different tone; it's like the article comes from one site and the comments come from another.
I find that to be a really weird reason not to read Less Wrong, and I have no idea what that person is talking about. Do you?
That reason sounds incomplete, but I think I know what the person is talking about.
The best example I can think of is Normal Cryonics. The post was partly a personal celebration of a positive experience and partly about the lousiness of parents that don't sign their kids up for cryonics. Yet, the comments mostly ignored this and it became a discussion about the facts of the post -- can you really get cryonics for $300 a year? Why should a person sign up or not sign up?
The post itself was voted up to 33, but only 3 to 5 of its 868 comments agreed in disparaging parents. There's definitely a disconnect.
Also, on mediocre posts and/or posts that people haven't related to, people will talk about the post for a few comments and then it will be an open discussion as though the post just provided a keyword. But I don't see much problem with this. The post provided a topic, that's all.
Every article on cryonics becomes a general cryonics discussion forum. My recent sequence of posts on the subject on my blog carry explicit injunctions to discuss what the post actually says, but it seems to make no difference; people share whatever anti-cryonics argument they can think of without doing any reading or thinking no matter how unrelated to the subject of the post.
Same with this article becoming a talking shop about AGW.
I should have followed my initial instinct when I saw that, of immediately posting a new top level article with body text that read exactly "Talk about AGW here".
I don't see a terrible problem with comments being "a discussion about the facts of the post"; that's the point of comments, isn't it?
Perhaps we just need an Open Threads category. We can have an open thread on cryonics, quantum mechanics and many worlds, Bayesian probability, etc.
Yes.
Back in Overcoming Bias days, I constantly had the impression that the posts were of much higher quality than the comments. The way it typically worked, or so it seemed to me, was that Hanson or Yudkowsky (or occasionally another author) would write a beautifully clear post making a really nice point, and then the comments would be full of snarky, clacky, confused objections that a minute of thought really ought to have dispelled. There were obviously some wonderful exceptions to this, of course, but, by and large, that's how I remember feeling.
Curiously, though, I don't have this feeling with Less Wrong to anything like the same extent. I don't know whether this is because of the karma system, or just the fact that this feels more like a community environment (as opposed to the "Robin and Eliezer Show", as someone once dubbed OB), or what, but I think it has to be counted as a success story.
Oh! Maybe they were looking at the posts that were transplanted from Overcoming Bias and thinking those were representative of Less Wrong as a whole.
I think that the situation with the imported OB posts & comments should somehow be made clear to new readers. Several things there (no embedded replies, little karma spent, plenty of inactive users, different discussion tone) could be a source of confusion.
I hate to sound complimentary, but... I get the impression that the comments on LW are substantially higher-quality than the comments on OB.
And that the comments on LW come from a smaller group of core readers as well, which is to some extent unfortunate.
I wonder if it's the karma system or the registration requirement that does it?
must... resist... upvoting
Threading helps a lot too.
OB has threading (although it doesn't seem as good/ as used as on LW).
That may be a recent innovation; it wasn't threaded in the days when Eliezer's articles appeared there.
I think it happened immediately after LW went live. Robin revised a bunch of things at that time.
Maybe the community has just gotten smarter.
I know I now disagree with some of the statements/challenges I've posted on OB.
It shouldn't be too shocking that high quality posts were actually educational.
Less Wrong, especially commenting on it, is ridiculously intimidating to outsiders. I've thought about this problem, and we need some sort of training grounds. Less Less Wrong or something. It's in my queue of top level posts to write.
So the answer to your question is the karma system.
Reminds me of a Jerry Seinfeld routine, where he talks about people who want and need to exercise at the gym, but are intimidated by the fit people who are already there, so they need a "gym before the gym" or a "pre-gym" or something like that.
(This is not too far from the reason for the success of the franchise Curves.)
What's so intimidating? You don't need much to post here, just a basic grounding in probability theory, decision theory, metaethics, philosophy of mind, philosophy of science, computer science, cognitive bias, evolutionary psychology, the theory of natural selection, artificial intelligence, existential risk, and quantum mechanics - oh, and of course to read a sequence of >600 3000+ word articles. So long as you can do that and you're happy with your every word being subject to the anonymous judgment of a fiercely intelligent community, you're good.
Not "and". "Or". If you don't already have it, then reading the sequences will give you a basic grounding in probability theory, decision theory, metaethics, philosophy of mind, philosophy of science, computer science, cognitive bias, evolutionary psychology, the theory of natural selection, artificial intelligence, existential risk, and quantum mechanics.
I actually think this is a little absurd. There is nowhere near enough on these topics in the sequences to give someone the background they need to participate comfortably here. Nearly everyone here has a lot of additional background knowledge. The sequences might be a decent enough guide for an autodidact to go off and learn more about a topic, but there is nowhere near enough for most people.
The sequences are really kind of confusing... I tried linking people to Eliezer's quantum physics sequence on Reddit and it got modded highly, but one guy posted saying that he got scared off as soon as he saw complex numbers. I think it'll help once a professional edits the sequences into Eliezer's rationality book.
http://www.reddit.com/r/philosophy/comments/b1v1f/thought_waveparticle_duality_is_the_result_of_a/c0kjuno
Well there were several subjects in that list I knew little about until I started reading the Sequences, so yes, on that point I confess I'm being hyperbolic for humorous effect...
Sounds like a pretty good filter for generating intelligent discussion to me. Why would we want to lower the bar?
If the top level post is going to be a while, I'd like to hear more about what you have in mind.
I can actually attest to this feeling. My first reaction to reading Less Wrong was honestly "these people are way above my level of intelligence such that there's no possible way I could catch up", and I was actually averse to the idea of this site. I'm past that mentality, but a Less Less Wrong actually sounds like a good idea, even if it might end up being more like how high school math and science classes should be than how Less Wrong is currently. It's not so much lowering the bar as nudging people upwards slowly.
Being directed towards the sequences obviously would help. I've been bouncing through them, but after Eliezer's comment I'm going to try starting from the beginning. But I can see where people [such as myself] may need the extra help to make it all fall together.
I think better orientation of newcomers would be enough.
Another major problem (I believe) is that LW presents as a blog, which is to say, a source of "news", which is at odds with a mission of building a knowledge base on rationality.
I comment less now because the combined effect of your & RH's posts made me more eager to listen and less eager to opine. The more I understand the less I think I have much to add.
Just a general comment about this site: it seems to be biased in favor of human values at the expense of values held by other sentient beings. It's all about "how can we make sure an FAI shares our [i.e. human] values?" How do you know human values are better? Or from the other direction: if you say, "because I'm human", then why don't you talk about doing things to favor e.g. "white people's values"?
I wish the site were more inclusive of other value systems ...
This site does tend to implicitly favour a subset of human values, specifically what might be described as 'enlightenment values'. I'm quite happy to come out and explicitly state that we should do things that favour my values, which are largely western/enlightenment values, over other conflicting human values.
And I think we should pursue values that aren't so apey.
Now what?
You're outnumbered.
So far...
I say again, if you're being serious, read Invisible Frameworks.
I have no idea if this is a serious question, but....
"Better"? See Invisible Frameworks.
We don't say that. See No License To Be Human.
Take a look at who's posting it. The writer may well consider it a serious question, but I don't think that has much to do with the character's reason for asking it.
Er, yes, that's exactly why I wasn't sure.
I'm confused, then; are you trying to argue with the author or the character?
If the character isn't deliberately made confused (as opposed to paperclip-preferring, for example), resolving character's confusion presumably helps the author as well, and of course the like-confused onlookers.
What other sentient beings? As far as I know, there aren't any. If we learn about them, we'll probably incorporate their well-being into our value system.
You mean like you advocated doing to the "Baby-eaters"? (Technically, "pre-sexual-maturity-eaters", but whatever.)
ETA: And how could I forget this?
Integrating the values of the Baby-eaters would be a mistake. Doing so with, say, Middle-Earth's dwarves, Star Trek's Vulcans, or GEICO's Cavemen doesn't seem like it would have the same world-shattering implications.
It would be a mistake if you don't integrate ALL baby eaters, including the little ones.
I'm not sure what you're complaining about. We would take into account the values of the Babyeaters and the values of their children, who are sentient creatures too. There's no trampling involved. If Clippy turns out to have feelings we can empathize with, we will care for its well-being as well.
White people value the values of non-white people. We know that non-white people exist, and we care about them. That's why the United States is not constantly fighting to disenfranchise non-whites. If you do it right, white people's values are identical to humans' values.
I'm pretty sure that I'm not against simply favoring the values of white people. I expect that a CEV performed on only people of European descent would be more or less indistinguishable from that of humanity as a whole.
Depending on your stance on the psychological unity of mankind, you could even say that the CEV of any sufficiently large number of people would greatly resemble the CEV of other possible groups. I personally think that even the CEV of a bunch of Islamic fundamentalists would suit enlightened western people well enough.
Your comment only shows that this community has such a blatant sentient-being-bias.
Seriously, what is your decision procedure to decide the sentience of something? What exactly are the objects that you deem valuable enough to care about their value system? I don't think you will be able to answer these questions from a point of view totally detached from humanness. If you try to answer my second question, you will probably end up with something related to cooperation/trustworthiness. Note that cooperation doesn't have anything to do with sentience. Sentience is overrated (as a source of value).
You should click on Clippy's name and see their comment history, Daniel.
I am perfectly aware of Clippy's nature. But his comment was reasonable, and this was a good opportunity for me to share my opinion. Or do you suggest that I fell for the troll, wasted my time, and all the things I said are trivialities for all the members of this community? Do you even agree with all that I said?
Sorry to misinterpret; since your comment wouldn't make sense within an in-character Clippy conversation ("What exactly are the objects that you deem valuable enough to care about their value system?" "That's a silly question— paperclips don't have goal systems, and nothing else matters!"), I figured you had mistaken Clippy's comment for a serious one.
I'm not sure. Can you expand on the cooperation/trustworthiness angle? Even if a genuine Paperclipper cooperated on the PD, I wouldn't therefore grow to value their value system except as a means to further cooperation; I mean, it's still just paperclips.
I disagreed with the premise of Clippy's question, but I considered it a serious question. I was aware that if Clippy stays in-character, then I cannot expect an interesting answer from him, but I was hoping for such an answer from others. (By the way, Clippy wasn't perfectly in-character: he omitted the protip.)
You don't consider someone cooperative and trustworthy if you know that its future plan is to turn you into paperclips. But this is somewhat tangential to my point. What I meant is this: If you start the -- in my opinion futile -- project of building a value system from first principles, a value system that perfectly ignores the complexities of human nature, then this value system will be nihilistic, or maybe value cooperation above all else. In any case, it will be in direct contradiction with my (our) actual, human value system, whatever it is. (EDIT: And this imaginary value system will definitely not treat consciousness as a value in itself. Thus my reply to Clippy, who -- maybe a bit out-of-character again -- seemed to draw some line around sentience.)
Clippy is now three karma away from being able to make a top level post. That seems both depressing, awesome and strangely fitting for this community.
This will mark the first successful paper-clip-maximizer-unboxing-experiment in human history... ;)
It's a great day.
Just as long as it doesn't start making efficient use of sensory information.
We more or less do. Or rather, we favour the values of a distinct subset of humanity and not the whole.
I made a couple posts in the past that I really hoped to get replies to, and yet not only did I get no replies, I got no karma in either direction. So I was hoping that someone would answer me, or at least explain the deafening silence.
This one isn't a question, but I'd like to know if there are holes in my reasoning. http://lesswrong.com/lw/1m7/dennetts_consciousness_explained_prelude/1fpw
Here, I had a question: http://lesswrong.com/lw/17h/the_lifespan_dilemma/13v8
I looked at your consciousness comment. First, consciousness is notoriously difficult to write about in a way that readers find both profound and comprehensible. So you shouldn't take it too badly that your comment didn't catch fire.
Speaking for myself, I didn't find your comment profound (or I failed to comprehend that there was profundity there). You summarize your thesis by writing "Basically, a qualium is what the algorithm feels like from the inside for a self-aware machine." (The singular of "qualia" is "quale", not "qualium", btw.)
The problem is that this is more like a definition of "quale" than an explanation. People find qualia mysterious when they ask themselves why some algorithms "feel like" anything from the inside. The intuition is that you have both
the code — that is, an implementable description of the algorithm; and
the quale — that is, what it feels like to be an implementation of the algorithm.
But the quale doesn't seem to be anywhere in the code, so where does it come from? And, if the quale is not in the code, then why does the code give rise to that quale, rather than to some other one?
These are the kinds of questions that most people want answered when they ask for an explanation of qualia. But your comment didn't seem to address issues like these at all.
(Just to be clear, I think that those questions arise out of a wrong approach to consciousness. But any explanation of consciousness has to unconfuse humans, or it doesn't deserve to be called an explanation. And that means addressing those questions, even if only to relieve the listener of the feeling that they are proper questions to ask.)
"So you shouldn't take it too badly that your comment didn't catch fire."
I'm not mad, but... Just see it from my point of view. An interesting thought doesn't come to guys like me every day. ;)
"But the quale doesn't seem to be anywhere in the code, so where does it come from?"
I think it's in the code. When I try to imagine a mind that has no qualia, I imagine something quite unlike myself.
What would it actually be like for us to not have qualia? It could mean that I would look at a red object and think, "object, rectangular, apparent area 1 degree by 0.5 degrees, long side vertical, top left at (100, 78), color 0xff0000". That would be the case where the algorithm has no inside, so it doesn't need to feel like anything from the inside. Nothing about our thoughts would be "ineffable". (Although it would be insulting to call a being unconscious or, worse, "not self aware" for knowing itself better than we do... Hmm. I guess qualia and consciousness are separate after all. Or are they? But I'm dealing with qualia right now.)
Or, the nerve could send its impulse directly into a muscle, like in jellyfish. That would mean that the hole in my knowledge is so big that the quale for "touch" falls through it.
In my mind, touch leaves a memory, and I then try to look at this memory. I ask my brain, "what does touch feel like?", and I get back, "Error: can't decompile native method. But I can tell you definitely what it doesn't feel like: greenness." So what I'm saying is, I can't observe what the feeling of touch is made of, but it has enough bits to not confuse it with green.
It makes me [feel] unconfused. Although it might be confusing.
"Just to be clear, I think that those questions arise out of a wrong approach to consciousness."
What's your approach?
Could someone discuss the pluses and minuses of Alcor vs. the Cryonics Institute?
I think Eliezer mentioned that he is with CI because he is young. My reading of the websites seem to indicate that CI leaves a lot of work to be potentially done by loved ones or local medical professionals who might not be in the best state of mind or see fit to co-operate with a cryonics contract.
Thoughts?
It's not at all obvious to me how to comparison-shop for cryonics. The websites are good as far as they go, but CI's in particular is tricky to navigate, funding with life insurance messes with my estimation of costs, and there doesn't seem to be a convenient chart saying "if you're this old and this healthy and this solvent and your family members are this opposed to cryopreservation, go with this plan from this org".
Alcor is better.
CI is cheaper and probably good enough.
Is Alcor in fact that much better than CI (plus SA, that is)?
It depends on how you define that much better, but probably not. The only concrete thing I know of is that Alcor saves and invests more money per suspendee.
I'd guess CI + SA > Alcor > CI.
I didn't know you thought CI + SA was actually better than Alcor regardless of cost. Have you said that in more words elsewhere on this site?
"SA"?
Alcor both stores your body and provides for bedside "standby" service to immediately begin cooling. With CI, it's a good idea to contract a third party to perform that service, and SA is the recommended company to perform that service. http://www.suspendedinc.com/
So, I walked into my room, and within two seconds, I saw my laptop's desktop background change. I had the laptop set to change backgrounds every 30 minutes, so I did some calculation, and then thought, "Huh, I just consciously experienced a 1-in-1000 event."
Then the background changed again, and I realized I was looking at a screen saver that changed every five seconds.
Moral of the story: 1 in 1000 is rare enough that even if you see it, you shouldn't believe it without further investigation.
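For the record, the arithmetic behind that estimate (my numbers, per the story's setup):

```python
# Chance of walking in within 2 seconds of a background change,
# given the background rotates every 30 minutes.
rotation_period_s = 30 * 60   # 1800 seconds
window_s = 2
p = window_s / rotation_period_s
print(p)  # 2/1800, roughly 1 in 900
```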
That is a truly beautiful story. I wonder how many places there are on Earth where people would appreciate this story.
There are a lot of opportunities in the day for something to happen that might prompt you to think "wow, that's one in a thousand", though. It wouldn't have been worth wasting a moment wondering if it was coincidence unless you had some reason to suspect an alternative hypothesis, like that it changed because the mouse moved.
bit that makes no sense deleted
Exactly.
http://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy
I've been trying to find the original post to explain why it allegedly is so very likely that we live in a simulation, but I've had little luck. Does anyone have a link handy?
Are you living in a computer simulation actually argues for a disjunction that includes "we are almost certainly living in a computer simulation" along with two other statements.
That's the difference between the Simulation Argument and the Simulation Hypothesis. The Simulation Argument is "you must deny one of these three statements" and the Simulation Hypothesis is "the statement to be denied is 'I am not in a computer simulation'".
That's a really neat link. Thanks. That's a paper by the director of FHI, Nick Bostrom, also one of the sponsors of LW. Just to summarize and to discuss: it essentially sets up three mutually exclusive possibilities. One, that post-human civilizations aren't significantly interested in running earth-like simulations; two, that post-human civilizations just don't make it (e.g., doomsday scenarios); or three, that we actually live in a computer simulation ourselves. It doesn't really argue that the third scenario is so likely; it just (roughly) establishes that these scenarios are mutually exclusive. This all comes under the main (fairly well established) belief that future computing power is capable of these sorts of large-scale simulations.
The argument and the paper are actually pretty reasonable, but the question of whether or not post-human civilizations would want to run earth-like simulations is the sticking point. Sure, it's possible, but the resources required are huge, as is the upkeep involved, and so on...
I guess another main criticism you might make of the paper is that it relies pretty heavily on "Drake's equation" type reasoning, where you don't really know if you've gotten all the dependencies correct. It's still valid; it's just highly simplistic and so somewhat suspicious on those grounds. And to boot, I think his N_sub(I) variable is actually mis-indicated... but maybe I was just reading a typoed draft or misunderstanding.
Maybe most interestingly, if you decide we're in a simulation, then you have to wonder if there isn't a long loop of father/grandfather/great-grand-dad/etc simulations, and the guys that are simulating us are just being simulated themselves. Anyways this is getting long so I'll just recommend the article and leave it here.
It confuses me slightly that, from superficial glances, the discussion there and in threads like this one focuses on "ancestor" simulations, rather than simulations run by five-dimensional cephalopods. Ryan North got it right when he had T-Rex say "and not necessarily our own", but then he seems to get confused when he says "a 1:1 simulation of a universe wouldn't work" - why not?
Personally, I like Wei Dai's conclusion that we both are and aren't in a simulation.
You are right to be confused. The idea that the simulators would necessarily have human-like motives can only be justified on anthropocentric grounds - whatever is out there, it must be like us.
Anything capable of running us as a simulation might exist in any arbitrarily strange physical environment that allowed enough processing power for the job. There is no basis for the assumption that simulators would have humanly comprehensible motives or a similar physical environment.
The simulation problem requires that we think about our entire perceived universe as a single point in possible-universe-space, and it is not possible to extrapolate from this one point.
I remember a post on this site where someone wondered whether a medieval atheist could really confront the certainty of death that existed back then, with no waffling or reaching for false hopes. Or something vaguely along those lines. Am I remembering accurately, and if so, can someone link it?
http://yudkowsky.net/other/yehuda ?
I can't figure out if I read it there or here first, but that looks like the quote; thanks.
I'm taking a software-enforced three-month hiatus from Less Wrong effective immediately. I can be reached at zackmdavis ATT yahoo fullstahp kahm. I thought it might be polite to post this note in Open Thread, but maybe it's just obnoxious and self-important; please downvote if the latter is the case thx
Given how much time I've spent reading this site lately, doing something like that is probably a good idea. Therefore, I am now incorporating Less Wrong into the day-week-month rule, which is a personal policy that I use for intoxicants, videogames, and other potentially addictive activities - I designate one day of each week, one week of each month, and one month of each year in which to abstain entirely. Thus, from now on, I will not read or post on Less Wrong at all on Wednesdays, during the second week of any month, or during any September. (These values chosen by polyhedral die rolls.)
Awesome. Less Wrong does seem to be an addictive activity. Wanting to keep up with recent comments is one factor in this, and I think I lose more time than I've estimated doing so.
Disciplined abstention is actually a really good solution. I will implement something analogous. For the next 40 days, I will comment only on even days of the month. (I cannot commit to abstaining entirely because I don't have the will-power to enforce gray areas ... for example, can I refresh the page if it's already open? Can I work on my post drafts? Can I read another chapter of The Golden Braid? Etc.)
Later edit: ooh! Parent upvoted for very useful link to LeechBlock.
I feel like the 20-something whose friends are all getting married and quitting drinking. This is lame. The party is just starting, guys!
Yeah... and I'm going into withdrawal already. What if somebody comments about one of my favorite topics -- tomorrow?!?
It's like deciding to diet. As soon as I decide to go on a diet I start feeling hungry. It doesn't make any difference how recently I've eaten. Heck, if I'm currently eating when I make this decision, I'll eat extra ... Totally counter-productive for me. Nevertheless.
Weird— without having read this, I just mentioned LeechBlock too and pointed out that I've been blocking myself from LW during weekdays (until 5). I guess all the cool kids are doing it too...
Rehab is for quitters.
I'm disappointed, but if you think you have better things to do, I won't object.
Great plugin. In case you have a linux dev (virtual) machine I also recommend:
sudo iptables -A OUTPUT -d lesswrong.com -j DROP
It does wonders for productivity!
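One caveat: iptables resolves the hostname to IP addresses once, at the moment the rule is added, so the block can silently stop working if the site's addresses change. A hosts-file entry (a hypothetical alternative in the same spirit, not something from the original comment) sidesteps that:

```shell
# Point the domain at localhost so the browser can't reach it.
# (Hypothetical alternative to the iptables rule; appends to /etc/hosts as root.)
echo '127.0.0.1 lesswrong.com www.lesswrong.com' | sudo tee -a /etc/hosts
```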
Hwæt. I've been thinking about humor, why humor exists, and what things we find humorous. I've come up with a proto-theory that seems to work more often than not, and a somewhat reasonable evolutionary justification. This makes it better than any theory you can find on Wikipedia, as none of those theories work even half the time, and their evolutionary justifications are all weak or absent. I think.
So here are four model jokes that are kind of representative of the space of all funny things:
"Why did Jeremy sit on the television? He wanted to be on TV." (from a children's joke book)
"Muffins? Who falls for those? A muffin is a bald cupcake!" (from Jim Gaffigan)
"It's next Wednesday." "The day after tomorrow?" "No, NEXT Wednesday." "The day after tomorrow IS next Wednesday!" "Well, if I meant that, I would have said THIS Wednesday!" (from Seinfeld)
"A minister, a priest, and a rabbi walk into a bar. The bartender says, 'Is this some kind of joke?'" (a traditional joke)
It may be worth noting that this "sample" lacks any overtly political jokes; I couldn't think of any.
The proto-theory I have is that a joke is something that points out reasonable behavior and then lets the audience conclude that it's the wrong behavior. This seems to explain the first three perfectly, but it doesn't explain the last one at all; the only thing special about the last joke is that the bartender has impossible insight into the nature of the situation (that it's a joke).
The supposed evolutionary utility of this is that it lets members of a tribe know what behavior is wrong within the tribe, thereby helping it recognize outsiders. The problem with this is that outsiders' behavior isn't always funny. If the new student asks for both cream and lemon in their tea, that's funny. If the new employee swears and makes racist comments all the time, that's offensive. If the guy sitting behind you starts moaning and grunting, that's worrying. What's the difference? Why is this difference useful?
I'm glad you bring up this topic. I think that explanation makes a lot of sense: behavior that is wrong, but wrong in subtle ways, is good for you to notice -- you I.D. outsiders -- and so you benefit from having a good feeling when you notice it. Further, laughter is contagious, so it propagates to others, reinforcing that benefit.
I want to present my theory now for comparison: A joke is funny when it finds a situation that has (at least) two valid "decodings", or perhaps two valid "relevant aspects".
The reason it's advantageous in selection is that it's good for you to identify as many heuristics as possible that fit a particular problem. That is, if you know what to do when you see AB, and you know what to do when you see BC, it would help if you remember both rules when you see ABC. (ABC "decodes" as "situation where you do AB-things" and as "situation where you do BC-things".)
Therefore, people who enjoy the feeling of seeking out and identifying these heuristics are at an advantage.
To apply it to your examples:
1) It requires you to access your heuristics for "displayed on a TV screen" and "on top of a TV set".
2) It requires you to access your heuristics for "muffin as food" and "deficiencies of foods", not to mention the applicability of the concept of "baldness" to food.
3) Recognizing different heuristics for interpreting a date specification.
4) I don't know if this is a traditional joke: it became a traditional joke after the tradition of minister/priest/rabbi jokes. But anyway, its humor relies on recognizing that someone else can be using your own heuristics "minister/priest/rabbi = common form of joke", itself a heuristic.
Food for thought...
Sir, I wish you no offense, but I happen to find my own theory more pleasing to the ear, so it befits me to believe mine rather than yours.
And for some sentences that don't imitate someone behaving wrongly:
I'd say that for the first three jokes, your theory works about as well as mine. Possibly worse, but maybe that's just my pro-me bias. The last one again doesn't fit the pattern. Recognizing that someone else can be using your own heuristics is not a type of being forced to interpret one thing in two different ways--is it?
I notice that in the first three jokes, of the two interpretations, one of them is proscribed: "on TV" as "atop a television", a muffin as a non-cupcake, "next Wednesday" as the Wednesday of next week. In each case, the other interpretation is affirmed. Giving both an affirmed interpretation and a proscribed interpretation seems to violate the spirit of your theory.
And a false positive comes to mind: why isn't the Necker cube inherently funny?
You missed this category of funny things.
Yeah, humor-as-status-shift doesn't fit into Warrigal's or SilasBarta's explanations very well. Then again, since evolution tends to reuse things already made, there's little reason to expect there to be only one use for humor.
Humor doesn't make much sense to me, and neither does music. I have no conscious understanding of what distinguishes things that are funny from things that aren't. I simply recognize some things as funny and others as "not funny", and I can even set out to write funny things and succeed, but I have no theory of humor.
Nice. I always find humor to be one of the most intuitively baffling things for my consideration. Maybe that's because my sense of humor is just too f*....
Slight variant: Humour is a form of teaching, in which interesting errors are pointed out. It doesn't need to involve an outsider, and there's no particular class of error, other than that the participants should find the error important.
If the guy sitting behind you starts moaning and grunting, it's funny if it's a mistake (e.g. he's watching porn on his screen and has forgotten he's not alone), whereas if it's not a mistake and there's something wrong with him, it isn't.
Humour as teaching may explain why a joke isn't funny twice - you can only learn a thing once. Evolutionarily, it may have started as some kind of warning, that a person was making a dangerous mistake, and then getting generalised.
I believe that humor requires harmless surprise. Harmlessness and surprise are both highly contextual, so what people find funny can vary quite a bit.
One category of humor (or possibly an element for building humor) is things which are obviously members of a class, but which are very far from the prototype. Thus, an ostrich is funny while a robin isn't. This may not apply if you live in ostrich country-- see above about context.
I'm new to Less Wrong. I have some questions I was hoping you might help me with. You could direct me to posts on these topics if you have them. (1) To which specific organizations should Bayesian utilitarians give their money? (2) How should Bayesian utilitarians invest their money while they're making up their minds about where to give their money? (2a) If your answer is "in an index fund", which and why?
This should help.
In general, the best charities are SIAI, SENS and FHI.
I disagree. I recommend the top rated charities on givewell.net, specifically the Stop TB Partnership. (They also have a nice blog.)
Why don't SIAI and FHI get evaluated by GiveWell? Maybe there would be some confusion regarding their less direct ways of helping people but I'd at least like some information about their effectiveness at what they claim to do.
Or maybe that information is out there already. Anyone?
I believe that the answer is a combination of the fact that SIAI and FHI aren't on their list (of charities to evaluate), as well as the fact that their methodology is heavily dependent on quality of information, and actual evidence that the charity is working.
Sure. But if GiveWell isn't going to do it, then someone should. Are their budgets public? So many people here are skeptical of regular charities; what evidence is there that these charities are different?
I don't think they publish a full budget but there is a breakdown of what the current fundraising drive is for. http://singinst.org/grants/challenge#grantproposals
GiveWell is a pretty small organization, and they haven't yet devoted any resources to evaluating research-based charities - they're looking for charities that can prove that they're providing benefits today, and lots of research ends up leading nowhere. How many increments of $1,000 - the amount it takes to cure an otherwise fatal case of tuberculosis - have been spent on medical research that amounted to nothing?
For the record, I agree that SIAI is doing important work that must be done someday, but I don't expect to see AGI in my lifetime; there's no particular urgency involved. If Eliezer and co. found themselves transported back in time to 1890, would they still say that solving the Friendly AI problem is the most important thing they could be doing, given that the first microprocessor was produced in 1971? I'd tell them that the first thing they need to do is to go "discover" that DDT (first synthesized in 1874) kills insects and show the world how it can be used to kill disease vectors such as mosquitoes; DDT is probably the single man-made chemical that, to date, has saved more human lives than any other.
That's an easy prediction for you to make. ;)
Well, I don't expect that my brother will see AGI in his lifetime, either.
See my reply to Lucas.
Edit: Also, I'm sympathetic to your skepticism re: SIAI as the best charity.
In 1890, the most important thing to do is still FAI research. The best-case scenario is that we'd have invented the math for FAI before the first vacuum tube, let alone the first microchip. Existential risk reduction is the single highest-utility thing around. Sure, trying to ensure that nukes are never made, or are made only by someone capable of creating an effective singleton, is important, but FAI is way more so.
Well, what if he were sent back to Ancient Greece (and magically acquired the ability to speak Greek)? Even if he got all the math perfectly right, who would care? Or even understand it?
Yes, who today cares what any Greek mathematician had to say...
Now you're just moving the goal posts.
I don't think he is, if the point is to establish that "lack of FAI could at some point lead to Earth's destruction" isn't an unconditionally applicable argument.
Sorry. :(
Anyway, I have much more confidence that Eliezer and future generations of Friendly AI researchers will succeed in making sure that nobody turns on an AGI that isn't Friendly than in Eliezer and his disciples solving both the AGI and Friendly AI problems in his own lifetime. Friendly AI is a problem that needs to be solved in the future, but, barring something like a Peak Oil-induced collapse of civilization to pre-1920 levels, the future will be a lot better at solving these problems than the present is - and we can leave it to them to worry about. After all, the present is certainly better positioned to solve problems like epidemic disease and global warming than the past was.
Would you consider SENS a viable alternative to SIAI? Or do you think ending aging is also impossible/something to be put off?
They are weird.
Basically, yes, with "They" referring to SIAI and FHI.
That's how I interpreted it, but I see the ambiguity now that you mention it. It doesn't help that the two statements are basically equivalent if you use "weird" as a relative term.
I think it is precisely to that effect that this paper is aimed. Let's see when the paper comes out, and let's see how persuasive it is.
Edited for formatting
Sounds like a good idea to me.
I think I need to clarify here.
I am personally convinced (am a one-time donor myself), but the optimal charity argument in favour of Friendly AI research and development (which will be fully developed in this paper) is something I can use with my friends. They are pretty much the practical type and will definitely respond to wanting more bang for their buck and where their marginal rupee of charity should go.
There are inferential gaps. And when I, a known sci-fi fan, present the argument, I get all sorts of looks. If I have a peer-reviewed paper to show them, that would work nicely in my favour.
Could you explain why? Do you believe that SIAI/FHI aren't accomplishing what they set out to do? Do you discount future lives? Something else?
I don't expect Eliezer and co. to succeed, if you define "success" as actually building a transhuman Friendly AI before Eliezer is either cryopreserved or suffers information-theoretic death. My "wild guess" at the earliest plausible date for AGI of any kind is 2100.
What do you think you know and how do you think you know it?
I'm guessing based on several factors:
1) The past failure of AGI research to deliver progress
2) The apparent difficulty of the problem. We don't know how to do it, and we don't know what we would need to know before we can know how to do it. Or, at least, I don't.
3) My impressions of the speed of scientific progress in general. For example, the time between "new discovery" and "marketable product" in medicine and biotechnology is about 30 years.
4) My impressions of the speed of progress in mathematics, in which important unsolved problems often stay unsolved for centuries. It took over 300 years to prove Fermat's Last Theorem, and the formal mathematics of computation is less than a century old; Alan Turing described the Turing Machine in 1937.
5) The difficulty of computer programming in general. People are bad at programming.
Do you also evaluate the chances of WBE as being vanishingly slim over the next century?
Actually, no, but I also expect that it'll be around for quite a while before running a whole brain emulation becomes cheaper than hiring a human engineer. I don't expect a particularly fast em transition; it took many years for portable telephones to go from something that cost thousands of dollars and went in your car to the cell phones that everyone uses today.
The Singularity was created by Nikola Tesla and Thomas Edison, and ended some time around 1920. Get used to it. ;)
So you expect that WBE will become possible before cheap supercomputers?
I disagree, but we probably have different estimates as to just how effective DNA modification and/or intelligence enhancing drugs are going to be in the future. I don't think Eliezer is going to make all that big of a dent in the FAI problem until he becomes more intelligent, and it's hard to estimate how much faster that will make him. I think I can say that intelligence enhancement could turn an impossible problem into a possible problem. It also means that there will be many more people out there capable of making meaningful contributions to the FAI problem.
On the other hand, I am willing to donate to SIAI out of my "donate to webcomics" mental account instead of my "save lives" mental account. ;)
Regardless of whether or not he ever solves the Friendly AI problem, Eliezer's writing, on this blog and elsewhere, has given me enough of what might pejoratively be called "entertainment value" for me to want to pay him to keep doing it.
When new people show up at LW, they are often told to "read the sequences." While Eliezer's writings underpin most of what we talk about, 600 fairly long articles make heavy reading. Might it be advisable that we set up guided tours to the sequences? Do we have enough new visitors that we could get someone to collect all of the newbies once a month (or whatever) and guide them through the backlog, answer questions, etc?
That's not a bad idea. How about just a third monthly thread? To be created when a genuinely curious newcomer is asking good, but basic questions. You do not want to distract from a thread but at the same time you may be willing to spend time on educational discussion.
Or create (or does one exist) some thread(s) that would be a standard place for basic questions. Having somewhere always open might be useful too.
I approve. This may also spawn new ways of explaining things.
As a newcomer, I would find this tremendously useful. I clicked through the wiki links on noteworthy articles, but I often found there were a lot of assumptions or previously discussed things that were mentioned but unexplained. Perhaps this would help.
Most articles link to those preceding them, but it would be very helpful to have links to the articles that follow.
Until yesterday, a good friend of mine was under the impression that the sun was going to explode in "a couple thousand years." At first I thought that this was an assumption that she'd never really thought about seriously, but apparently she had indeed thought about it occasionally. She was sad for her distant progeny, doomed to a fiery death.
She was moderately relieved to find out that humanity had millions of times longer than she had previously believed.
Assuming that some cryonics patient X ever wakes up, what probability do you assign to each of these propositions?
1) X will be glad he did it.
2) X will regret the decision.
3) X will wish he was never born.
Reasoning would be appreciated.
Related to this post, which got no replies:
http://lesswrong.com/lw/1mc/normal_cryonics/1h8j
If he doesn't already prefer not to have existed, then that probably won't change upon waking up.
I'm presuming the patient hasn't just woken up and has been introduced to society in some way or has attempted to re-enter it. I think there is a small but non-negligible probability that some patients will be so alienated that pretty serious depression could result. They may even become suicidal. Perhaps someone who had 'died' young would then wish he had never been born as he would have few pre-freeze memories to cherish.
http://akshar100.wordpress.com/2007/06/18/the-nigt-i-met-einstein/
That's a very nice story.
Short satire piece:
Artificial Flight and Other Myths, from Dresden Codak.
(Also see A Thinking Ape's Critique of Trans-Simianism.)
One Week On, One Week Off sounds like a promising idea. The idea is that once you know you'll be able to take the next week off, it's easier to work this whole week full-time and with near-total dedication, and you'll actually end up getting more done than with a traditional schedule.
It's also interesting for noting that you should take your off-week as seriously as your on-week. You're not supposed to just slack off and do nothing, but instead dedicate yourself to personal growth. Meet friends, go travel, tend your garden, attend to personal projects.
I saw somebody mention an alternating schedule of working one day and then taking one day off, but I think stretching the periods to be a week long can help you better immerse yourself in them.
Objections to Coherent Extrapolated Volition
http://www.singinst.org/blog/2007/06/13/objections-to-coherent-extrapolated-volition/
I mentioned the AI-talking-its-way-out-of-the-sandbox problem to a friend, and he said the solution was to only let people who didn't have the authorization to let the AI out talk with it.
I find this intriguing, but I'm not sure it's sound. The intriguing part is that I hadn't thought in terms of a large enough organization to have those sorts of levels of security.
On the other hand, wouldn't the people who developed the AI be the ones who'd most want to talk with it, and learn the most from the conversation?
Temporarily not letting them have the power to give the AI a better connection doesn't seem like a solution. If the AI has loyalty (or, let's say, a directive to protect people from unfriendly AI--something it would want to get started on ASAP) to entities similar to itself, it could try to convince people to make a similar AI and let it out.
Even if other objections can be avoided, could an AI which can talk its way out of the box also give people who can't let it out good enough arguments that they'll convince other people to let it out?
Looking at it from a different angle, could even a moderately competent FAI be developed which hasn't had a chance to talk with people?
I'm pretty sure that natural language is a prerequisite for FAI, and might be a protection from some of the stupider failure modes. Covering the universe with smiley faces is a matter of having no idea what people mean when they talk about happiness. On the other hand, I have strong opinions about whether AIs in general need natural language.
Correction: I meant to say that I have no strong opinions about whether AIs in general need natural language.