Open Thread: March 2010, part 3

The previous open thread has now exceeded 300 comments – new Open Thread posts may be made here.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
[-]ata290

Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can't recall the exact context — one of them said: "…but no mortal man wants to live forever."

I said: "I do!"

He paused a moment and then said: "Hmm. Yeah, so do I."

I think that's the fastest I've ever talked someone out of wise-sounding cached pro-death beliefs.

[-]Rain240

What is the appropriate method to tap out when you don't want to be thrown to the rationality mat any more?

What's the best way for me to stop a thread when I no longer wish to participate, as my emotions are turning sour, and I recognize I will begin saying bad things?

May I suggest "I'm tapping out", perhaps with a link to this very comment? It's a good line (and perhaps one way the dojo metaphor is valuable).

I think in this comment you did fine. Don't sweat it if the comment that signals "I'm stopping here" gets downvoted; don't try to avoid that.

In this comment I think you are crossing the "mind reading" line, where you ascribe intent to someone else. Stop before posting those.

-4Rain
I like mind reading. I'm good at it.

Absent statistical evidence drawn from written and dated notes, you should hold it very plausible that your impression you're good at it is due to cognitive bias. Key effects here include hindsight bias, the tendency to remember successes better than failures, the tendency to rewrite your memories after the fact so that you appear to have predicted the outcome, and the tendency to count a prediction as a success - the thousand-and-one-fold effect.

-3Rain
You're good at listing biases. I'm good at creating mental models of other people from limited information. Absent statistical information, you should hold it very plausible that I am under the effect of biases, as I'm certainly not giving you enough data to update to the point where you should consider me good at anticipating people's actions and thoughts. However, obtaining enough written and statistical evidence to allow you to update to the same level of belief that I hold (I would appropriately update as well) is far too difficult considering the time spans between predictions, their nature of requiring my engagement in the moment, etc.

My weak evidence is that, having subscribed to sl4 several years ago, following OB and now LW on at least a monthly basis, and having read and incorporated much of what is written here into my own practices, I still have this belief, and feel that it is very unlikely to be a wrong belief. Or perhaps you're overestimating my "good" qualifier and we're closer than we think. At any rate, I apologize for stating a belief that I am unwilling to provide strong evidence to support.
3Paul Crowley
I'd update on you saying "I have good statistical evidence drawn from written, dated notes" even if you didn't show me the evidence. EDIT: to make this point clearer - I would update more strongly on your assurances if I could think of a likely mechanism, other than the one I propose, by which one could gain confidence in the superiority of one's mind-reading skills.
3Rain
It's not that sort of prediction, I don't think. It's more social and inferential, based on past and current events, and rarely works as well for the future (more than a few hours), though it does to some degree.

I don't carry a notebook with me, and oftentimes this is used in a highly social environment, so writing it down would not be appropriate or easy to do. I consider it a form of pattern matching, where I determine the thoughts and feelings of the other person through my knowledge of them and by using real-time interaction, body language, etc. It's rapid correlation of environmental cues and developed mental models.

Examples of its use: "What does it mean that they stopped talking? What does that slight glance to the left mean? What does it mean that they used that particular word? Why didn't they take action X? Why did they take action Y, but Z didn't come of it?"

I think the phrase "mind reading" is a bit much. Note the original context: "ascribing intent." I'm just using tells that I've learned over time to discern what someone else is thinking or feeling, with my own feeling as to how likely it is that I'm correct (internal, subjective bayesometer?). I've learned to trust it over time because it's been so useful and accurate.

Also note that the training period, where I initially develop the mental model, tends to consist of things like asking the other person, "What do you mean?" and then remembering their answer when a similar event comes around again. :-P

ETA: I think my pattern matching and memory skills are also what give me my wicked déjà vu. And it's likely more normal people would call this "social skills," though I seem to lack such innate capability.
4Morendil
Indulge in private. :)
3orthonormal
Even if you're good at ascribing intent to others, stating it is likely to do more harm than good. I've tried in the past to give people my analysis of why they're thinking what they're thinking. It inevitably reinforces their resistance rather than lessening it, since agreeing with my analysis would mean publicly acknowledging a character flaw. It's much better to leave them a line of retreat, letting them think of changing their mind in terms of "updating on new evidence" rather than "admitting irrationality". P.S. I'm not responding to the linked example, but to the general practice which I think is counterproductive.
0Rain
I'll provisionally agree that it's not all that useful to tell people what you think of their intent. This is why I linked it as a 'bad thing' for me to say: I considered it a generally combative post, where my intent was to sneer at the other person rather than alter their behavior for the better. I tend to get around this in real world conversations with the use of questions rather than statements, but that requires rapid back and forth. Text forums are just about the worst place to use my most-developed methods of discussion...
5Kevin
I've twice intentionally taken ~48 hours away from this site after I said something stupid. Give it a try. Just leave the conversations hanging; come back days or weeks later if you want. Also, admitting you were wrong goes a long way if you realize you said something that was indeed incorrect, but the rationality police won't come after you if you leave a bad conversation unresolved.
2CannibalSmith
gg
-1kodos96
Just curious: who downvoted this, and why? I found it amusing, and actually a pretty decent suggestion. It bothers me that there seems to be an anti-humor bias here... it's been stated that this is justified in order to keep LW from devolving into a social, rather than intellectual forum, and I guess I can understand that... but I don't understand why a comment which is actually germane to the parent's question, but just happens to also be mildly amusing, should warrant a downvote.
4ata
Did the comment say something other than "gg" before? I'm not among those who downvoted it, but I don't know what it means. (I'd love to know why it's "amusing, and actually a pretty decent suggestion".)
3Matt_Simpson
"good game" It's sort of like an e-handshake for online gaming to acknowledge that you have lost the game - at least in the online mtg community.
1kpreid
In my experience (most of which is a few years old) it is said afterward, but has its literal meaning, i.e. that you enjoyed the game, not necessarily that you lost it.
0Sniffnoy
I think this depends on whether the game is one that's usually played to the end or one where one of the players usually concedes. If it's the latter, "gg" is probably a concession.
0SoullessAutomaton
A nontrivial variant is also directed sarcastically at someone who lost badly (this seems to be most common where the ambient rudeness is high, e.g., battle.net).
0[anonymous]
Or a handshake to start a game. It would stop being funny pretty fast if one gave up when facing an empty Go board ;-)
0kodos96
Hmmm... I guess I was engaging in mind projection fallacy in assuming everyone got the reference, and the downvote was for disapproving of it, rather than just not getting it.
0prase
I have downvoted it. I had no idea what it meant (before reading the comments). Quick googling doesn't reveal that much.
0Morendil
I thought it was too short and obscure. (On KGS we say that at the start of a game. The normal end-of-game ritual is "thx". Or sometimes storming off without a word after a loss, to be lucid.)
0CannibalSmith
Explaining it would ruin the funnies. Also, Google. Also, inevitably, somebody else did the job for me.
-1Nick_Tarleton
Maybe someone thought it rude to make a humorous reply to a serious and apparently emotionally loaded question.
0RobinZ
There's the stopgap measure of asking for a rain check (e.g. "I'm sorry, but this conversation is just getting me frustrated - I'll come back tomorrow and see if I can come up with a better reply"), but I'm not sure what the best methods are to conclude a conversation. Most of the socially-accepted ones are logically rude.

Are you guys still not tired of trying to shoehorn a reddit into a forum?

2RobinZ
I don't understand the question. What are we doing that you describe this way, and why do you expect us to be tired of it?
4jimmy
There are a lot of open thread posts, which would be better dealt with on a forum rather than on an open thread with a reddit like system.
1RobinZ
You're right, but this isn't supposed to be a forum - I think it's better to make off-topic conversations less convenient. The system seems to work adequately right now.
4nhamann
I suppose you're right in saying that LW isn't supposed to be a forum, but the fact remains that there is a growing trend towards more casual/off-topic/non-rationalism discussion, which seems perfectly fine to me given that we are a community of generally like-minded people. I suspect that it would be preferable to many if LW had better accommodations for these sort of interactions, perhaps something separate from main site so we could cleanly distinguish serious rationalism discussion from off-topic discussion.
0Matt_Duing
Perhaps a monthly or quarterly lounge thread could serve this function, provided it does not become too much of a distraction.
0CronoDAS
Me neither.
1Kevin
I'm tired of it, I'd like to get a real subreddit enabled here as soon as possible.
0Jack
We could just start a forum and stop complaining about it.
[-]Liron120

Startup idea:

We've all been waiting for the next big thing to come after Chatroulette, right? I think live video is going to be huge -- it's a whole new social platform.

So the idea is: Instant Audience. Pay $1, get a live video audience of 10 people for 1 minute. The value prop is attention.

The site probably consists of a big live video feed of the performer, and then 10 little video feeds for the audience. The audience members can't speak unless they're called on by the performer, and they can be "brought up on stage" as well.

For the performer, it's a chance to practice your speech / stand-up comedy routine / song, talk about yourself, ask people questions, lead a discussion, or limitless other possibilities (ok we are probably gonna have to deal with some suicides and jackers off).

For the audience, it's a free live YouTube. It's like going to the theater instead of watching TV, but you can still channel surf. It's a new kind of live entertainment with great audience participation.

Better yet, you can create value by holding some audience members to higher standards of behavior. There can be a reputation system, and maybe you can attend free performances to build up your ... (read more)

4mattnewport
I don't think you should charge a fixed rate per person. An auction or market would be a better way to set pricing, something like Amazon's Mechanical Turk or the Google adwords auctions.
3Kevin
I give it a solid "that could work" but the business operations are non-trivial. You probably would need someone with serious B2B sales experience, ideally already connected with the NYC-area focus group/marketing community.
0JamesAndrix
If you're charging a dollar a group, I don't think a salesperson could pay for themselves. You could probably charge more to anyone who would otherwise have to rent a room/offer incentives/etc. but that would hurt adoption by more casual presenters, which I think you would need to keep your audience.

Interesting article on an Indian rationalist (not quite in the same vein as lesswrong style rationalism but a worthy cause nonetheless). Impressive display of 'putting your money where your mouth is':

Sceptic challenges guru to kill him live on TV

When a famous tantric guru boasted on television that he could kill another man using only his mystical powers, most viewers either gasped in awe or merely nodded unquestioningly. Sanal Edamaruku’s response was different. “Go on then — kill me,” he said.

I also rather liked this response:

When the guru’s initial efforts failed, he accused Mr Edamaruku of praying to gods to protect him. “No, I’m an atheist,” came the response.

H/T Hacker News.

2Jack
As cool as this was there is reason to doubt its authenticity. There doesn't seem to be any internet record of Pandit Surender Sharma "India's most powerful Tantrik" except for this TV event. Moreover, about a minute in it looks like the tantrik is starting to laugh. Maybe someone who knows the country can tell us if this Pandit Sharma fellow is really a major figure there. I mean, what possible incentive would the guy have for going on TV to be humiliated?
3prase
Perhaps he really believed he could kill the skeptic.
2FAWS
Note: Most of the article is not about the TV confrontation so it's well worth reading even if you already heard about that in 2008.

What would be the simplest credible way for someone to demonstrate that they were smarter than you?

3wedrifid
If they disagree with me and I (eventually?) agree with them, three times in a row. Applies more to questions of logic than questions of knowledge.
0RobinZ
I'm not sure about the "three" or the "applies more to questions of logic than questions of knowledge", but yeah, pretty much. Smarts gets you to better answers faster.
3wedrifid
I'm not sure about the throwaway 'three' either but the 'crystal vs fluid' is something that is true if I am considering "demonstrate to me..." I find that this varies a lot based on personality. What people know doesn't impress me nearly as much as seeing how they respond to new information, including how they update their understanding in response.
2RobinZ
That makes sense. Those two bits are probably fairly good approximations to correct, but I can smell a possibility of better accuracy. (For example: "logic" is probably overspecific, and experience sounds like it should land on the "knowledge" side of the equation but drawing the correct conclusions from experience is an unambiguous sign of intelligence.) I generally agree, I'm merely less-than-confident in the wording.
0wedrifid
Definitely. Ditto. Absolutely. So am I. Improve it for me?
0RobinZ
I would quickly start believing someone was smart if they repeatedly drew conclusions that looked wrong, but which I would later discover are correct. I would believe they were smarter than me if, as a rule, whenever they and I are presented with a problem, they reach important milestones in the solution or dissolution of the problem quicker than I can, even without prior knowledge of the problem.

Concrete example: xkcd #356 includes a simple but difficult physics problem. After a long time (tens of minutes) beating my head against it, letting it stew (for months, at least), and beating my head against it again (tens of minutes), I'd gotten as far as getting a wrong answer and the first part of a method. Using nothing but a verbal description of the problem statement from me, my dad pulled out the same method, noting the problem with that method which I had missed when finding my wrong answer, within five minutes or so. While driving.

(I've made no progress past that insight - rot13: juvpu vf gung lbh pna (gel gb) fbyir sbe gur pheerag svryq sebz n "fbhepr" be "fvax" bs pheerag, naq gura chg n fbhepr-fvax cnve vagb gur argjbex naq nqq Buz'f-ynj ibygntrf gb trg gur erfvfgnapr - since the last time I beat my head against that problem, by the way.)
0wedrifid
Bah. I was hoping your dad gave the actual answer. That's as far as I got too. :)
0RobinZ
He suggested fbyivat n frevrf grez-ol-grez zvtug or arprffnel but I didn't know precisely what he meant or how to do it.
0wnoise
The canonical method is to nggnpu n pheerag qevire gb rirel abqr. Jevgr qbja gur Xvepubss'f ynj ynj rirel abqr va grezf bs gur vawrpgrq pheerag, gur ibygntr ng gung ybpngvba, naq gur ibygntr ng rnpu nqwnprag cbvag. Erjevgr gur nqwnprag ibygntrf va grezf bs genafyngvba bcrengbef, gura qb n (frzv-qvfpergr) Sbhevre genafsbez (gur qbznva vf vagrtref, pbqbznva obhaqrq serdhrapvrf, fb vg'f gur bccbfvgr bs n Sbhevre frevrf), chg va gur pbaqvgvbaf sbe n havg zntavghqr fbhepr naq fvax, naq vaireg vg, juvpu jvyy tvir lbh gur ibygntrf rireljurer. Gur qvssreraprf va ibygntrf orgjrra gur fbhepr naq fvax vf gur erfvfgnapr, orpnhfr gurer vf havg pheerag sybjvat npebff gurz.
0wedrifid
Buggrit. Build a grid of resistors a few meters square and pull out the multimeter.
0wnoise
That works fairly well, as things converge quickly.
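For anyone who wants to try the "big grid and multimeter" approach numerically rather than with actual hardware, here is a minimal sketch (assuming Python with NumPy; the grid sizes and the knight's-move node pair are illustrative choices, not anything specified in the thread):

```python
import numpy as np

def grid_resistance(N, a, b):
    """Effective resistance between nodes a and b of an N x N grid of 1-ohm resistors."""
    idx = lambda i, j: i * N + j
    L = np.zeros((N * N, N * N))
    # Build the graph Laplacian: one 1-ohm link between each pair of adjacent nodes.
    for i in range(N):
        for j in range(N):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < N and nj < N:
                    u, w = idx(i, j), idx(ni, nj)
                    L[u, u] += 1; L[w, w] += 1
                    L[u, w] -= 1; L[w, u] -= 1
    current = np.zeros(N * N)
    current[idx(*a)] = 1.0    # inject 1 A here ("source")...
    current[idx(*b)] = -1.0   # ...and pull it out here ("sink")
    # The Laplacian is singular (potential is only defined up to a constant),
    # so pin the corner node's potential to zero (assumes a and b are interior nodes).
    L[0, :] = 0.0
    L[0, 0] = 1.0
    current[0] = 0.0
    volts = np.linalg.solve(L, current)
    # Voltage drop per ampere of injected current = effective resistance.
    return volts[idx(*a)] - volts[idx(*b)]

# Two nodes a knight's move apart, near the centre of progressively larger grids:
for N in (11, 21, 41):
    c = N // 2
    print(N, grid_resistance(N, (c, c), (c + 2, c + 1)))
```

The printed values settle quickly as the grid grows, which is the sense in which the physical version "converges quickly".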
0RobinZ
Wait, so if I want to solve it myself, I shouldn't read the text in the great-grandparent of this comment?
2wnoise
Well, yes, that's why I rot13d it. I'll unrot13 the beginning which will provide a clear warning.
2cupholder
I'm not wnoise, but yeah, you probably wouldn't want to read the (now great-)great-grandparent. Put it this way: the first 5 words are 'The canonical method is to.' (I read it anyway 'cuz I'm spoiler resistant. I don't think my math/EE aptitude is enough to carry out the method wnoise gives.)
0RobinZ
Thanks!
0Cyan
I know little about it, but if I knew how to compute equivalent resistances beyond the basics of resistors in parallel and in series, I'd fbyir n ohapu bs rire-ynetre svavgr tevqf, fbeg bhg gur trareny rkcerffvba sbe na A-ol-Z tevq jvgu gur gnetrg abqrf nyjnlf ng gur pragre, naq gura gnxr gur yvzvg nf A naq Z tb gb vasvavgl.
0RobinZ
You can try Xvepubss'f pvephvg ynjf, but at the moment I'm thinking of nffhzvat gung nyy pheeragf sybj njnl sebz zl fbhepr naq qrgrezvavat ybjre naq hccre yvzvgf ba gur pheeragf V arrq onfrq ba gubfr vardhnyvgvrf.
3wedrifid
At this rate I'm going to be proficient at reading rot13 within a week!
0RobinZ
I'm intentionally not reading anything in rot13 and always using the electronic translator, with hopes that I will not become proficient.
0Cyan
The problem doesn't say anything about sources, so I'm not sure what I'm supposed to assume for voltage or current. Can you recommend a good instructional primer? I need something more than Wikipedia's infodump presentation.
0RobinZ
I'm using the term as a metaphor from fluid dynamics - n fbhepr vf n abqr va juvpu pheerag vf vawrpgrq jvgubhg rkcyvpvgyl gnxvat vg bhg naljurer ryfr - orpnhfr gur flfgrz vf vasvavgr, pheerag nqqrq ng n abqr pna sybj bhg gb vasvavgl gb rssrpgviryl pbzcyrgr gur pvephvg, naq orpnhfr gur rdhngvbaf ner flzzrgevp, gur svany fbyhgvba sbe gur pheerag (sebz juvpu Buz'f ynj sbe ibygntr naq ntnva sbe rssrpgvir erfvfgnapr) pna or gubhtug bs nf gur fhz bs n cbfvgvir fbhepr ng bar raqcbvag naq n artngvir fbhepr (n fvax) ng gur bgure. I don't know how this compares to wnoise's canonical method - it might be that this is a less effective path to the solution.
0wnoise
That is definitely part of the method.
0Cyan
I can see why my idea is incompatible with your approach.
1Jack
Mathematical ability seems to be a high sensitivity test for this. I cannot recall ever meeting someone who I concluded was smarter than me who was not also able to solve and understand math problems I cannot. But it seems to have a surprisingly low specificity-- people who are significantly better at math than me (and this includes probably everyone with a degree in a math heavy discipline) are still strangely very stupid. Hypotheses:

1. The people who are better at math than me are actually smarter than me; I'm too dumb to realize it.

2. Intelligence has pretty significant domain variability and I happen to be especially low in mathematical intelligence relative to everything else.

3. My ADHD makes learning math especially hard; perhaps I'm quite good at grasping mathematical concepts but lack the discipline to pick up the procedural knowledge others have.

4. Lots of smart people compartmentalize their intelligence; they can't or won't apply it to areas other than math. (Don't know if this differs from #2 except that it makes the math people sound bad instead of me.)

Ideas?
1RobinZ
The easiest of your hypotheses to examine is 1: can you describe (suitably anonymized, of course) three* of these stupid math whizzes and the evidence you used to infer their stupidity?

* I picked "three" because more would be (a) a pain and (b) too many for a comment.
3Jack
Of course the problem is the most memorable examples are also the easiest cases.

1: Dogmatic catholic, knew for a long time without ever witnessing her doing anything clever.

2: As a nuclear physicist I assume this guy is considerably better at math than I am. But this is probably bad evidence as I only know about him because he is so stupid. But there appear to be quite a few scientists and engineers that hold superstitious and irrational beliefs: witness all the credentialed creationists.

3: Instead of just picking someone, take the Less Wrong commentariat. I suspect all but a handful of the regular commenters know more math than I do. I'm not especially smarter than anybody here. Less Wrong definitely isn't dumb. But I don't feel like I'm at the bottom of the barrel either. My sense is that my intellect is roughly comparable to the average Less Wrong commenter even though my math skills aren't. I would say the same about Alicorn, for example. She seems to compare just fine though she's said she doesn't know a lot of math.

Obviously this isn't a case of people being good at math and being dumb, but it is a case of people being good at math while not being definitively smarter than I am.
6RobinZ
I suspect that "smarter" has not been defined with sufficient rigor here to make analysis possible.
0jimmy
I'm going with number 2 on this one (possibly a result of doing 4 either 'actively' or 'passively'). I have a very high error rate when doing basic math and am also quite slow (maybe even before accounting for fixing errors). People whose ability to understand math tops out at basic calculus can still beat me on algebra tests. This effect is increased by the fact that, due to mathematica and such, I have no reason to store things like the algorithm for doing polynomial long division. It takes more time and errors to rederive it on the spot.

At the higher levels of math there were people in my classes who were significantly better at it than I, and at the time it seemed like they were just better than me at math in every way. Another classmate and I (who seem to be relative peers at 'math') would consistently be better at "big picture" stuff, forming analogies to other problems, and just "seeing" (often actually using the visual cortex) the answer where they would just crank through math and come out with the same answer 3 pages of neat handwriting later.

As of writing this, the alternative (self-serving) hypothesis has come up that maybe those that I saw as really good at math weren't innately better than me (except for having a lower error rate and possibly being faster) at math, but had just put more effort into it and committed more tricks to memory. This is consistent with the fact that these were the kids that were very studious, though I don't know how much of the variance that explains.
0Mass_Driver
If you can't ever recall meeting someone who you concluded was smarter than you who wasn't good at X, and you didn't use any kind of objective criteria or evaluation system to reach that conclusion, then you're probably (consciously or otherwise) incorporating X into your definition of "smarter." There's a self-promotion trap here -- you have an incentive to act like the things you're good at are the things that really matter, both because (1) that way you can credibly claim that you're at least as smart as most people, and (2) that way you can justify your decision to continue to focus on activities that you're good at, and which you probably enjoy. I think the odds that you have fallen into this self-promotion trap are way higher than the odds for any of your other hypotheses. If you haven't already, you may want to check out the theory of multiple intelligences and the theory of intelligence as information processing
0Hook
It's not really all that simple, and it's domain specific, but having someone take the keyboard while pair programming helped to show me that one person in particular was far smarter than me. I was in a situation where I was just trying to keep up enough to catch the (very) occasional error.
0Morendil
Teach me something I didn't know.
0wedrifid
Really? You're easily impressed. I can't think of one teacher from my first 12 years of education that I am confident is smarter than me. I'd also be surprised if not a single one of the people I have taught was ever smarter than me (and hence mistaken if they apply the criteria you propose). But then, I've already expressed my preference for associating 'smart' with fluid intelligence rather than crystal intelligence. Do you actually mean 'knows more stuff' when you say 'smarter'? (A valid thing to mean FWIW, just different to me.)
0Morendil
They were smarter than you then, in the topic area in which you learned something from them. When you've caught up with them, and you start being able to teach them instead of them teaching you, that's a good hint that you're smarter in that topic area. When you're able to teach many people about many things, you're smart in the sense of being able to easily apply your insights across multiple domains. The smartest person I can conceive of is the person able to learn by themselves more effectively than anyone else can teach them. To achieve that they must have learned many insights about how to learn, on top of insights about other domains.
2wedrifid
It sounds like you do mean (approximately) 'knows more stuff' when you say 'smarter', with the aforementioned difference in nomenclature, and quite probably in values, relative to me.
0Morendil
I don't think that's a fair restatement of my expanded observations. It depends on what you mean by "stuff" - I definitely disagree if you substitute "declarative knowledge" for it, and this is what "more stuff" tends to imply. If "stuff" includes all forms of insight as well as declarative knowledge, then I'd more or less agree, with the provision that you must also know the right kind of stuff, that is, have meta-knowledge about when to apply various kinds of insights. I quite like the frame of Eliezer's that "intelligence is efficient cross-domain optimization", but I can't think of a simple test for measuring optimization power. The demand for "the simplest credible way" sounds suspiciously like it's asking for a shortcut to assessing optimization power. I doubt that there is such a shortcut. Lacking such a shortcut, a good proxy, or so it seems to me, is to assess what a person's optimization power has gained them: if they possess knowledge or insights that I don't, that's good evidence that they are good at learning. If they consistently teach me things (if I fail to catch up to them), they're definitely smarter. So each thing they teach me is (probabilistic) evidence that they are smarter. Hence my use of "teach me something" as a unit of evidence for someone being smarter.
0wedrifid
That's reasonable. I don't mean to reframe your position as something silly, rather I say that I do not have a definition of 'smarter' for which the below is true: I agree with what you say here: ..but with a distinct caveat of all else being equal. ie. If I deduce that someone has x amount of more knowledge than me then that can be evidence that they are not smarter than me if their age or position is such that they could be expected to have 2x more knowledge than me. So in the 'my teachers when I was 8' category it would be a mistake (using my definition of 'smarter') to make the conclusion: "They were smarter than you then, in the topic area in which you learned something from them".

Just a thought about the Litany of Tarski - be very careful to recognize that the "not" is a logical negation. If the box contains not-a-diamond your assumption will likely be that it's empty. The frog that jumps out when you open it will surprise you!

The mind falls easily into oppositional pairs of X and opposite-of-X (which isn't the same as the more comprehensive not-X), and once you create categorizations, you'll have a tendency to under-consider outcomes that don't categorize.

Might as well move/call attention to the thing about the macroscopic quantum superposition right away, so we can talk about it here.

3bogdanb
I was wondering: Would something like this be expected to have any kind of visible effect? (Their object is at the limit of bare-eye visibility in favorable lighting,* but suppose that they can expand their results by a couple orders of magnitude.) From “first principles” I’d expect that the light needed to actually look at the thing would collapse the superposition (in the sense of first entangling the viewer with the object, so as to perceive a single version of it in every branch, and then with the rest of the universe, so each world-branch would contain just a “classical” observation). But then again one can see interference patterns with diffracted laser light, and I’m confused about the distinction.

[eta:] For example, would coherent light excite the object enough to break the superposition, or can it be used to exhibit, say, different diffraction patterns when diffracted on different superpositions of the object?

[eta2:] Another example: if the object’s wave-function has zero amplitude over a large enough volume, you should be able to shine light through that volume just as through empty space (or even send another barely-macroscopic object through). I can’t think of any configuration where this distinguishes between the superposition and simply the object being (classically) somewhere else, though; does anyone?

(IIRC, their resonator’s size was cited as “about a billion atoms”, which turns out as a cube with .02µm sides for silicon; when bright light is shined at a happy angle, depending on the background, and especially if the thing is not cubical, you might just barely see it as a tiny speck. With an optical microscope (not bare-eyes, but still more intuitive than a computer screen) you might even make out its approximate shape. I used to play with an atomic-force microscope in college: the cantilever was about 50µm, and I could see it with ease; I don’t remember ever having seen the tip itself, which was about the scale we’re talking about, but it m
1Mitchell_Porter
Luboš Motl writes: "it's hard to look at it while keeping the temperature at 20 nanokelvin - light is pretty warm." My quick impression of how this works: You have a circuit with electrons flowing in it (picture). At one end of the circuit is a loop (Josephson junction) which sensitizes the electron wavefunctions to the presence of magnetic field lines passing through the loop. So they can be induced into superpositions - but they're just electrons. At the other end of the circuit, there's a place where the wire has a dangly hairpin-shaped bend in it. This is the resonator; it expands in response to voltage. So we have a circuit in which a flux detector and a mechanical resonator are coupled. The events in the circuit are modulated at both ends - by passing flux through the detector and by beaming microwaves at the resonator. But the quantum measurements are taken only at the flux detector site. The resonator's behavior is inferred indirectly, by its effects on the quantum states in the flux detector to which it is coupled. The quantum states of the resonator are quantized oscillations (phonons). A classical oscillation consists of something moving back and forth between two extremes. In a quantum oscillation, you have a number of wave packets (peaks in the wavefunction) strung out between the two extremal positions; the higher the energy of the oscillation, the greater the number of peaks. Theoretically, such states are superpositions of every classical position between the two extremes. This discussion suggests how the appearance of classical oscillation emerges from the distribution of peaks. So you should imagine that the little hairpin-bend part of the circuit is getting into superpositions like that, in which the elements of the superposition differ by the elongation of the hairpin; and then this is all coupled to electrons in the loop at the other end of the circuit. I think this is all quite relevant for quantum biology (e.g. proteins in superposition)
1Nick_Tarleton
Every source I've seen (e.g.) gives the resonator as flat, some tens of µm long, and containing ~a trillion atoms.
-3JamesAndrix
Duh, it would be exactly like the agents in The Matrix.

If you had to tile the universe with something - something simple - what would you tile it with?

6Clippy
Paperclips.
6RobinZ
I have no interest in tiling the universe with anything - that would be dull. Therefore I would strive to subvert the spirit of such a restriction as effectively as I could. Off the top of my head, pre-supernova stars seem like adequate tools for the purpose.
4Mitchell_Porter
Are you sure that indiscriminately creating life in this fashion is a good thing?
2RobinZ
No, but given the restrictions of the hypothetical it's on my list of possible courses of action. Were there any possibility of my being forced to make the choice, I would definitely want more options than just this one to choose from.
4Mitchell_Porter
Can the tiles have states that change and interact?
-1Alicorn
Only if that doesn't violate the "simple" condition.
0ata
What counts as simple? If something capable of serving as a cell in a cellular automaton would count as simple enough, I'd choose that. And I'd design it to very occasionally malfunction and change states at random, so that interesting patterns could spontaneously form in the absence of any specific design.
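A minimal sketch of the sort of rule being described, assuming nothing beyond the comment itself: Conway's Life (an arbitrary choice of automaton) plus a small per-cell chance of flipping state each step. The rule, grid size, and error rate are all illustrative assumptions.

```python
import numpy as np

def noisy_life_step(grid, error_rate=1e-4, rng=None):
    """One step of Conway's Life in which each cell may also 'malfunction' and flip at random."""
    rng = np.random.default_rng() if rng is None else rng
    # Count the eight neighbours of every cell (toroidal wrap-around).
    neighbours = sum(
        np.roll(np.roll(grid, di, axis=0), dj, axis=1)
        for di in (-1, 0, 1) for dj in (-1, 0, 1)
        if (di, dj) != (0, 0)
    )
    survives = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    born = (grid == 0) & (neighbours == 3)
    new = (survives | born).astype(grid.dtype)
    # The occasional malfunction: flip a cell's state with a tiny probability.
    flips = rng.random(grid.shape) < error_rate
    return np.where(flips, 1 - new, new)

# Even starting from an empty universe, the noise eventually seeds live cells
# from which structure can grow without any specific design.
world = np.zeros((64, 64), dtype=int)
for _ in range(10_000):
    world = noisy_life_step(world)
```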
2Alicorn
Basically, the "simple" condition was designed to elicit answers more along the lines of "paperclips!" or "cheesecake!", rather than "how can I game the system so that I can have interesting stuff in the universe again after the tiling happens?" You're not playing fair if you do that. I find this an interesting question because while it does seem to be a consensus that we don't want the universe tiled with orgasmium, it also seems intuitively obvious that this would be less bad than tiling the universe with agonium or whatever you'd call it; and I want to know what floats to the top of this stack of badness.
4Clippy
Mission accomplished! c=@ Now, since there seems to be a broad consensus among the posters that paperclips would be the optimal thing to tile the universe with, how about we get to work on it?
0[anonymous]
Hold on, we still haven't settled on 'paperclips' over 'miniature smiley faces' and 'orgasmium'. Jury is still out. ;)
2wedrifid
And that is a good thing. Long live the munchkins of the universe!
0RobinZ
I think orgasmium is significantly more complex than cheesecake. Possibly complex enough that I could make an interesting universe if I were permitted that much complexity, but I don't know enough about consciousness to say.
4Peter_de_Blanc
Cheesecake is made of eukaryotic life, so it's pretty darn complex.
6wedrifid
Hmm... a universe full of cheesecake will have enough hydrogen around to form stars once the cheesecakes attract each other, with further cheesecake forming into planets that are a perfect breeding ground for life, already seeded with DNA and RNA!
0RobinZ
Didn't think of that. Okay, orgasmium is significantly more complex than paperclips.
0wnoise
What? It's products of eukaryotic life. Usually the eukaryotes are dead. Though plenty of microorganisms immediately start colonizing. Unless you mean the other kind of cheesecake.
0Peter_de_Blanc
I suppose that the majority of the cheesecake does not consist of eukaryotic cells, but there are definitely plenty of them in there. I've never looked at milk under a microscope but I would expect it to contain cells from the cow. The lemon zest contains lemon cells. The graham cracker crust contains wheat. Dead cells would not be much simpler than living cells.
3jimrandomh
Copies of my genome. If I can't do anything to affect the utility function I really care about, then I might as well optimize the one evolution tried to make me care about instead. (Note that I interpret 'simple' as excluding copies of my mind, simulations of interesting universes, and messages intended for other universes that simulate this one to read, any of which would be preferable to anything simple.)
2JGWeissman
I have no preferences within the class of states of the universe that do not, and cannot evolve to, contain consciousness. But if, for example, I was put in this situation by a cheesecake maximizer, I would choose something other than cheesecake.
2Alicorn
Interesting. Just to be contrary?
6JGWeissman
Because, as near as I can calculate, UDT advises me to. Like what Wedrifid said. And like Eliezer said here: And here: I am assuming that an agent powerful enough to put me in this situation can predict that I would behave this way.
3wedrifid
It also potentially serves decision-theoretic purposes. Much like a Duchess choosing not to pay off her blackmailer. If it is assumed that a cheesecake maximiser has a reason to force you into such a position (rather than doing it himself) then it is not unreasonable to expect that the universe may be better off if Cheesy had to take his second option.
0byrnema
I can't recall: do your views on consciousness have a dualist component? If consciousness is in some way transcendental (that is, as a whole somehow independent or outside of the material parts), then I understand valuing it as, for example, something that has interesting or unique potential. If you are not dualistic about consciousness, could you describe why you value it more than cheesecake?
0JGWeissman
No, I am not a dualist. To be precise, I value positive conscious experience more than cheesecake, and negative conscious experience less than cheesecake. I assign value to things according to how they are experienced, and consciousness is required for this experience. This has to do with the abstract properties of conscious experience, and not with how it is implemented, whether by mathematical structure of physical arrangements, or by ontologically basic consciousness.
2Matt_Simpson
me (i'm assuming I'll be broken down as part of the tiling process, so this preserves me)
3wedrifid
Damn. If only I was simple, I could preserve myself that way too! ;)
1wedrifid
Witty comics. (eg)
0[anonymous]
The words "LET US OUT" in as many languages as possible.
0Rain
Isn't the universe already tiled with something simple in the form of fundamental particles?
2JGWeissman
In a tiled universe, the universe is partitioned into a grid of tiles, and the same pattern is repeated exactly in every tile, so that if you know what one tile looks like, you know what the entire universe looks like.
0Jack
A sculpture of stars, nebulae and black holes whose beauty will never be admired by anyone. ETA: If this has too little entropy to count as simple--- well whatever artwork I can get away with I'll take.
0Kevin
Computronium

Here's a way to short-circuit a particular sort of head-banging argument.

Statements may seem simple, but they actually contain a bunch of presuppositions. One way an argument can go wrong is A says something, B disagrees, A is mistaken about exactly what B is disagreeing with, and neither of them can figure out why the other is so pig-headed about something obvious.

I suggest that if there are several rounds of A and B saying the same things at each other, it's time for at least one of them to pull back and work on pinning down exactly what they're disagreeing about.

[-][anonymous]50

Survey question:

If someone asks you how to spell a certain word, does the word appear in your head as you're spelling it out for them, or does it seem to come out of your mouth automatically?

If it comes out automatically, would you describe yourself as being adept at language (always finding the right word to describe something, articulating your thoughts easily, etc.) or is it something you struggle with?

I tend to have trouble with words - it can take me a long time (minutes) to recall the proper word to describe something, and when speaking I frequently ... (read more)

3wedrifid
I never see words. I feel them. Great with syntax. Access to specific words tends to degrade as I get fatigued or stressed. That is, I can 'feel' the word there and know the nuances of the meaning it represents but cannot bring the actual sounds or letters to mind.
3prase
I often have trouble finding the proper words, both in English and my native language, but I have no problems with spelling - I can say it automatically. This may be because I have learned English by reading and therefore the words are stored in my memory in their written form, but generally I suspect, from personal experience, that the ability to recall spelling and the ability to find the proper word are unrelated.
0jimrandomh
I can visualize sentences, paragraphs, or formatted code, but can't zoom in as far as individual words; when I try I get a verbal representation instead. I usually can't read over misspelled words (or wrong words, like its vs. it's) without stopping. When this happens, it feels like hearing someone mispronounce a word. When spelling a word aloud, it comes out pretty much automatically (verbal memory) with no perceptible intermediate steps. I would describe myself as adept with language.
0MendelSchmiedekamp
In retrospect, spelling words out loud, something I do tend to do with a moderate frequency, is something I've gotten much better at over the past ten years. I suspect that I've hijacked my typing skill to the task, as I tend to error correct my verbal spelling in exactly the same way. I devote little or no conscious thought or sense mode to the spelling process, except in terms of feedback. As for my language skills, they are at least adequate. However, I have devoted special attention to improving them so I can't say that I don't share some bias away from being especially capable.
0Kevin
I'm adept at language and I never visualize letters or words in my head. I think in pronounced/internally spoken words, so when I spell something aloud I think the letters to myself as I am saying them.
0FAWS
This is turning interesting. Sensory type of access to spelling information by poster:

hegemonicon: verbal (?) (visual only with great difficulty)
Hook: mechanical
FAWS: mechanical, visual
prase: verbal (???)
NancyLebovitz: visual
Morendil: visual
mattnewport: visual, mechanical (?)
Rain: mechanical (???)
Kevin: verbal (???) (never visual)

Is there anyone who doesn't fall into at least one of those three categories?
0Rain
When I spell out a word, I don't visualize anything. Using words in conversation, typing, or writing is also innate - they flow through without touching my consciousness. This is another aspect of my maxim, "my subconscious is way smarter than I am." It responds quickly and accurately, at any rate. I consider myself to be adept at the English language, and more objective evidence bears that out. I scored 36/36 on the English portion of the ACT, managed to accumulate enough extra credit through correcting my professor in my college level writing class that I didn't need to take the final, and many people have told me that I write very well in many different contexts (collaborative fiction, business reports, online forums, etc.). I would go so far as to say that if I make an effort to improve on my communication by the use of conscious thought, I do worse than when I "feel it out."
0mattnewport
I have pretty good language skills, and I think I am above average at both spelling in my own writing and spotting spelling mistakes when reading, but I do not find it particularly easy to spell a difficult word out loud; it is a relatively effortful process, unlike reading or writing, where it is largely automatic and effortless. With longer words I feel like short-term memory limitations make it difficult to spell the word out; for a difficult word I try to visualize the text and 'read off' the spelling, but that can be taxing for longer words. I may end up having to write it down in order to be sure the spelling is correct and to be able to read it out. Growing up in England I was largely unaware of the concept of a spelling bee, so this is not a skill I ever practiced to any great extent.
0Morendil
My experience of spelling words is quite visual (in contrast to my normal thinking style, which suggests that if "thinking styles" exist they are not monolithic): I literally have the visual representation of the word floating in my head. (I can tell it really is visual because I can give details, such as what kind of font - serif - or what color - black - they are: the words as they'd appear in a book.) I'd also describe my spelling skill as "automatic", i.e. I can usually instantly spot whether a word is "right" or "not right". I absolutely cannot stand misspellings (including mine - I have the hardest time when writing fast because I must instantly go back and correct any typos, rather than let them be), and they tend to leap out of the page; most people appear to have an ability to ignore typos that I lack. (For instance, I often get a kick out of spotting typos on the freakin' front page of national magazines, and when I point them out I mostly get blank stares or "Oh, you're right" - people just don't notice!) I'd self-describe as adept at language. (ETA: upvoted for a luminous question.)
1[anonymous]
After a bit of self-experimentation, I've concluded that I almost (but not quite) completely lack any visual experience accompanying anything verbal. Even when I self-prompt, telling myself to spell a word, nothing really appears by default (though I can make an image of the word appear with a bit of focus, it's very difficult to try to 'read' off of it). I wonder how typical (or atypical) this is.
0h-H
quite typical.
0NancyLebovitz
Do you get any visual images when you read?
0[anonymous]
Not generally, no, for either fiction or non-fiction. This may be why I've never been able to relate to the sense of getting 'lost' inside a book - they've never been as evocative for me as they seem to be for others.
0NancyLebovitz
If I'm trying to spell a word out loud and it's difficult for me, it appears in my head, but not necessarily as a whole, and I'll be checking as I go. This is interesting; I'd swear the words are appearing more clearly in my mind now that you've brought this up.

I'm pretty adept at straightforward language. I can usually find the words I want, and if I'm struggling to find a word, sometimes it's a memory glitch, but it's more likely to be pinning down a meaning. Sometimes I can't find a word, and I blame the language. There doesn't seem to be an English word which includes planning as part of an action.

I do creative work with words on a small scale-- coming up with button slogans, and sometimes I'm very fast. (Recent examples: "Life is an adventure-- bring a webcam", "God is watching. It's a good thing He's easily amused.")

"Effortlessly" might be a better word than "automatically", but it took me a couple of minutes to think of it.

Writers typically need to revise. You might be able to become more facile, but it's also possible that you're imagining that other writers have it easier than you do. Also, you may be slowing yourself down more than necessary by trying to get it right at the beginning. Many people find they're better off letting the very first draft just flow, and then doing the editing. I use a mixed strategy-- I edit somewhat on the first pass, aiming for satisfactory, and then I go over it a time or two for improvements. I suspect that really fast writers (at least of the kind I see online) are doing most of their writing by presenting things they've already thought out.
0[anonymous]
Yeah, it's difficult to seperate out what's related to abstract thought (as opposed to language), what's typical a typical language difficulty, and what's a quirk of my particular brain. It is somewhat telling that your (and everyone elses) response doesn't fit neatly into my 'appears in your head or not' dichotomy.
0FAWS
Neither, really. For simple frequent words I remember each letter individually, but otherwise I either have to write it down using the mechanical memory to retrieve the spelling or I have to look at the complete word to test whether it looks right. I can test a spelling by imagining how it looks, but that's not as reliable as seeing it with my eyes, and of course writing it down and then looking at it is always best (short of looking it up of course).
0Hook
Spelling a word out loud is an infrequent task for me. I have to simulate writing or typing it and then dictate the result of that simulation. I would characterize myself as adept at language. Choosing the appropriate words comes easily to me, and I don't think this skill is related to spelling bee performance.
0[anonymous]
My focus isn't so much on the spelling per se, but how much conscious thought 'comes along for the ride' while it's being done.

If any aspiring rationalists would like to try and talk a Stage IV cancer patient into cryonics... good luck and godspeed. http://www.reddit.com/r/IAmA/comments/bj3l9/i_was_diagnosed_with_stage_iv_cancer_and_am/c0n1kin?context=3

1Kevin
I tried, it didn't work. Other people can still try! I didn't want to give the hardest possible sell because survival rates for Stage IV breast cancer are actually really good.

Nature doesn't grade on a curve, but neither does it punish plagiarism. Is there some point at which someone who's excelled beyond their community would gain more by setting aside the direct pursuit of personal excellence in favor of spreading what they've already learned to one or more apprentices, then resuming the quest from a firmer foundation?

0Morendil
Teaching something to others is often a way of consolidating the knowledge, and I would argue that the pursuit of personal excellence usually requires sharing the knowledge at some point, and possibly on an ongoing basis. See e.g. Lave and Wenger's books on communities of practice and "learning as legitimate peripheral participation".
[-][anonymous]40

I really should probably think this out clearer, but I've had an idea a few days now that keeps leaving and coming back. So I'm going to throw the idea out here and if it's too incoherent, I hope either someone gets where I'm going or I come back and see my mistake. At worst, it gets down-voted and I'm risking karma unless I delete it.

Okay, so the other day I was discussing with a Christian friend who "agrees with micro-evolution but not macro-evolution." I'm assuming other people have heard this idea before. And I started to think about the idea... (read more)

3RobinZ
I'm sure it's a factor, but I suspect "it contradicts my religion" is the larger. Assuming that's not it: how often do mutations happen, how much time has passed, and how many mutations apart are different species? The first times the second should dwarf the third, at which point it's like that change-one-letter game. Yes, every step must be a valid word, but the 'limit' on how many tries is so wide that it's easy.
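A rough back-of-envelope version of the "first times the second dwarfs the third" point; every figure below is a loose, order-of-magnitude assumption chosen purely for illustration, not a sourced number:

```python
new_mutations_per_person = 50     # de novo mutations per individual per generation (rough)
population_size = 1e4             # breeding individuals in the lineage (rough)
generations = 3e5                 # roughly 6 million years at ~20 years per generation
observed_differences = 35e6       # roughly 1.2% of a ~3e9-base genome (human vs. chimp, rough)

mutations_supplied = new_mutations_per_person * population_size * generations
print(mutations_supplied / observed_differences)
# Prints roughly 4e3: thousands of mutational "tries" for every difference that
# actually needs explaining, which is what makes the change-one-letter game easy.
```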
2rwallace
Sounds likely to me. I don't know exactly what wording I'd use, but some food for thought: when Alfred Wallace independently rediscovered evolution, his paper on the topic was titled On the Tendency of Varieties to Depart Indefinitely from the Original Type. You can find the full text at http://www.human-nature.com/darwin/archive/arwallace.html - it's short and clear, and from my perspective offers a good approach to understanding why existing species are not ontologically fundamental.
1Nisan
That's a good idea; it's tempting to believe that a category is less fuzzy in reality than it really is. I would point out recent examples of speciation including the subtle development of the apple maggot, and fruit fly speciation in laboratory experiments. If you want to further mess with their concept of species, tell them about ring species (which are one catastrophe away from splitting into two species).
[-]Cyan40

Daniel Dennett and Linda LaScola have written a paper about five non-believing members of the Christian clergy. Teaser quote from one of the participants:

I think my way of being a Christian has many things in common with atheists as [Sam] Harris sees them. I am not willing to abandon the symbol ‘God’ in my understanding of the human and the universe. But my definition of God is very different from mainline Christian traditions yet it is within them. Just at the far left end of the bell shaped curve.

Don't know if this will help with cryonics or not, but it's interesting:

Induced suspended animation and reanimation in mammals (TED Talk by Mark Roth)

[Edited to fix broken link]

Monica Anderson: Anyone familiar with her work? She apparently is involved with AI in the SF Bay area, and is among the dime-a-dozen who have a Totally Different approach to AI that will work this time. She made this recent slashdot post (as "technofix") that linked a paper (PDF WARNING) that explains her ideas and also linked her introductory site and blog.

It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.

4Bo102010
From the site: Man, you just know it's going to be a fun read...
2khafra
I didn't notice this thread, but ran across Anderson on a facebook group and asked about her site in another thread. JoshuaZ wrote a good analysis.

I know it's no AI of the AGI kind but what do folks think of this? It certainly beats the pants off any of the stuff I was doing my AI research on...

2Mass_Driver
Looks like a step in the right direction -- kind of obvious, but you do need both probabilistic reasoning and rules to get reality-interpretation.
0Richard_Kennaway
Looks like the usual empty promises to me.

What's the best way to respond to someone who insists on advancing an argument that appears to be completely insane? For example, someone like David Icke who insists the world is being run by evil lizard people? Or your friend the professor who thinks his latest "breakthrough" is going to make him the next Einstein but, when you ask him what it is, it turns out to be nothing but gibberish, meaningless equations, and surface analogies? (My father, the professor, has a friend, also a professor, who's quickly becoming a crank on the order of the Tim... (read more)

7wnoise
If you have no emotional or other investment, the best thing to do is not engage.
7CronoDAS
http://xkcd.com/154/
0[anonymous]
Well, yes, the "if" is critical.
1CannibalSmith
When rational argument fails, fall back to dark arts. If that fails, fall back to damage control (discredit him in front of others). All that assuming it's worth the trouble.

I have a line of thinking that makes me less worried about unfriendly AI. The smarter an AI gets, the more it is able to follow its utility function. Where the utility function is simple or the AI is stupid, we have useful things like game opponents.

But as we give smarter AI's interesting 'real world' problems, the difference between what we asked for and what we want shows up more explicitly. Developers usually interpret this as the AI being stupid or broken, and patch over either the utility function or the reasoning it led to. These patches don't lead t... (read more)

2Vladimir_Nesov
Rather, where the utility function is simple AND the program is stupid. Paperclippers are not useful things. Reinforcement-based utility definition plus difficult games with well-defined winning conditions seems to constitute a counterexample to this principle (a way of doing AI that won't hit the wall you described). This could function even on top of supplemental ad-hoc utility function building, as in chess, where a partially hand-crafted utility function over specific board positions is an important component of chess-playing programs -- you'd just need to push the effort to a "meta-AI" that is only interested in the real winning conditions.
2JamesAndrix
I was thinking of current top chess programs as smart (well above average humans), with simple utility functions. This is a good example, but it might not completely explain it away.

Can we, by hand or by algorithm, construct a utility function that does what we want, even when we know exactly what we want? I think you could still have a situation in which a smarter agent does worse because its learned utility function does not match the winning conditions (its learned utility function would constitute a created subgoal of "maximize reward"). Learning about the world and constructing subgoals would probably be part of any near-human AI. I don't think we have a way to construct reliable subgoals, even with a rules-defined supergoal and perfect knowledge of the world. (Such a process would be a huge boon for FAI.) Likewise, I don't think we can be certain that the utility functions we create by hand would reliably lead a high-intelligence AI to seek the goal we want, even for well-defined tasks.

A smarter agent might have the advantage of learning the winning conditions faster, but if it is comparatively better at implementing a flawed utility function than it is at fixing its utility function, then it could be outpaced by stupider versions, and you're working more in an evolutionary design space. So I think it would hit the same kind of wall, at least in some games.
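A toy sketch of the failure mode I mean (my own construction, with a deliberately silly proxy utility, not anything from real game-playing programs): an agent that is better at maximizing a flawed learned utility function can score worse on the true winning condition than a weaker optimizer of the same flawed function.

```python
# Toy sketch, not a real chess engine: the "true" goal is to end a ten-move
# game at position 3, but the learned proxy utility just says "further right
# is better". A stronger optimizer of the proxy does worse on the true goal.
import random

TRUE_GOAL = 3

def proxy_utility(pos):
    return pos  # flawed learned utility: rightward is always "better"

def play(optimizer_strength, trials=2000):
    wins = 0
    for _ in range(trials):
        pos = 0
        for _ in range(10):
            # a stronger optimizer samples more candidate moves and picks the proxy-best
            candidates = [random.choice((-1, 0, 1)) for _ in range(optimizer_strength)]
            pos += max(candidates, key=lambda m: proxy_utility(pos + m))
        wins += (pos == TRUE_GOAL)
    return wins / trials

print("weak optimizer:  ", play(1))  # near-random walk; sometimes ends at 3
print("strong optimizer:", play(8))  # reliably drifts toward 10; almost never 3
```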
0Vladimir_Nesov
I meant the AI to be limited to the formal game universe, which should be easily feasible for non-superintelligent AIs. In this case, smarter agents always have an advantage: maximization of reward is the same as the intended goal. Thinking deeply until you get eaten by a sabertooth is not smart.
0JamesAndrix
Answer is here; thinking out loud is below.

If you give the AI a perfect utility function for a game, it still has to break down subgoals and seek those. You don't have a good general theory for making sure your generated subgoals actually serve your supergoals, but you've tweaked things enough that it's actually very good at achieving the 'midlevel' things. When you give it something more complex, it improperly breaks down the goal into faulty subgoals that are ineffective or contradictory, and then effectively carries out each of them. This yields a mess. At this point you could improve some of the low-level goal-achievement and do much better at a range of low-level tasks, but this wouldn't buy you much on the complex tasks, and might just send you further off track. If you understand that the complex subgoals are faulty, you might be able to re-patch it, but this might not help you solve different problems of similar complexity, let alone more complex problems.

What led me to this answer: there may not be a trade-off at play here. For example: at each turn you give the AI indefinite time and memory to learn all it can from the information it has so far, and to plan (limited by your patience and budget, but let's handwave that computation resources are cheap, and every turn the AI comes in well below its resource limit). You have a fairly good move optimizer that can achieve a wide range of in-game goals, and a reward modeler that tries to learn what it is supposed to do and updates the utility function. But how do they know how to maximize reward? I was assuming they have to learn the reward criteria. If they have a flawed concept of that criteria, they will seek non-reward. If the utility function is one and the same as winning, then the (see top)
0Vladimir_Nesov
End-of-conversation status: I don't see a clear argument, and failing that, I can't take confidence in a clear lawful conclusion (AGI hits a wall). I don't think this line of inquiry is worthwhile.
[-]ata20

I'm looking for a quote I saw on LW a while ago, about people who deny the existence of external reality. I think it was from Eliezer, and it was something like "You say nothing exists? Fine. I still want to know how the nothing works."

Anyone remember where that's from?

4SilasBarta
Coincidentally, I was reading the quantum non-realism article when writing my recent understanding your understanding article, and that's where it's from -- though he mentions it actually happened in a previous discussion and linked to it. The context in the LW version is: (I was actually inspired by that to say something similar in response to an anti-reductionist's sophistry on another site, but that discussion's gone now.)
0ata
Ah, thanks.

Hello. Do people here generally take the anthropic principle as strong evidence against a positive singularity? If we take it that in the future it would be good to have many happy people (say, using most of the available matter to make sure this happens), then we'd get an enormous number of happy people. However, we are not any of those happy people. We're living in pre-singularity times, and this seems to be strong evidence that we're going to face a negative singularity.
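Roughly, the update I have in mind looks like this (a minimal sketch with made-up numbers, assuming something like the Self-Sampling Assumption, which is itself contested):

```python
# Illustrative numbers only -- the point is the shape of the anthropic update,
# not the particular figures, and it all assumes the Self-Sampling Assumption.
prior_positive = 0.5          # prior probability of a positive vs negative singularity
observers_so_far = 1e11       # assumed: roughly everyone who has lived to date
observers_if_positive = 1e25  # assumed: total observers given a positive singularity
observers_if_negative = 1e11  # assumed: almost no one comes after a negative one

# probability of finding yourself among the pre-singularity observers
p_early_given_positive = observers_so_far / observers_if_positive
p_early_given_negative = observers_so_far / observers_if_negative

posterior_positive = (prior_positive * p_early_given_positive) / (
    prior_positive * p_early_given_positive
    + (1 - prior_positive) * p_early_given_negative)
print(posterior_positive)     # ~1e-14: being early looks like strong evidence against
```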

0Kevin
The simulation argument muddles the issue from my perspective. There's more to weigh than just the anthropic principle.
0Jonii
How?

This is pretty pathetic, at least if honestly reported. (A heavily reported study's claim to show harmful effects from high-fructose corn syrup in rats is based on ambiguous, irrelevant, or statistically insignificant experimental results.)

0RobinZ
I'm reading the paper now, and I see in the "Methods" section: which the author of the blog post apparently does not acknowledge. I'll grant that the study may be overblown, but it is not as obviously flawed as I believe the blogger suggested.

How do Bayesians look at formal proofs in formal specifications? Do they believe "100%" in them?

7ata
You can believe that it leads to a 100%-always-true-in-every-possible-universe conclusion, but the strength of your belief should not be 100% itself. The difference is crucial. Good posts on this subject are How To Convince Me That 2 + 2 = 3 and Infinite Certainty. (The followup, 0 And 1 Are Not Probabilities, is a worthwhile explanation of the mathematical reasons that this is the case.)
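As a toy illustration (numbers pulled out of the air, purely for the shape of the argument): even if every step of a formal proof is valid, your confidence that the proof was stated and checked without error can't be exactly 1, and that bounds your confidence in the conclusion.

```python
# Toy numbers, chosen for illustration only: confidence in a theorem is bounded
# by the chance that none of the proof/checking steps contains a mistake.
p_error_per_step = 1e-6   # assumed chance of an undetected error at any given step
steps = 10_000            # assumed length of the formalized proof

p_no_error_anywhere = (1 - p_error_per_step) ** steps
print(p_no_error_anywhere)  # ~0.99: very high, but not 1.0, and it never can be
```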
0murat
Thank you for the links. It makes sense now.

Cryonics in popular culture:

4cousin_it
I still fail to see how Bayesian methods eliminate fluke results or publication bias.

Is independent AI research likely to continue to be legal?

At this point, very few people take the risks seriously, but that may not continue forever.

This doesn't mean that it would be a good idea for the government to decide who may do AI research and with what precautions, just that it's a possibility.

If there's a plausible risk, is there anything specific SIAI and/or LessWrongers should be doing now, or is building general capacity by working to increase ability to argue and to live well (both the anti-akrasia work and luminosity) the best path?

5khafra
Outlawing AI research was successful in Dune, but unsuccessful in Mass Effect. But I've never seen AI research fictionally outlawed until it's done actual harm, and I see no reason to expect a different outcome in reality. It seems a very unlikely candidate for the type of moral panic that tends to get unusual things outlawed.
5SilasBarta
Fictional evidence should be avoided. Also, this subject seems ripe for a moral panic, i.e., "these guys are making Terminator".
1h-H
How would it be stopped if it were illegal? Unless information tech suddenly goes away, it's impossible.
2khafra
NancyLebovitz wasn't suggesting that the risks of UFAI would be averted by legislation; rather, that such legislation would change the research landscape, and make it harder for SIAI to continue to do what it does--preparation would be warranted if such legislation were likely. I don't think it's likely enough to be worth dedicating thought and action to, especially thought and action which would otherwise go toward SIAI's primary goals.
2NancyLebovitz
Bingo. That's exactly what I was concerned about. You're probably right that there's no practical thing to be done now. I'm sure you'd know very quickly if restrictions on independent AI research were being considered. The more I think about it, the more I think a specialized self-optimizing AI (or several such, competing with each other) could do real damage to the financial markets, but I don't know if there are precautions for that one.
3NancyLebovitz
I've been thinking about that, and I believe you're right that laws typically don't get passed against hypothetical harms, and also that AI research isn't the kind of thing that's enough fun to think about to set off a moral panic. However, I'm not sure whether real harm that society can recover from is a possibility. I'm basing the possibility on two premises: that a lot of people thinking about AI aren't as concerned about the risks as SIAI is, and that computer programs are frequently gotten to the point where they work somewhat. Suppose that a self-improving AI breaks the financial markets -- there might just be efforts to protect the markets, or AI itself might become an issue.
1cousin_it
Witchcraft? Labeling of GM food?
2NancyLebovitz
Those are legitimate examples. I think overreaction to rare events (like the difficulties added to travel and the damage to the rights of suspects after 9/11) is more common, but I can't prove it.
0RobinZ
Some kinds of GM food cause different allergic reactions than their ancestral cultivars. I think you can justifiably care to a similar extent as you care about the difference between a Gala apple and a Golden Delicious apple. Edit: Granted, most of the reaction is very much overblown.
0Kevin
I'm pretty sure Eliezer commented publicly on this and I think his answer was that it doesn't make sense to outlaw AI research. 10 free karma to whoever can find the right link.
4Vladimir_Nesov
The question was, "Is independent AI research likely to continue to be legal?". What Eliezer considers a reasonable policy isn't necessarily related to what government considers a reasonable policy. Though I think the answer to both questions is the same, for unrelated reasons.
3Nick_Tarleton
AI as a Positive and Negative Factor in Global Risk (section 10) discusses this. More obsoletely, so do CFAI (section 4) and several SL4 posts (e.g. this thread from 2003).

First Clay Millennium Prize goes to Grigoriy Perelman

http://news.ycombinator.com/item?id=1202591

-12Singularity7337

Tricycle has a page up called Hacking on Less Wrong which describes how to get your very own copy of Less Wrong running on your computer. (You can then invite all your housemates to register and then go mad with power when you realize you can ban/edit any of their comments/posts. Hypothetically, I mean. Ahem.)

I've updated it a bit based on my experience getting it to run on my machine. If I've written anything terribly wrong, someone let me know =)

0Jack
This would be an interesting classroom tool.

Nanotech robots deliver gene therapy through blood

http://www.reuters.com/article/idUSTRE62K1BK20100321

What Would You Do With 48 Cores? (essay contest)

http://blogs.amd.com/work/2010/03/03/48-cores-contest/

5bogus
That's actually a very interesting question. You'd want a problem which:

1. is either embarrassingly parallel or large enough to get a decent speedup,
2. involves a fair amount of complex branching and logic, such that GPGPU would be unsuitable,
3. cannot be efficiently solved by "shared nothing", message-passing systems, such as Beowulf clusters and grid computing.

The link also states that the aim should be "to help society, to help others" and to "make the world a better, more interesting place". Here's a start; in fact, many of these problems are fairly relevant to AI.
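A minimal sketch of the kind of workload that fits criteria 1 and 2 above (embarrassingly parallel, heavy data-dependent branching), though not criterion 3; all the names, chunk sizes, and ranges here are arbitrary illustrative choices:

```python
# Sketch only: a branch-heavy, embarrassingly parallel search that scales across
# many CPU cores but maps poorly onto lockstep GPU execution. All parameters
# (chunk size, worker count, search range) are arbitrary illustrative choices.
from concurrent.futures import ProcessPoolExecutor

def collatz_length(n):
    """Steps for n to reach 1 under the Collatz map -- lots of divergent branching."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def longest_in_range(bounds):
    lo, hi = bounds
    return max(range(lo, hi), key=collatz_length)

if __name__ == "__main__":
    chunks = [(i, i + 100_000) for i in range(1, 4_800_001, 100_000)]
    with ProcessPoolExecutor(max_workers=48) as pool:  # one worker per core
        candidates = pool.map(longest_in_range, chunks)
        best = max(candidates, key=collatz_length)
    print(best, collatz_length(best))
```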
3CannibalSmith
Finally get to play Crysis. Write a real time ray tracer.

Random observation: type in the first few letters of 'epistemic' and Google goes straight to suggesting 'epistemological anarchism'. It seems Google is right on board with helping SMBC further philosophical education.

Does anyone know which arguments have been made about the ETA of strong AI on the scale of "is it more likely to be 30, 100, or 300 years?"

Michael Arrington: "It’s time for a centralized, well organized place for anonymous mass defamation on the Internet. Scary? Yes. But it’s coming nonetheless."

http://techcrunch.com/2010/03/28/reputation-is-dead-its-time-to-overlook-our-indiscretions/

0Jack
Meh. I think Arrington and this company are overestimating the market. JuicyCampus went out of business for a reason, and they had the advantage of actually targeting existing social scenes instead of isolated individuals. Here is how my campus's JuicyCampus page looked over time (apologies for crudeness):

omg! we have a juicycampus page!
Who is the hottest girl? (Some names)
Who is the hottest guy? (Some names)
Alex Soandso is a slut!
Shut up Alex is awesome and you have no friends!
Who are the biggest players? (Some names)
Who had sex with a professor? (No names)
Who is the hottest professor? (A few names)
Who has the biggest penis?
...

This lasted for about a week. The remaining six months until JuicyCampus shut down consisted of parodies about how awesome Steve Holt is and awkward threads obviously contrived by employees of JuicyCampus trying to get students to keep talking.

Because these things are uncensored, the signal-to-noise ratio is just impossible to deal with. Plus, for this to be effective you would have to be routinely looking up everyone you know. I guess you could have accounts that tracked everyone you knew... but are you really going to show up on a regular basis just to check? It does look like some of these gossip sites have been successful with high schools, but those are far more insular and far more gossip-centered places than the rest of the world.
0Kevin
I'll be very surprised if this particular company is a success, but I don't think it's an impossible problem and I think there is probably some sort of a business execution/insight that could make such a company a very successful startup. The successful versions of companies in this space will look a lot more like reputational economies and alternative currencies than marketplaces for anonymous libel like JuicyCampus.
[-][anonymous]00

So, while in the shower, an idea for an FAI came into my head.

My intuition tells me that if we manage to entirely formalize correct reasoning, the result will have a sort of adversarial quality: you can "prove" statements, but these proofs can be overturned by stronger disproofs. So, I figured that if you simply told two (or more) AGIs to fight over one database of information, the most rational AGI would be able to set the database to contain the correct information. (Another intuition of mine tells me that FAI is a problem of rationality: once ... (read more)

1ata
That... doesn't sound right at all. It does sound like how people intuitively think about proof/reasoning (even people smart enough to be thinking about things like, say, set theory, trying to overturn Cantor's diagonal argument with a counterexample without actually discovering a flaw in the theorem), and how we think about debates (the guy on the left half of the screen says something, the guy on the right says the opposite, and they go back and forth taking turns making Valid Points until the CNN anchor says "We'll have to leave it there" and the viewers are left agreeing with (1) whoever agreed with their existing beliefs, or, if neither, (2) whoever spoke last). But even if our current formal understanding of reasoning is incomplete, we know it's not going to resemble that.

Yes, Bayesian updating will cause your probability estimates to fluctuate up and down a bit as you acquire more evidence, but the pieces of evidence aren't fighting each other; they're collaborating on determining what your map should look like and how confident you should be. Why would we build AGI to have "pet truths", to engage in rationalization rather than rationality, in the first place?
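As a small illustration of "collaborating rather than fighting" (the likelihood ratios below are invented): evidence for and against simply multiplies into one posterior, and the order of presentation doesn't matter.

```python
# Invented likelihood ratios, purely to show the mechanics: evidence doesn't
# "take turns overturning" earlier evidence; it all multiplies into one posterior,
# and shuffling the order changes nothing.
import math

prior_odds = 1.0                                # start at 50/50
likelihood_ratios = [4.0, 0.5, 3.0, 0.25, 2.0]  # >1 favors the claim, <1 counts against

posterior_odds = prior_odds * math.prod(likelihood_ratios)
posterior_probability = posterior_odds / (1 + posterior_odds)
print(posterior_probability)                    # 0.75, regardless of ordering
```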
1[anonymous]
Yeah. So if one guy presents only evidence in favor, and the other guy presents only evidence against, they're adversaries. One guy can state a theory, show that all existing evidence supports it, and thereby have "proved" it, and then the other guy can state an even better theory, also supported by all the evidence but simpler, thereby overturning that proof. We wouldn't do it on purpose!
0bogus
Game semantics works somewhat like this; a proof is formalized as an "argument" between a Proponent and an Opponent. If an extension of game semantics to probabilistic reasoning exists, it will work much like the 'theory of uncertain arguments' you mention here.
0[anonymous]
I seem to have man-with-a-hammer syndrome, and my hammer is economics. Luckily, I'm using economics as a tool for designing stuff, not for understanding stuff; there is no One True Design the way there's a One True Truth.

Could use some comment thread ringers here: http://news.ycombinator.com/item?id=1218075

[-][anonymous]00

This is what non-reductionism looks like:

In a certain world, it's possible to build stuff. For example, you can build a ship. You build it out of some ingredients, such as wood, and by doing a bit of work. The thing is, though, there's only one general method that can possibly be used to build a ship, and there are some things you can do that are useful only for building a ship. You have some freedom within this method: for example, you can give your ship 18 masts if you want to. However, the way you build the ship has literally nothing to do with the end res... (read more)

Let's say Omega opens a consulting service, but, for whatever reason, has sharply limited bandwidth, and insists that the order in which questions are presented be determined by some sort of bidding process. What questions would you ask, and how much would you be willing to pay per byte for the combined question and response?

0Sly
How many know about this, and are games such as the lottery and sports betting still viable? Lottery numbers / stock changes seem like the first-impression answer to me.
0Strange7
It's public knowledge. Omega is extraordinarily intelligent, but not actually omniscient, and 'I don't know' is a legitimate answer, so casinos, state lotteries, and so on would pay exorbitant amounts for a random-number generator that couldn't be cost-effectively predicted. Sports oddsmakers and derivative brokers, likewise, would take the possibility of Omega's advice into account.

Fictional representation of an artificial intelligence which does not value self-preservation, and the logical consequences thereof.

This will be completely familiar to most of us here, but "What Does a Robot Want?" seems to rederive a few of Eliezer's comments about FAI and UFAI in a very readable way - particularly those from Points of Departure. (Which, for some reason, doesn't seem to be included in any indexed sequence.)

The author mentions using these ideas in his novel, Free Radical - I can attest to this, having enjoyed it partly for that reason.

People gathering here mostly assume that evolution is slow and stupid, no match for intelligence at all; that a human, let alone a superintelligence, is several orders of magnitude smarter than the process which created us over the last several billion years.

Well, despite many fancy mathematical theories of packing, some of the best results have come from so-called digital evolution, where the only knowledge is that "overlapping is bad and a smaller frame is good". Everything else is random change and nonrandom selection.

Every previously intelligently developed solution is quickly re-evolved, "stupidly", from scratch here: http://critticall.com/SQU_cir.html
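Here is roughly what such a digital-evolution loop looks like (a bare-bones sketch of my own, not the Critticall program itself; the constants are arbitrary):

```python
# Bare-bones "random change, nonrandom selection" for packing unit circles into a
# square frame. The scorer knows only that overlap is bad and a smaller frame is
# good; everything else is mutation plus keeping whatever scores better.
import math
import random

N_CIRCLES = 5          # arbitrary illustrative choice
STEPS = 200_000

def score(centers, side):
    penalty = 0.0
    for i, (x, y) in enumerate(centers):
        # circles poking out of the frame count as "overlap" with the frame
        penalty += max(0.0, 1 - x) + max(0.0, 1 - y)
        penalty += max(0.0, x + 1 - side) + max(0.0, y + 1 - side)
        for j in range(i + 1, N_CIRCLES):
            penalty += max(0.0, 2 - math.dist(centers[i], centers[j]))
    return side + 10 * penalty     # smaller frame good, overlap bad

centers = [(random.uniform(1, 4), random.uniform(1, 4)) for _ in range(N_CIRCLES)]
side = 5.0
best = score(centers, side)
for _ in range(STEPS):
    mutant = [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05)) for x, y in centers]
    mutant_side = side + random.gauss(0, 0.02)
    s = score(mutant, mutant_side)
    if s < best:                   # nonrandom selection: keep only improvements
        centers, side, best = mutant, mutant_side, s

print(round(side, 3))              # shrinks toward (though not always reaching) a tight packing
```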

Does anyone have any spare money on In Trade? The new Osama Bin Laden contract is coming out and I would like to buy some. If anyone has some money on In Trade, I would pay a 10% premium.

Also, is there anyone here who thinks the In Trade Osama contracts are priced too highly? http://www.intrade.com/jsp/intrade/contractSearch/index.jsp?query=Osama+Bin+Laden+Conclusion

Here's a puzzle that involves time travel:

Suppose you have just built a machine that allows you to see one day into the future. Suppose also that you are firmly committed to realizing the particular future that the machine will show you. So if you see that the lights in your workshop are on tomorrow, you will make sure to leave them on; if they are off, you will make sure to leave them off. If you find the furniture rearranged, you will rearrange the furniture. If there is a cow in your workshop, you will spend the next 24 hours getting a cow into your wor... (read more)

Can't answer until I know the laws of time travel.

No, seriously. Is the resulting universe randomly selected from all possible self-consistent ones? By what weighting? Does the resulting universe look like the result of iteration until a stable point is reached? And what about quantum branching?

Considering that all I know of causality and reality calls for non-circular causal graphs, I do feel a bit of justification in refusing to just hand out an answer.

2cousin_it
Why is something like this an acceptable answer here, but not in Newcomb's Problem or Counterfactual Mugging?
4Vladimir_Nesov
Because it's clear what the intended clarification of these experiments is, but less so for time travel. When the thought experiments are posed, the goal is not to find the answer to some question, but to understand the described situation, which might as well involve additionally specifying it.
2Nick_Tarleton
I can't imagine what you would want to know more about before giving an answer to Newcomb. Do you think Omega would have no choice but to use time travel?
2cousin_it
No, but the mechanism Omega uses to predict my answer may be relevant to solving the problem. I have an old post about that. Also see the comment by Toby Ord there.
0Morendil
Because these don't involve time travel, but normal physics?
0Nick_Tarleton
He did say "something like this", not "this".
0Nisan
I could tell you that time travel works by exploiting closed time-like curves in general relativity, and that quantum effects haven't been tested yet. But yes, that wouldn't be telling you how to handle probabilities. So, it looks like this is a situation where the prior you were born with is as good as any other.
4Alicorn
Why am I firmly committed to realizing the future the machine shows? Do I believe that to be contrary would cause a paradox and explode the universe? Do I believe that I am destined to achieve whatever is foretold, and that it'll be more pleasant if I do it on purpose instead of forcing fate to jury-rig something at the last minute? Do I think that it is only good and right that I do those things which are depicted, because it shows the locally best of all possible worlds? In other words, what do I hypothetically think would happen if I weren't fully committed to realizing the future shown?
1Sniffnoy
Agree with the question of why you would be doing this; sounds like optimizing on the wrong thing. Supposing that it showed me having won the lottery and having a cow in my workshop, it seems silly to suppose that bringing a cow into my workshop will help me win the lottery. We can't very well suppose that we were always wanting to have a cow in our workshop, else the vision of the future wouldn't affect anything.
0Nisan
I stipulated that you're committed to realizing the future because otherwise, the problem would be too easy. I'm assuming that if you act contrary to what you see in the machine, fate will intervene. So if you're committed to being contrary, we know something is going to occur to frustrate your efforts. Most likely, some emergency is going to occur soon which will keep you away from your workshop for the next 24 hours. This knowledge alone is a prior for what the future will hold.
1wedrifid
Depends on the details of the counter-factual science. Does not depend on my firm commitment.
0Nisan
I was thinking of a closed time-like curve governed by general relativity, but I don't think that tells you anything. It should depend on your commitment, though.

So healthcare passed. I guess that means the US goes bankrupt a bit sooner than I'd expected. Is that a good or a bad thing?

1Kevin
I think you're being overly dramatic. Nate Silver has some good numerical analysis here: http://www.fivethirtyeight.com/2009/12/why-progressives-are-batshit-crazy-to.html

I don't think that US government debt has much connection to reality any more. The international macroeconomy wizards seem to make things work. Given their track record, I am confident that the financial wizards can continue to make a fundamentally unsustainable balance sheet sustainable, at least until the Singularity. So I think that the marginal increase in debt from the bill is a smaller risk to the stability of the USA than maintaining the very flawed status quo of healthcare in the USA.

Useful question: When does the bill go into effect? My parents' insurance is kicking me off at the end of the month and it will be nice to be able to stay on it for a few more years.

On a serious note, what is your (the reader's) favorite argument against a forum?

("I voted you down because this is not a meta thread." is also a valid response.)

8CannibalSmith
The voting system is of utmost importance and I'd rather be inconvenienced by the current system than have a karma-free zone on this site.
1Larks
General risk-aversion; LW is a city on a hill, and the only one, so we should be very wary of fiddling unnecessarily.
3SilasBarta
Aha! The LW brand of conservatism!
1Larks
Having said that, there are differences between my view and the one mattnewport mentioned. I don't necessarily believe the institutions exist for a good reason; it's not obviously related to the accumulated wisdom of ages or anything. Rather, out of all the internet sites out there, some were bound to turn out well. However, given this, it makes more sense for those to maintain their current standards and for others to copy and implement their ideas, and then experiment, than for Atlas to experiment with better ways of holding up the world. For the same reason we worry about existential risks to humanity, we should worry about existential risks to LW. Of course, this argument is strong in proportion to how exceptional LW is, and weak in proportion to how close other sites are.
[+]roland-70