Open Thread: March 2010, part 3

3 Post author: RobinZ 19 March 2010 03:14AM

The previous open thread has now exceeded 300 comments – new Open Thread posts may be made here.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (254)

Comment author: SilasBarta 31 March 2010 10:03:07PM *  2 points [-]

Monica Anderson: Anyone familiar with her work? She apparently is involved with AI in the SF Bay area, and is among the dime-a-dozen who have a Totally Different approach to AI that will work this time. She made this recent slashdot post (as "technofix") that linked a paper (PDF WARNING) explaining her ideas, and also linked her introductory site and blog.

It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.

Comment author: Bo102010 31 March 2010 11:42:21PM 2 points [-]

From the site:

I define a "Bizarre Domain" as a problem domain that has all of these four properties: It is Chaotic, it requires a Holistic Stance, it contains Ambiguity, and it exhibits Emergent Properties. I examine sixteen kinds of problems that fall into these four categories.

Man, you just know it's going to be a fun read...

Comment author: khafra 27 September 2011 07:20:42PM 1 point [-]

I didn't notice this thread, but ran across Anderson on a facebook group and asked about her site in another thread. JoshuaZ wrote a good analysis.

Comment author: wedrifid 31 March 2010 04:50:24AM 2 points [-]

I know it's no AI of the AGI kind but what do folks think of this? It certainly beats the pants off any of the stuff I was doing my AI research on...

Comment author: Mass_Driver 31 March 2010 03:11:44PM 1 point [-]

Looks like a step in the right direction -- kind of obvious, but you do need both probabilistic reasoning and rules to get reality-interpretation.

Comment author: RichardKennaway 31 March 2010 09:12:02AM 0 points [-]

Looks like the usual empty promises to me.

Comment author: Alicorn 31 March 2010 12:38:19AM 3 points [-]

If you had to tile the universe with something - something simple - what would you tile it with?

Comment author: Clippy 31 March 2010 01:29:23AM 4 points [-]

Paperclips.

Comment author: RobinZ 31 March 2010 12:51:09AM 4 points [-]

I have no interest in tiling the universe with anything - that would be dull. Therefore I would strive to subvert the spirit of such a restriction as effectively as I could. Off the top of my head, pre-supernova stars seem like adequate tools for the purpose.

Comment author: Mitchell_Porter 31 March 2010 02:15:56AM 2 points [-]

Are you sure that indiscriminately creating life in this fashion is a good thing?

Comment author: RobinZ 31 March 2010 02:33:57AM 1 point [-]

No, but given the restrictions of the hypothetical it's on my list of possible courses of action. Were there any possibility of my being forced to make the choice, I would definitely want more options than just this one to choose from.

Comment author: jimrandomh 31 March 2010 03:35:28AM *  2 points [-]

If you had to tile the universe with something - something simple - what would you tile it with?

Copies of my genome. If I can't do anything to affect the utility function I really care about, then I might as well optimize the one evolution tried to make me care about instead.

(Note that I interpret 'simple' as excluding copies of my mind, simulations of interesting universes, and messages intended for other universes that simulate this one to read, any of which would be preferable to anything simple.)

Comment author: Mitchell_Porter 31 March 2010 12:44:03AM 2 points [-]

Can the tiles have states that change and interact?

Comment author: Alicorn 31 March 2010 12:47:19AM 0 points [-]

Only if that doesn't violate the "simple" condition.

Comment author: ata 31 March 2010 01:37:17AM *  0 points [-]

What counts as simple?

If something capable of serving as a cell in a cellular automaton would count as simple enough, I'd choose that. And I'd design it to very occasionally malfunction and change states at random, so that interesting patterns could spontaneously form in the absence of any specific design.
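
ata's proposal can be sketched concretely. A minimal version, using Conway's Life rules plus rare random state flips (the rule choice, grid size, and mutation rate are my own illustrative assumptions, not anything ata specified):

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily, for repeatability

def step(grid, flip_prob=1e-4):
    """One generation of Conway's Life on a toroidal grid, where each cell
    independently 'malfunctions' (flips state) with probability flip_prob,
    playing the role of ata's occasional random state changes."""
    # count the 8 neighbors of every cell, with wraparound
    nbrs = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)
               if (i, j) != (0, 0))
    new = (nbrs == 3) | (grid & (nbrs == 2))
    # rare malfunctions, so structure can arise even from a blank tiling
    flips = rng.random(grid.shape) < flip_prob
    return new ^ flips

grid = np.zeros((64, 64), dtype=bool)  # start "tiled" with all-dead cells
ever_alive = False
for _ in range(1000):
    grid = step(grid)
    ever_alive = ever_alive or bool(grid.any())
```

Starting from a perfectly uniform (all-dead) tiling, the random flips eventually seed live cells, and the Life rules then generate non-trivial patterns from them, which is roughly the spirit of the proposal.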

Comment author: Alicorn 31 March 2010 01:41:07AM *  1 point [-]

Basically, the "simple" condition was designed to elicit answers more along the lines of "paperclips!" or "cheesecake!", rather than "how can I game the system so that I can have interesting stuff in the universe again after the tiling happens?" You're not playing fair if you do that.

I find this an interesting question because while it does seem to be a consensus that we don't want the universe tiled with orgasmium, it also seems intuitively obvious that this would be less bad than tiling the universe with agonium or whatever you'd call it; and I want to know what floats to the top of this stack of badness.

Comment author: Clippy 31 March 2010 02:01:05AM 2 points [-]

Basically, the "simple" condition was designed to elicit answers more along the lines of "paperclips!"

Mission accomplished! c=@

Now, since there seems to be a broad consensus among the posters that paperclips would be the optimal thing to tile the universe with, how about we get to work on it?

Comment author: wedrifid 31 March 2010 04:56:58AM 1 point [-]

Basically, the "simple" condition was designed to elicit answers more along the lines of "paperclips!" or "cheesecake!", rather than "how can I game the system so that I can have interesting stuff in the universe again after the tiling happens?" You're not playing fair if you do that.

And that is a good thing. Long live the munchkins of the universe!

Comment author: RobinZ 31 March 2010 01:55:03AM 0 points [-]

I think orgasmium is significantly more complex than cheesecake. Possibly complex enough that I could make an interesting universe if I were permitted that much complexity, but I don't know enough about consciousness to say.

Comment author: Peter_de_Blanc 31 March 2010 04:55:11AM 2 points [-]

Cheesecake is made of eukaryotic life, so it's pretty darn complex.

Comment author: wedrifid 31 March 2010 05:07:35AM *  5 points [-]

Hmm... a universe full of cheesecake will have enough hydrogen around to form stars once the cheesecakes attract each other, with further cheesecake forming into planets that are a perfect breeding ground for life, already seeded with DNA and RNA!

Comment author: RobinZ 31 March 2010 11:00:02AM *  0 points [-]

Didn't think of that. Okay, orgasmium is significantly more complex than paperclips.

Comment author: wnoise 31 March 2010 05:29:39AM 0 points [-]

What? It's products of eukaryotic life. Usually the eukaryotes are dead. Though plenty of microorganisms immediately start colonizing.

Unless you mean the other kind of cheesecake.

Comment author: Peter_de_Blanc 31 March 2010 07:42:41PM 0 points [-]

I suppose that the majority of the cheesecake does not consist of eukaryotic cells, but there are definitely plenty of them in there. I've never looked at milk under a microscope but I would expect it to contain cells from the cow. The lemon zest contains lemon cells. The graham cracker crust contains wheat. Dead cells would not be much simpler than living cells.

Comment author: JGWeissman 31 March 2010 01:51:04AM 1 point [-]

I have no preferences within the class of states of the universe that do not, and cannot evolve to, contain consciousness.

But if, for example, I was put in this situation by a cheesecake maximizer, I would choose something other than cheese cake.

Comment author: Alicorn 31 March 2010 04:17:40AM 1 point [-]

Interesting. Just to be contrary?

Comment author: JGWeissman 31 March 2010 04:56:54AM 4 points [-]

Because, as near as I can calculate, UDT advises me to. Like what Wedrifid said.

And like Eliezer said here:

Or the Countess just decides not to pay, unconditional on anything the Baron does. Also, if the Baron ends up in an infinite loop or failing to resolve the way the Baron wants to, that is not really the Countess's problem.

And here:

As I always press the "Reset" button in situations like this, I will never find myself in such a situation.

EDIT: Just to be clear, the idea is not that I quickly shut off the AI before it can torture simulated Eliezers; it could have already done so in the past, as Wei Dai points out below. Rather, because in this situation I immediately perform an action detrimental to the AI (switching it off), any AI that knows me well enough to simulate me knows that there's no point in making or carrying out such a threat.

I am assuming that an agent powerful enough to put me in this situation can predict that I would behave this way.

Comment author: wedrifid 31 March 2010 04:32:13AM 2 points [-]

It also potentially serves decision-theoretic purposes, much like a Duchess choosing not to pay off her blackmailer. If it is assumed that a cheesecake maximiser has a reason to force you into such a position (rather than doing it himself), then it is not unreasonable to expect that the universe may be better off if Cheesy had to take his second option.

Comment author: byrnema 01 April 2010 12:47:25PM *  0 points [-]

I can't recall: do your views on consciousness have a dualist component? If consciousness is in some way transcendental (that is, as a whole somehow independent or outside of the material parts), then I understand valuing it as, for example, something that has interesting or unique potential.

If you are not dualistic about consciousness, could you describe why you value it more than cheesecake?

Comment author: JGWeissman 02 April 2010 01:17:42AM 0 points [-]

No, I am not a dualist.

If you are not dualistic about consciousness, could you describe why you value it more than cheesecake?

To be precise, I value positive conscious experience more than cheesecake, and negative conscious experience less than cheesecake.

I assign value to things according to how they are experienced, and consciousness is required for this experience. This has to do with the abstract properties of conscious experience, and not with how it is implemented, whether by mathematical structure of physical arrangements, or by ontologically basic consciousness.

Comment author: Matt_Simpson 31 March 2010 01:35:21AM *  1 point [-]

me

(i'm assuming I'll be broken down as part of the tiling process, so this preserves me)

Comment author: wedrifid 31 March 2010 04:53:01AM 2 points [-]

Damn. If only I was simple, I could preserve myself that way too! ;)

Comment author: Rain 05 April 2010 03:57:55PM 0 points [-]

Isn't the universe already tiled with something simple in the form of fundamental particles?

Comment author: JGWeissman 05 April 2010 04:37:02PM 1 point [-]

In a tiled universe, the universe is partitioned into a grid of tiles, and the same pattern is repeated exactly in every tile, so that if you know what one tile looks like, you know what the entire universe looks like.

Comment author: Jack 31 March 2010 04:35:56AM *  0 points [-]

A sculpture of stars, nebulae and black holes whose beauty will never be admired by anyone.

ETA: If this has too little entropy to count as simple - well, whatever artwork I can get away with, I'll take.

Comment author: wedrifid 31 March 2010 04:24:18AM 0 points [-]

Witty comics. (eg)

Comment author: Kevin 31 March 2010 01:31:37AM 0 points [-]

Computronium

Comment author: Strange7 30 March 2010 08:28:58PM 5 points [-]

What would be the simplest credible way for someone to demonstrate that they were smarter than you?

Comment author: wedrifid 30 March 2010 08:45:02PM *  2 points [-]

If they disagree with me and I (eventually?) agree with them, three times in a row. Applies more to questions of logic than questions of knowledge.

Comment author: RobinZ 30 March 2010 09:06:01PM 0 points [-]

I'm not sure about the "three" or the "applies more to questions of logic than questions of knowledge", but yeah, pretty much. Smarts gets you to better answers faster.

Comment author: wedrifid 30 March 2010 09:16:14PM 2 points [-]

I'm not sure about the throwaway 'three' either, but the 'crystal vs fluid' distinction is something that holds if I am considering "demonstrate to me...". I find that this varies a lot based on personality. What people know doesn't impress me nearly as much as seeing how they respond to new information, including how they update their understanding in response.

Comment author: RobinZ 30 March 2010 09:26:00PM 1 point [-]

That makes sense. Those two bits are probably fairly good approximations to correct, but I can smell a possibility of better accuracy. (For example: "logic" is probably overspecific, and experience sounds like it should land on the "knowledge" side of the equation but drawing the correct conclusions from experience is an unambiguous sign of intelligence.)

I generally agree, I'm merely less-than-confident in the wording.

Comment author: wedrifid 31 March 2010 02:36:09AM 0 points [-]

but I can smell a possibility of better accuracy.

Definitely.

(For example: "logic" is probably overspecific

Ditto.

, and experience sounds like it should land on the "knowledge" side of the equation but drawing the correct conclusions from experience is an unambiguous sign of intelligence.)

Absolutely.

I generally agree, I'm merely less-than-confident in the wording.

So am I. Improve it for me?

Comment author: RobinZ 31 March 2010 03:14:34AM *  0 points [-]

I would quickly start believing someone was smart if they repeatedly drew conclusions that looked wrong, but which I would later discover are correct. I would believe they were smarter than me if, as a rule, whenever they and I are presented with a problem, they reach important milestones in the solution or dissolution of the problem quicker than I can, even without prior knowledge of the problem.

Concrete example: xkcd #356 includes a simple but difficult physics problem. After a long time (tens of minutes) beating my head against it, letting it stew (for months, at least), and beating my head against it again (tens of minutes), I'd gotten as far as getting a wrong answer and the first part of a method. Using nothing but a verbal description of the problem statement from me, my dad pulled out the same method, noting the problem with that method which I had missed when finding my wrong answer, within five minutes or so. While driving.

(I've made no progress past that insight - rot13: juvpu vf gung lbh pna (gel gb) fbyir sbe gur pheerag svryq sebz n "fbhepr" be "fvax" bs pheerag, naq gura chg n fbhepr-fvax cnve vagb gur argjbex naq nqq Buz'f-ynj ibygntrf gb trg gur erfvfgnapr - since the last time I beat my head against that problem, by the way.)
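
No spoilers for the analytic route, but the problem is at least easy to brute-force numerically: put the two nodes on a large but finite grid of 1-ohm resistors with insulating edges, inject a unit current at one and extract it at the other, and relax Kirchhoff's current law until the node voltages settle. The grid size, iteration count, and relaxation scheme here are my own choices, not anything from the thread:

```python
import numpy as np

def grid_resistance(n, a, b, iters=20000):
    """Approximate resistance between nodes a and b of an n x n grid of
    1-ohm resistors: inject +1 A at a and -1 A at b, then repeatedly
    enforce Kirchhoff's current law at every node (Jacobi iteration)."""
    v = np.zeros((n, n))
    inj = np.zeros((n, n))
    inj[a], inj[b] = 1.0, -1.0
    for _ in range(iters):
        p = np.pad(v, 1, mode='edge')  # 'edge' padding acts as an insulating boundary
        nbrs = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        v = (nbrs + inj) / 4.0
        v -= v.mean()  # pin down the arbitrary constant offset in the potential
    return v[a] - v[b]  # with 1 A flowing, the voltage drop is the resistance

# Sanity check: adjacent nodes on the infinite grid famously give 1/2 ohm.
r_adj = grid_resistance(41, (20, 20), (20, 21))

# The xkcd #356 pair, a knight's move apart (value not printed here, to
# avoid spoiling the puzzle).
r_knight = grid_resistance(41, (20, 20), (21, 22))
```

The finite boundary biases the answer slightly high, so in the spirit of Cyan's suggestion one can rerun with growing n and watch the value converge.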

Comment author: wedrifid 31 March 2010 03:21:58AM 0 points [-]

Bah. I was hoping your dad gave the actual answer. That's as far as I got too. :)

Comment author: RobinZ 31 March 2010 03:29:00AM 0 points [-]

He suggested fbyivat n frevrf grez-ol-grez zvtug or arprffnel but I didn't know precisely what he meant or how to do it.

Comment author: wnoise 31 March 2010 04:30:48AM *  0 points [-]

The canonical method is to nggnpu n pheerag qevire gb rirel abqr. Jevgr qbja gur Xvepubss'f ynj ynj rirel abqr va grezf bs gur vawrpgrq pheerag, gur ibygntr ng gung ybpngvba, naq gur ibygntr ng rnpu nqwnprag cbvag. Erjevgr gur nqwnprag ibygntrf va grezf bs genafyngvba bcrengbef, gura qb n (frzv-qvfpergr) Sbhevre genafsbez (gur qbznva vf vagrtref, pbqbznva obhaqrq serdhrapvrf, fb vg'f gur bccbfvgr bs n Sbhevre frevrf), chg va gur pbaqvgvbaf sbe n havg zntavghqr fbhepr naq fvax, naq vaireg vg, juvpu jvyy tvir lbh gur ibygntrf rireljurer. Gur qvssreraprf va ibygntrf orgjrra gur fbhepr naq fvax vf gur erfvfgnapr, orpnhfr gurer vf havg pheerag sybjvat npebff gurz.

Comment author: Cyan 31 March 2010 04:16:53AM 0 points [-]

I know little about it, but if I knew how to compute equivalent resistances beyond the basics of resistors in parallel and in series, I'd fbyir n ohapu bs rire-ynetre svavgr tevqf, fbeg bhg gur trareny rkcerffvba sbe na A-ol-Z tevq jvgu gur gnetrg abqrf nyjnlf ng gur pragre, naq gura gnxr gur yvzvg nf A naq Z tb gb vasvavgl.

Comment author: Hook 31 March 2010 04:42:03PM 0 points [-]

It's not really all that simple, and it's domain specific, but having someone take the keyboard while pair programming helped to show me that one person in particular was far smarter than me. I was in a situation where I was just trying to keep up enough to catch the (very) occasional error.

Comment author: Morendil 31 March 2010 03:15:27PM 0 points [-]

Teach me something I didn't know.

Comment author: wedrifid 31 March 2010 03:44:42PM 0 points [-]

Really? You're easily impressed. I can't think of one teacher from my first 12 years of education that I am confident is smarter than me. I'd also be surprised if not a single one of the people I have taught was ever smarter than me (and hence mistaken if they apply the criteria you propose). But then, I've already expressed my preference for associating 'smart' with fluid intelligence rather than crystal intelligence. Do you actually mean 'knows more stuff' when you say 'smarter'? (A valid thing to mean FWIW, just different to me.)

Comment author: Morendil 31 March 2010 03:54:36PM 0 points [-]

They were smarter than you then, in the topic area in which you learned something from them.

When you've caught up with them, and you start being able to teach them instead of them teaching you, that's a good hint that you're smarter in that topic area.

When you're able to teach many people about many things, you're smart in the sense of being able to easily apply your insights across multiple domains.

The smartest person I can conceive of is the person able to learn by themselves more effectively than anyone else can teach them. To achieve that they must have learned many insights about how to learn, on top of insights about other domains.

Comment author: wedrifid 31 March 2010 04:04:09PM *  1 point [-]

It sounds like you do mean (approximately) 'knows more stuff' when you say 'smarter', with the aforementioned difference from me in nomenclature, and quite probably in values.

Comment author: Morendil 31 March 2010 04:22:43PM *  0 points [-]

I don't think that's a fair restatement of my expanded observations. It depends on what you mean by "stuff" - I definitely disagree if you substitute "declarative knowledge" for it, and this is what "more stuff" tends to imply.

If "stuff" includes all forms of insight as well as declarative knowledge, then I'd more or less agree, with the provision that you must also know the right kind of stuff, that is, have meta-knowledge about when to apply various kinds of insights.

I quite like the frame of Eliezer's that "intelligence is efficient cross-domain optimization", but I can't think of a simple test for measuring optimization power.

The demand for "the simplest credible way" sounds suspiciously like it's asking for a shortcut to assessing optimization power. I doubt that there is such a shortcut. Lacking such a shortcut, a good proxy, or so it seems to me, is to assess what a person's optimization power has gained them: if they possess knowledge or insights that I don't, that's good evidence that they are good at learning. If they consistently teach me things (if I fail to catch up to them), they're definitely smarter. So each thing they teach me is (probabilistic) evidence that they are smarter.

Hence my use of "teach me something" as a unit of evidence for someone being smarter.

Comment author: wedrifid 01 April 2010 12:29:11AM 0 points [-]

I don't think that's a fair restatement of my expanded observations. It depends on what you mean by "stuff" - I definitely disagree if you substitute "declarative knowledge" for it, and this is what "more stuff" tends to imply.

That's reasonable. I don't mean to reframe your position as something silly, rather I say that I do not have a definition of 'smarter' for which the below is true:

They were smarter than you then, in the topic area in which you learned something from them.

When you've caught up with them, and you start being able to teach them instead of them teaching you, that's a good hint that you're smarter in that topic area.

I agree with what you say here:

The demand for "the simplest credible way" sounds suspiciously like it's asking for a shortcut to assessing optimization power. I doubt that there is such a shortcut. Lacking such a shortcut, a good proxy, or so it seems to me, is to assess what a person's optimization power has gained them: if they possess knowledge or insights that I don't, that's good evidence that they are good at learning. If they consistently teach me things (if I fail to catch up to them), they're definitely smarter. So each thing they teach me is (probabilistic) evidence that they are smarter.

..but with a distinct caveat of all else being equal. ie. If I deduce that someone has x amount of more knowledge than me then that can be evidence that they are not smarter than me if their age or position is such that they could be expected to have 2x more knowledge than me. So in the 'my teachers when I was 8' category it would be a mistake (using my definition of 'smarter') to make the conclusion: "They were smarter than you then, in the topic area in which you learned something from them".

Comment author: Jack 30 March 2010 11:03:26PM *  0 points [-]

Mathematical ability seems to be a high sensitivity test for this. I cannot recall ever meeting someone who I concluded was smarter than me who was not also able to solve and understand math problems I cannot. But it seems to have a surprisingly low specificity-- people who are significantly better at math than me (and this includes probably everyone with a degree in a math heavy discipline) are still strangely very stupid.

Hypotheses:

  1. The people who are better at math than me are actually smarter than me, I'm too dumb to realize it.
  2. Intelligence has pretty significant domain variability and I happen to be especially low in mathematical intelligence relative to everything else.
  3. My ADHD makes learning math especially hard, perhaps I'm quite good at grasping mathematical concepts but lack the discipline to pick up the procedural knowledge others have.
  4. Lots of smart people compartmentalize their intelligence, they can't or won't apply it to areas other than math. (Don't know if this differs from #2 except that it makes the math people sound bad instead of me)

Ideas?

Comment author: RobinZ 31 March 2010 12:02:46AM 1 point [-]

The easiest of your hypotheses to examine is 1: can you describe (suitably anonymized, of course) three* of these stupid math whizzes and the evidence you used to infer their stupidity?

* I picked "three" because more would be (a) a pain and (b) too many for a comment.

Comment author: Jack 31 March 2010 03:53:26AM *  2 points [-]

can you describe (suitably anonymized, of course) three* of these stupid math whizzes and the evidence you used to infer their stupidity

Of course the problem is the most memorable examples are also the easiest cases.

1: Dogmatic catholic, knew for a long time without ever witnessing her doing anything clever.

2: As a nuclear physicist I assume this guy is considerably better at math than I am. But this is probably bad evidence as I only know about him because he is so stupid. But there appear to be quite a few scientists and engineers that hold superstitious and irrational beliefs: witness all the credentialed creationists.

3: Instead of just picking someone, take the Less Wrong commentariat. I suspect all but a handful of the regular commenters know more math than I do. I'm not especially smarter than anybody here. Less Wrong definitely isn't dumb. But I don't feel like I'm at the bottom of the barrel either. My sense is that my intellect is roughly comparable to that of the average Less Wrong commenter even though my math skills aren't. I would say the same about Alicorn, for example. She seems to compare just fine though she's said she doesn't know a lot of math. Obviously this isn't a case of people being good at math and being dumb, but it is a case of people being good at math while not being definitively smarter than I am.

Comment author: RobinZ 31 March 2010 02:26:25PM 4 points [-]

I suspect that "smarter" has not been defined with sufficient rigor here to make analysis possible.

Comment author: jimmy 31 March 2010 07:31:33PM 0 points [-]

I'm going with number 2 on this one (possibly a result of doing 4 either 'actively' or 'passively').

I have a very high error rate when doing basic math and am also quite slow (maybe even before accounting for fixing errors). People whose ability to understand math tops out at basic calculus can still beat me on algebra tests. This effect is increased by the fact that, thanks to Mathematica and such, I have no reason to store things like the algorithm for polynomial long division. It takes more time and errors to rederive it on the spot.

At the higher levels of math there were people in my classes who were significantly better at it than I, and at the time it seemed like they were just better than me at math in every way. Another classmate and I (who seem to be relative peers at 'math') would consistently be better at "big picture" stuff, forming analogies to other problems, and just "seeing" (often actually using the visual cortex) the answer where they would just crank through math and come out with the same answer 3 pages of neat handwriting later.

As of writing this, the alternative (self-serving) hypothesis has come up that maybe those I saw as really good at math weren't innately better than me at math (except for having a lower error rate and possibly being faster), but had just put more effort into it and committed more tricks to memory. This is consistent with the fact that these were the kids who were very studious, though I don't know how much of the variance that explains.

Comment author: Mass_Driver 31 March 2010 03:28:29PM 0 points [-]

If you can't ever recall meeting someone who you concluded was smarter than you who wasn't good at X, and you didn't use any kind of objective criteria or evaluation system to reach that conclusion, then you're probably (consciously or otherwise) incorporating X into your definition of "smarter."

There's a self-promotion trap here -- you have an incentive to act like the things you're good at are the things that really matter, both because (1) that way you can credibly claim that you're at least as smart as most people, and (2) that way you can justify your decision to continue to focus on activities that you're good at, and which you probably enjoy.

I think the odds that you have fallen into this self-promotion trap are way higher than the odds for any of your other hypotheses.

If you haven't already, you may want to check out the theory of multiple intelligences and the theory of intelligence as information processing.

Comment author: ata 31 March 2010 01:32:48AM 1 point [-]

I'm looking for a quote I saw on LW a while ago, about people who deny the existence of external reality. I think it was from Eliezer, and it was something like "You say nothing exists? Fine. I still want to know how the nothing works."

Anyone remember where that's from?

Comment author: SilasBarta 31 March 2010 01:47:04AM *  2 points [-]

Coincidentally, I was reading the quantum non-realism article when writing my recent understanding your understanding article, and that's where it's from -- though he mentions it actually happened in a previous discussion and linked to it.

The context in the LW version is:

My attitude toward questions of existence and meaning was nicely illustrated in a discussion of the current state of evidence for whether the universe is spatially finite or spatially infinite, in which James D. Miller chided Robin Hanson:

"Robin, you are suffering from overconfidence bias in assuming that the universe exists. Surely there is some chance that the universe is of size zero."

To which I replied:

"James, if the universe doesn't exist, it would still be nice to know whether it's an infinite or a finite universe that doesn't exist."

Ha! You think pulling that old "universe doesn't exist" trick will stop me? It won't even slow me down!

It's not that I'm ruling out the possibility that the universe doesn't exist. It's just that, even if nothing exists, I still want to understand the nothing as best I can.

(I was actually inspired by that to say something similar in response to an anti-reductionist's sophistry on another site, but that discussion's gone now.)

Comment author: ata 31 March 2010 01:53:01AM 0 points [-]

Ah, thanks.

Comment author: Jonii 29 March 2010 12:14:40AM 1 point [-]

Hello. Do people here generally take the anthropic principle as strong evidence against a positive singularity? If we take it that in the future it would be good to have many happy people, say, using most available matter to make sure this happens, we'd get a great many happy people. However, we are not any of those happy people. We're living in pre-singularity times, and this seems to be strong evidence that we're going to face a negative singularity.
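
Jonii's argument is a Doomsday-style anthropic update. A toy version under the self-sampling assumption, where the priors and population counts are purely illustrative assumptions of mine, not anything from the thread:

```python
from fractions import Fraction

# world: (prior, pre-singularity observers, total observers ever)
# The numbers are illustrative: a positive singularity is assumed to
# create vastly more observers than ever lived before it.
worlds = {
    "positive singularity": (Fraction(1, 2), 10**10, 10**10 + 10**20),
    "negative singularity": (Fraction(1, 2), 10**10, 10**10),
}

# Under SSA, P(I find myself pre-singularity | world) = pre / total.
unnorm = {w: prior * Fraction(pre, total)
          for w, (prior, pre, total) in worlds.items()}
z = sum(unnorm.values())
posterior = {w: p / z for w, p in unnorm.items()}
# posterior["negative singularity"] comes out within 1e-10 of 1: the
# observation swamps any reasonable prior, which is Jonii's point.
```

Whether this is the right way to count is exactly what is contested: under the self-indication assumption, the huge future population also boosts the prior weight of the "positive" world and the effect roughly cancels, which is one reason the replies bring in further considerations like the simulation argument.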

Comment author: Kevin 29 March 2010 02:03:41AM *  0 points [-]

The simulation argument muddles the issue from my perspective. There's more to weigh than just the anthropic principle.

Comment author: Jonii 29 March 2010 03:11:26AM 0 points [-]

How?

Comment author: Kevin 28 March 2010 05:34:10AM *  3 points [-]

If any aspiring rationalists would like to try and talk a Stage IV cancer patient into cryonics... good luck and godspeed. http://www.reddit.com/r/IAmA/comments/bj3l9/i_was_diagnosed_with_stage_iv_cancer_and_am/c0n1kin?context=3

Comment author: Kevin 29 March 2010 02:16:56AM 1 point [-]

I tried, it didn't work. Other people can still try! I didn't want to give the hardest possible sell because survival rates for Stage IV breast cancer are actually really good.

Comment author: Nick_Tarleton 25 March 2010 10:23:18PM *  2 points [-]

This is pretty pathetic, at least if honestly reported. (A heavily reported study's claim to show harmful effects from high-fructose corn syrup in rats is based on ambiguous, irrelevant, or statistically insignificant experimental results.)

Comment author: RobinZ 26 March 2010 02:20:18AM 0 points [-]

I'm reading the paper now, and I see in the "Methods" section:

We selected these schedules to allow comparison of intermittent and continuous access, as our previous publications show limited (12 h) access to sucrose precipitates binge-eating behavior (Avena et al., 2006).

which the author of the blog post apparently does not acknowledge. I'll grant that the study may be overblown, but it is not as obviously flawed as I believe the blogger suggested.

Comment author: Strange7 25 March 2010 09:48:18PM 2 points [-]

Nature doesn't grade on a curve, but neither does it punish plagiarism. Is there some point at which someone who's excelled beyond their community would gain more by setting aside the direct pursuit of personal excellence in favor of spreading what they've already learned to one or more apprentices, then resuming the quest from a firmer foundation?

Comment author: Morendil 26 March 2010 07:08:38AM 0 points [-]

Teaching something to others is often a way of consolidating the knowledge, and I would argue that the pursuit of personal excellence usually requires sharing the knowledge at some point, and possibly on an ongoing basis.

See e.g. Lave and Wenger's books on communities of practice and "learning as legitimate peripheral participation".

Comment author: [deleted] 24 March 2010 04:56:31AM 3 points [-]

I really should probably think this out clearer, but I've had an idea a few days now that keeps leaving and coming back. So I'm going to throw the idea out here and if it's too incoherent, I hope either someone gets where I'm going or I come back and see my mistake. At worst, it gets down-voted and I'm risking karma unless I delete it.

Okay, so the other day I was talking with a Christian friend who "agrees with micro-evolution but not macro-evolution." I'm assuming other people have heard this idea before. I started thinking about that comment, about the overarching general view of evolution, and about the main differences between macro- and micro-evolution. How could someone accept that genes change slowly over time, producing organisms slightly different from their predecessors, yet deny that different species could develop this way? My thinking led me to this theory: could someone making this comment be committing the Mind Projection Fallacy? That is, rather than treating "species" as a category we use to sort organisms (words like "fish" and "bear" save us from labeling everything by its exact genes), could they be assuming that species is a part of the world itself, the same way genes are, and thus couldn't be changed?

If anyone thinks this is a possible idea, would you have an idea how to point this out to the commenter? If you don't think this is a good theory, would you explain why?

Comment author: rwallace 31 March 2010 11:20:02AM 1 point [-]

Sounds likely to me. I don't know exactly what wording I'd use, but some food for thought: when Alfred Wallace independently rediscovered evolution, his paper on the topic was titled On the Tendency of Varieties to Depart Indefinitely from the Original Type. You can find the full text at http://www.human-nature.com/darwin/archive/arwallace.html - it's short and clear, and from my perspective offers a good approach to understanding why existing species are not ontologically fundamental.

Comment author: Nisan 29 March 2010 01:34:08AM *  1 point [-]

That's a good idea; it's tempting to believe that a category is less fuzzy in reality than it really is. I would point out recent examples of speciation including the subtle development of the apple maggot, and fruit fly speciation in laboratory experiments. If you want to further mess with their concept of species, tell them about ring species (which are one catastrophe away from splitting into two species).

Comment author: RobinZ 24 March 2010 10:29:54AM 3 points [-]

I'm sure it's a factor, but I suspect "it contradicts my religion" is the larger.

Assuming that's not it: how often do mutations happen, how much time has passed, and how many mutations apart are different species? The first times the second should dwarf the third, at which point it's like that change-one-letter game. Yes, every step must be a valid word, but the 'limit' on how many tries is so wide that it's easy.
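
The "first times the second should dwarf the third" claim can be put into a rough back-of-envelope sketch; every number below is an assumed, purely illustrative figure, not a measured value:

```python
# Back-of-envelope: are there enough mutational "tries"?
# All figures are rough, assumed illustration values.
mutations_per_birth = 100        # assumed new mutations per individual
births_per_generation = 10_000   # assumed breeding population size
generations = 500_000            # assumed generations since a common ancestor

total_tries = mutations_per_birth * births_per_generation * generations
differences_needed = 20_000_000  # assumed mutations separating two species

print(total_tries)                        # 500_000_000_000
print(total_tries // differences_needed)  # tries per needed difference: 25_000
```

Even with these crude numbers, the number of tries exceeds the needed differences by several orders of magnitude, which is the "wide limit" in the change-one-letter analogy.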

Comment author: CronoDAS 23 March 2010 11:54:55PM *  2 points [-]

What's the best way to respond to someone who insists on advancing an argument that appears to be completely insane? For example, someone like David Icke who insists the world is being run by evil lizard people? Or your friend the professor who thinks his latest "breakthrough" is going to make him the next Einstein but, when you ask him what it is, it turns out to be nothing but gibberish, meaningless equations, and surface analogies? (My father, the professor, has a friend, also a professor, who's quickly becoming a crank on the order of the TimeCubeGuy.) Or, say, this particular bit of incoherent political ranting?

Comment author: CannibalSmith 24 March 2010 02:22:45PM 1 point [-]

When rational argument fails, fall back to dark arts. If that fails, fall back to damage control (discredit him in front of others). All that assuming it's worth the trouble.

Comment author: wnoise 24 March 2010 01:33:51AM 5 points [-]

If you have no emotional or other investment, the best thing to do is not engage.

Comment author: CronoDAS 24 March 2010 04:05:30AM 5 points [-]
Comment author: MBlume 22 March 2010 08:56:54PM 1 point [-]

Tricycle has a page up called Hacking on Less Wrong which describes how to get your very own copy of Less Wrong running on your computer. (You can then invite all your housemates to register and then go mad with power when you realize you can ban/edit any of their comments/posts. Hypothetically, I mean. Ahem.)

I've updated it a bit based on my experience getting it to run on my machine. If I've written anything terribly wrong, someone let me know =)

Comment author: Jack 22 March 2010 09:34:50PM 0 points [-]

This would be an interesting classroom tool.

Comment author: Kevin 21 March 2010 09:52:31PM 1 point [-]

Nanotech robots deliver gene therapy through blood

http://www.reuters.com/article/idUSTRE62K1BK20100321

Comment author: ata 20 March 2010 10:17:23PM 18 points [-]

Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can't recall the exact context — one of them said: "…but no mortal man wants to live forever."

I said: "I do!"

He paused a moment and then said: "Hmm. Yeah, so do I."

I think that's the fastest I've ever talked someone out of wise-sounding cached pro-death beliefs.

Comment author: Kevin 21 March 2010 03:42:22AM 1 point [-]

What Would You Do With 48 Cores? (essay contest)

http://blogs.amd.com/work/2010/03/03/48-cores-contest/

Comment author: CannibalSmith 24 March 2010 02:13:25PM 3 points [-]

Finally get to play Crysis.

Write a real time ray tracer.

Comment author: bogus 21 March 2010 04:06:23AM *  4 points [-]

That's actually a very interesting question. You'd want a problem which:

  1. is either embarrassingly parallel or large enough to get a decent speedup,
  2. involves a fair amount of complex branching and logic, such that GPGPU would be unsuitable,
  3. cannot be efficiently solved by "shared nothing", message-passing systems, such as Beowulf clusters and grid computing.

The link also states that the aim should be "to help society, to help others" and to "make the world a better, more interesting place". Here's a start; in fact, many of these problems are fairly relevant to AI.

Comment author: dclayh 20 March 2010 07:35:45PM *  2 points [-]

Cryonics in popular culture:

Comment author: murat 20 March 2010 10:21:10PM 1 point [-]

How do Bayesians look at formal proofs in formal specifications? Do they believe "100%" in them?

Comment author: ata 20 March 2010 10:26:14PM *  4 points [-]

You can believe that it leads to a 100%-always-true-in-every-possible-universe conclusion, but the strength of your belief should not be 100% itself. The difference is crucial. Good posts on this subject are How To Convince Me That 2 + 2 = 3 and Infinite Certainty. (The followup, 0 And 1 Are Not Probabilities, is a worthwhile explanation of the mathematical reasons that this is the case.)
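
One way to make this concrete: even if each step of a formal proof has been checked with very high reliability, confidence in the conjunction of all steps stays strictly below 1. The per-step reliability and proof length below are assumed, purely illustrative numbers:

```python
# Confidence in a whole proof, given per-step checking reliability.
# Both numbers are assumed for illustration only.
p_step_correct = 0.9999   # assumed chance a single step was checked correctly
steps = 1000              # assumed number of steps in the proof

p_whole_proof = p_step_correct ** steps
print(round(p_whole_proof, 2))   # about 0.9, not 1.0
```

Crucially, no per-step reliability short of exactly 1 gets the product to 1, which is the point of the Infinite Certainty post.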

Comment author: murat 21 March 2010 11:05:18AM 0 points [-]

Thank you for the links. It makes sense now.

Comment author: JulianMorrison 20 March 2010 01:31:07AM 5 points [-]

Just a thought about the Litany of Tarski - be very careful to recognize that the "not" is a logical negation. If the box contains not-a-diamond your assumption will likely be that it's empty. The frog that jumps out when you open it will surprise you!

The mind falls easily into oppositional pairs of X and opposite-of-X (which isn't the same as the more comprehensive not-X), and once you create categorizations, you'll have a tendency to under-consider outcomes that don't fit either category.

Comment author: Cyan 20 March 2010 01:42:13AM 4 points [-]

Daniel Dennett and Linda LaScola have written a paper about five non-believing members of the Christian clergy. Teaser quote from one of the participants:

I think my way of being a Christian has many things in common with atheists as [Sam] Harris sees them. I am not willing to abandon the symbol ‘God’ in my understanding of the human and the universe. But my definition of God is very different from mainline Christian traditions yet it is within them. Just at the far left end of the bell shaped curve.

Comment author: NancyLebovitz 20 March 2010 02:26:41AM 3 points [-]

Here's a way to short-circuit a particular sort of head-banging argument.

Statements may seem simple, but they actually contain a bunch of presuppositions. One way an argument can go wrong is A says something, B disagrees, A is mistaken about exactly what B is disagreeing with, and neither of them can figure out why the other is so pig-headed about something obvious.

I suggest that if there are several rounds of A and B saying the same things at each other, it's time for at least one of them to pull back and work on pinning down exactly what they're disagreeing about.

Comment author: Rain 19 March 2010 03:18:37PM *  19 points [-]

What is the appropriate method to tap out when you don't want to be thrown to the rationality mat any more?

What's the best way for me to stop a thread when I no longer wish to participate, as my emotions are turning sour, and I recognize I will begin saying bad things?

Comment author: Morendil 19 March 2010 05:16:53PM 9 points [-]

May I suggest "I'm tapping out", perhaps with a link to this very comment? It's a good line (and perhaps one way the dojo metaphor is valuable).

I think in this comment you did fine. Don't sweat it if the comment that signals "I'm stopping here" gets downvoted; don't try to avoid that.

In this comment I think you are crossing the "mind reading" line, where you ascribe intent to someone else. Stop before posting those.

Comment author: CannibalSmith 19 March 2010 04:34:08PM 4 points [-]

gg

Comment author: kodos96 20 March 2010 05:52:52AM 1 point [-]

Just curious: who downvoted this, and why? I found it amusing, and actually a pretty decent suggestion. It bothers me that there seems to be an anti-humor bias here... it's been stated that this is justified in order to keep LW from devolving into a social, rather than intellectual forum, and I guess I can understand that... but I don't understand why a comment which is actually germane to the parent's question, but just happens to also be mildly amusing, should warrant a downvote.

Comment author: ata 20 March 2010 07:01:27AM *  3 points [-]

Did the comment say something other than "gg" before? I'm not among those who downvoted it, but I don't know what it means. (I'd love to know why it's "amusing, and actually a pretty decent suggestion".)

Comment author: Matt_Simpson 20 March 2010 08:05:27AM 4 points [-]

"good game"

It's sort of like an e-handshake for online gaming to acknowledge that you have lost the game - at least in the online mtg community.

Comment author: kpreid 20 March 2010 11:07:02AM 1 point [-]

In my experience (most of which is a few years old) it is said afterward, but has its literal meaning, i.e. that you enjoyed the game, not necessarily that you lost it.

Comment author: Sniffnoy 20 March 2010 11:54:12PM 0 points [-]

I think this depends on whether the game is one that's usually played to the end or one where one of the players usually concedes. If it's the latter, "gg" is probably a concession.

Comment author: SoullessAutomaton 20 March 2010 11:01:29PM 0 points [-]

A nontrivial variant is also directed sarcastically at someone who lost badly (this seems to be most common where the ambient rudeness is high, e.g., battle.net).

Comment author: kodos96 20 March 2010 09:12:25AM 0 points [-]

Hmmm... I guess I was engaging in mind projection fallacy in assuming everyone got the reference, and the downvote was for disapproving of it, rather than just not getting it.

Comment author: Nick_Tarleton 22 March 2010 03:41:36PM *  0 points [-]

Maybe someone thought it rude to make a humorous reply to a serious and apparently emotionally loaded question.

Comment author: prase 22 March 2010 03:32:10PM 0 points [-]

I have downvoted it. I had no idea what it meant (before reading the comments). Quick googling doesn't reveal that much.

Comment author: Morendil 20 March 2010 09:34:22AM 0 points [-]

I thought it was too short and obscure. (On KGS we say that at the start of a game. The normal end-of-game ritual is "thx". Or sometimes, to be honest, storming off without a word after a loss.)

Comment author: CannibalSmith 20 March 2010 09:43:06AM 0 points [-]

Explaining it would ruin the funnies. Also, Google. Also, inevitably, somebody else did the job for me.

Comment author: Kevin 19 March 2010 04:09:19PM *  4 points [-]

I've twice intentionally taken ~48 hours away from this site after I said something stupid. Give it a try.

Just leave the conversations hanging; come back days or weeks later if you want. Also, admitting you were wrong goes a long way if you realize you said something that was indeed incorrect, but the rationality police won't come after you if you leave a bad conversation unresolved.

Comment author: mattnewport 19 March 2010 06:43:06PM *  10 points [-]

Interesting article on an Indian rationalist (not quite in the same vein as lesswrong style rationalism but a worthy cause nonetheless). Impressive display of 'putting your money where your mouth is':

Sceptic challenges guru to kill him live on TV

When a famous tantric guru boasted on television that he could kill another man using only his mystical powers, most viewers either gasped in awe or merely nodded unquestioningly. Sanal Edamaruku’s response was different. “Go on then — kill me,” he said.

I also rather liked this response:

When the guru’s initial efforts failed, he accused Mr Edamaruku of praying to gods to protect him. “No, I’m an atheist,” came the response.

H/T Hacker News.

Comment author: Jack 21 March 2010 09:19:10AM *  1 point [-]

As cool as this was, there is reason to doubt its authenticity. There doesn't seem to be any internet record of Pandit Surender Sharma, "India's most powerful Tantrik", except for this TV event. Moreover, about a minute in, it looks like the tantrik is starting to laugh. Maybe someone who knows the country can tell us if this Pandit Sharma fellow is really a major figure there.

I mean, what possible incentive would the guy have for going on TV to be humiliated?

Comment author: prase 22 March 2010 03:16:03PM 2 points [-]

what possible incentive would the guy have for going on TV to be humiliated?

Perhaps he really believed he could kill the skeptic.

Comment author: FAWS 19 March 2010 06:52:04PM 2 points [-]

Note: Most of the article is not about the TV confrontation so it's well worth reading even if you already heard about that in 2008.

Comment author: pjeby 20 March 2010 12:22:38AM *  3 points [-]

Don't know if this will help with cryonics or not, but it's interesting:

Induced suspended animation and reanimation in mammals (TED Talk by Mark Roth)

[Edited to fix broken link]

Comment author: Vive-ut-Vivas 20 March 2010 01:31:27AM 2 points [-]
Comment author: cousin_it 22 March 2010 02:17:50PM *  4 points [-]

I still fail to see how Bayesian methods eliminate fluke results or publication bias.

Comment author: CannibalSmith 19 March 2010 02:22:56PM 10 points [-]

Are you guys still not tired of trying to shoehorn a reddit into a forum?

Comment author: RobinZ 19 March 2010 02:28:59PM 2 points [-]

I don't understand the question. What are we doing that you describe this way, and why do you expect us to be tired of it?

Comment author: jimmy 19 March 2010 11:40:47PM 2 points [-]

There are a lot of open thread posts, which would be better dealt with on a forum rather than on an open thread with a reddit like system.

Comment author: RobinZ 20 March 2010 12:32:16AM 1 point [-]

You're right, but this isn't supposed to be a forum - I think it's better to make off-topic conversations less convenient. The system seems to work adequately right now.

Comment author: nhamann 20 March 2010 04:18:01AM *  3 points [-]

I suppose you're right in saying that LW isn't supposed to be a forum, but the fact remains that there is a growing trend towards more casual/off-topic/non-rationalism discussion, which seems perfectly fine to me given that we are a community of generally like-minded people. I suspect that it would be preferable to many if LW had better accommodations for these sort of interactions, perhaps something separate from main site so we could cleanly distinguish serious rationalism discussion from off-topic discussion.

Comment author: Matt_Duing 22 March 2010 11:09:07PM 0 points [-]

Perhaps a monthly or quarterly lounge thread could serve this function, provided it does not become too much of a distraction.

Comment author: CronoDAS 19 March 2010 10:20:12PM 0 points [-]

Me neither.

Comment author: Kevin 19 March 2010 03:06:53PM 1 point [-]

I'm tired of it, I'd like to get a real subreddit enabled here as soon as possible.

Comment author: Jack 22 March 2010 11:27:44PM 0 points [-]

We could just start a forum and stop complaining about it.

Comment author: hegemonicon 19 March 2010 01:44:56PM *  4 points [-]

Survey question:

If someone asks you how to spell a certain word, does the word appear in your head as you're spelling it out for them, or does it seem to come out of your mouth automatically?

If it comes out automatically, would you describe yourself as being adept at language (always finding the right word to describe something, articulating your thoughts easily, etc.) or is it something you struggle with?

I tend to have trouble with words - it can take me a long time (minutes) to recall the proper word to describe something, and when speaking I frequently have to start a sentence 3 or 4 times to get it to come out right. (I also struggled for a while to replace the word 'automatic' in the above paragraphs with a more accurate description. I was unsuccessful.) Words also don't appear in my head when I'm spelling them aloud, which suggests to me that I might be missing some pathways that connect my language centers to my conscious functions.

Comment author: wedrifid 20 March 2010 12:39:44AM 2 points [-]

If someone asks you how to spell a certain word, does the word appear in your head as you're spelling it out for them, or does it seem to come out of your mouth automatically?

I never see words. I feel them.

If it comes out automatically, would you describe yourself as being adept at language (always finding the right word to describe something, articulating your thoughts easily, etc.) or is it something you struggle with?

Great with syntax. Access to specific words tends to degrade as I get fatigued or stressed. That is, I can 'feel' the word there and know the nuances of the meaning it represents, but cannot bring the actual sounds or letters to mind.

Comment author: prase 19 March 2010 02:28:40PM *  2 points [-]

I often have trouble finding the proper word, both in English and in my native language, but I have no problem with spelling; I can say it automatically. This may be because I learned English by reading, so the words are stored in my memory in their written form, but in general I suspect, from personal experience, that the ability to recall spelling and the ability to find the proper word are unrelated.

Comment author: jimrandomh 19 March 2010 09:23:36PM *  0 points [-]

I can visualize sentences, paragraphs, or formatted code, but can't zoom in as far as individual words; when I try, I get a verbal representation instead. I usually can't read over misspelled words (or wrong words, like its vs. it's) without stopping. When this happens, it feels like hearing someone mispronounce a word.

When spelling a word aloud, it comes out pretty much automatically (verbal memory) with no perceptible intermediate steps. I would describe myself as adept with language.

Comment author: MendelSchmiedekamp 19 March 2010 08:08:36PM 0 points [-]

In retrospect, spelling words out loud, something I do tend to do with a moderate frequency, is something I've gotten much better at over the past ten years. I suspect that I've hijacked my typing skill to the task, as I tend to error correct my verbal spelling in exactly the same way. I devote little or no conscious thought or sense mode to the spelling process, except in terms of feedback.

As for my language skills, they are at least adequate. However, I have devoted special attention to improving them so I can't say that I don't share some bias away from being especially capable.

Comment author: Kevin 19 March 2010 06:09:20PM *  0 points [-]

I'm adept at language and I never visualize letters or words in my head. I think in pronounced/internally spoken words, so when I spell something aloud I think the letters to myself as I am saying them.

Comment author: FAWS 19 March 2010 06:06:14PM *  0 points [-]

This is turning interesting:

Sensory type of access to spelling information by poster:
hegemonicon: verbal (?) ( visual only with great difficulty)
Hook: mechanical
FAWS: mechanical, visual
prase: verbal (???)
NancyLebovitz: visual
Morendil: visual
mattnewport: visual, mechanical (?)
Rain: mechanical (???)
Kevin: verbal (???) (never visual)

Is there anyone who doesn't fall into at least one of those three categories?

Comment author: Rain 19 March 2010 05:47:41PM *  0 points [-]

When I spell out a word, I don't visualize anything. Using words in conversation, typing, or writing is also innate - they flow through without touching my consciousness. This is another aspect of my maxim, "my subconscious is way smarter than I am." It responds quickly and accurately, at any rate.

I consider myself to be adept at the English language, and more objective evidence bears that out. I scored 36/36 on the English portion of the ACT, managed to accumulate enough extra credit through correcting my professor in my college level writing class that I didn't need to take the final, and many people have told me that I write very well in many different contexts (collaborative fiction, business reports, online forums, etc.).

I would go so far as to say that if I make an effort to improve on my communication by the use of conscious thought, I do worse than when I "feel it out."

Comment author: mattnewport 19 March 2010 05:10:48PM 0 points [-]

I have pretty good language skills, and I think I am above average both at spelling in my own writing and at spotting spelling mistakes when reading. However, I do not find it particularly easy to spell a difficult word out loud; it is a relatively effortful process, unlike reading or writing, where spelling is largely automatic and effortless. With longer words, short-term memory limitations make it difficult to spell the word out. For a difficult word I try to visualize the text and 'read off' the spelling, but that can be taxing for longer words, and I may end up having to write the word down in order to be sure the spelling is correct and to be able to read it out.

Growing up in England I was largely unaware of the concept of a spelling bee so this is not a skill I ever practiced to any great extent.

Comment author: Morendil 19 March 2010 04:42:43PM *  0 points [-]

My experience of spelling words is quite visual (in contrast to my normal thinking style, which suggests that if "thinking styles" exist, they are not monolithic); I literally have a visual representation of the word floating in my head. (I can tell it really is visual because I can give details, such as the kind of font (serif) and the color (black) of the words, as they'd appear in a book.)

I'd also describe my spelling skill as "automatic", i.e. I can usually instantly spot whether a word is "right" or "not right", I absolutely cannot stand misspellings (including mine, I have the hardest time when writing fast because I must instantly go back and correct any typos, rather than let them be), and they tend to leap out of the page; most people appear to have an ability to ignore typos that I lack. (For instance, I often get a kick out of spotting typos on the freakin' front page of national magazines, and when I point them out I mostly get blank stares or "Oh, you're right" - people just don't notice!)

I'd self-describe as adept at language.

(ETA: upvoted for a luminous question.)

Comment author: hegemonicon 19 March 2010 05:25:52PM 1 point [-]

After a bit of self-experimentation, I've concluded that I almost (but not quite) completely lack any visual experience accompanying anything verbal. Even when I self-prompt, telling myself to spell a word, nothing really appears by default (though I can make an image of the word appear with a bit of focus, it's very difficult to try to 'read' off of it).

I wonder how typical (or atypical) this is.

Comment author: h-H 20 March 2010 08:32:59PM 0 points [-]

quite typical.

Comment author: NancyLebovitz 19 March 2010 05:43:18PM 0 points [-]

Do you get any visual images when you read?

Comment author: Liron 19 March 2010 08:40:59AM 9 points [-]

Startup idea:

We've all been waiting for the next big thing to come after Chatroulette, right? I think live video is going to be huge -- it's a whole new social platform.

So the idea is: Instant Audience. Pay $1, get a live video audience of 10 people for 1 minute. The value prop is attention.

The site probably consists of a big live video feed of the performer, and then 10 little video feeds for the audience. The audience members can't speak unless they're called on by the performer, and they can be "brought up on stage" as well.

For the performer, it's a chance to practice your speech / stand-up comedy routine / song, talk about yourself, ask people questions, lead a discussion, or limitless other possibilities (ok we are probably gonna have to deal with some suicides and jackers off).

For the audience, it's a free live YouTube. It's like going to the theater instead of watching TV, but you can still channel surf. It's a new kind of live entertainment with great audience participation.

Better yet, you can create value by holding some audience members to higher standards of behavior. There can be a reputation system, and maybe you can attend free performances to build up your Karma (by giving useful feedback without whipping it out), and then companies pay more for focus groups with high-Karma users.

Apparently businesses shell out tons for focus groups; we're talking free iPod touch per person per couple hours.

I think the biggest implementation challenge is gonna be constantly having to test live video with lots of simultaneous users. But it might be worth it if you want to ride the live video web wave. There are companies who would love to run video focus groups for at least $1 per person-minute.

Comment author: mattnewport 19 March 2010 03:17:23PM 2 points [-]

I don't think you should charge a fixed rate per person. An auction or market would be a better way to set pricing, something like Amazon's Mechanical Turk or the Google adwords auctions.

Comment author: Kevin 19 March 2010 08:46:49AM *  2 points [-]

I give it a solid "that could work" but the business operations are non-trivial. You probably would need someone with serious B2B sales experience, ideally already connected with the NYC-area focus group/marketing community.

Comment author: JamesAndrix 19 March 2010 02:28:39PM 2 points [-]

I have a line of thinking that makes me less worried about unfriendly AI. The smarter an AI gets, the more it is able to follow its utility function. Where the utility function is simple or the AI is stupid, we have useful things like game opponents.

But as we give smarter AIs interesting 'real world' problems, the difference between what we asked for and what we want shows up more explicitly. Developers usually interpret this as the AI being stupid or broken, and patch over either the utility function or the reasoning it led to. These patches don't lead to extra intelligence, because that would just make the AI appear more broken.

If it is the case that there is a big gap between AIs that are smart enough for their non-human utility functions to be annoyingly evident and AIs that are smart enough to improve themselves, then non-friendly AGI research will hit an apparent wall where all avenues are unproductive. (If this line of thought is correct, it hit that wall a long time ago.)

This is not a guarantee. There is always a chance someone will hit on a self-improving system.

Comment author: Vladimir_Nesov 19 March 2010 05:02:40PM *  2 points [-]

Where the utility function is simple or the AI is stupid, we have useful things like game opponents.

Rather, where the utility function is simple AND the program is stupid. Paperclippers are not useful things.

If it is the case that there is a big gap between AI's that are smart enough for their non-human utility functions to be annoyingly evident and AI's that are smart enough to improve themselves, then non-friendly AGI research will hit an apparent wall where all avenues are unproductive.

Reinforcement-based utility definition plus difficult games with well-defined winning conditions seems to constitute a counterexample to this principle (a way of doing AI that won't hit the wall you described). This could function even on top of supplemental ad-hoc utility function building, as in chess, where a partially hand-crafted utility function over specific board positions is an important component of chess-playing programs -- you'd just need to push the effort to a "meta-AI" that is only interested in the real winning conditions.

Comment author: JamesAndrix 19 March 2010 07:41:55PM 1 point [-]

Rather, where the utility function is simple AND the program is stupid. Paperclippers are not useful things.

I was thinking of current top chess programs as smart(well above average humans), with simple utility functions.

Reinforcement-based utility definition plus difficult games with well-defined winning conditions seems to constitute a counterexample to this principle (a way of doing AI that won't hit the wall you described).

This is a good example, but it might not completely explain it away.

Can we, by hand or by algorithm, construct a utility function that does what we want, even when we know exactly what we want?

I think you could still have a situation in which a smarter agent does worse because its learned utility function does not match the winning conditions (its learned utility function would constitute a created subgoal of "maximize reward").

Learning about the world and constructing subgoals would probably be part of any near-human AI. I don't think we have a way to construct reliable subgoals, even with a rules-defined supergoal and perfect knowledge of the world. (such a process would be a huge boon for FAI)

Likewise, I don't think we can be certain that the utility functions we create by hand would reliably lead a high-intelligence AI to seek the goal we want, even for well-defined tasks.

A smarter agent might have the advantage of learning the winning conditions faster, but if it is comparatively better at implementing a flawed utility function than at fixing that utility function, then it could be outpaced by stupider versions, and you're working more in an evolutionary design space.

So I think it would hit the same kind of wall, at least in some games.
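
The claim that a smarter agent can do worse under a flawed learned utility function can be shown in a toy sketch; everything here (the utility shapes, the "effort" knob) is an assumed illustration, not a model of any real AI design:

```python
# Toy sketch: optimizing a mislearned proxy harder hurts the true objective.

def true_utility(x):
    return -(x - 10) ** 2      # the real winning condition: get x near 10

def learned_utility(x):
    return -(x - 14) ** 2      # a mislearned proxy, off by 4

def optimize(utility, candidates, effort):
    # "Smarter" agents search more candidates before committing.
    return max(candidates[:effort], key=utility)

candidates = list(range(21))
weak = optimize(learned_utility, candidates, effort=12)    # best it finds: x = 11
strong = optimize(learned_utility, candidates, effort=21)  # finds x = 14 exactly

print(true_utility(weak), true_utility(strong))  # -1 -16: smarter scores worse
```

The stronger optimizer implements the flawed proxy more faithfully and so lands further from the real goal, which is the "outpaced by stupider versions" dynamic described above.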

Comment author: Vladimir_Nesov 19 March 2010 08:29:47PM 0 points [-]

I meant the AI to be limited to the formal game universe, which should be easily feasible for non-superintelligent AIs. In this case, smarter agents always have an advantage, maximization of reward is the same as the intended goal.

A smarter agent might have the advantage of learning the winning conditions faster, but if it is comparatively better at implementing a flawed utility function than at fixing that utility function, then it could be outpaced by stupider versions, and you're working more in an evolutionary design space.

Thinking deeply until you get eaten by a sabertooth is not smart.

Comment author: JamesAndrix 19 March 2010 11:43:13PM 0 points [-]

Answer is here, thinking out loud is below

If you give the AI a perfect utility function for a game, it still has to break down subgoals and seek those. You don't have a good general theory for making sure your generated subgoals actually serve your supergoals, but you've tweaked things enough that it's actually very good at achieving the 'midlevel' things.

When you give it something more complex, it improperly breaks down the goal into faulty subgoals that are ineffective or contradictory, and then effectively carries out each of them. This yields a mess.

At this point you could improve some of the low level goal-achievement and do much better at a range of low level tasks, but this wouldn't buy you much in the complex tasks, and might just send you further off track.

If you understand that the complex subgoals are faulty, you might be able to re-patch it, but this might not help you solve different problems of similar complexity, let alone more complex problems.

What led me to this answer:

Thinking deeply until you get eaten by a sabertooth is not smart.

There may not be a trade-off at play here. For example: At each turn you give the AI indefinite time and memory to learn all it can from the information it has so far, and to plan. (Limited by your patience and budget, but let's handwave that computational resources are cheap, and every turn the AI comes in well below its resource limit.)

You have a fairly good move optimizer that can achieve a wide range of in game goals, and a reward modeler that tries to learn what it is supposed to do and updates the utility function.

I meant the AI to be limited to the formal game universe, which should be easily feasible for non-superintelligent AIs. In this case, smarter agents always have an advantage, since maximization of reward is the same as the intended goal.

But how do they know how to maximize reward? I was assuming they have to learn the reward criteria. If they have a flawed concept of those criteria, they will seek non-reward.

If the utility function is one and the same as winning, then the (see Top)

Comment author: Vladimir_Nesov 20 March 2010 10:10:55AM 0 points [-]

End-of-conversation status:

I don't see a clear argument, and failing that, I can't take confidence in a clear lawful conclusion (AGI hits a wall). I don't think this line of inquiry is worthwhile.

Comment author: Psy-Kosh 19 March 2010 05:01:07AM 4 points [-]

Might as well move/call attention to the discussion of the macroscopic quantum superposition right away, so that we can talk about it here.

Comment author: bogdanb 19 March 2010 06:43:25AM *  2 points [-]

I was wondering: Would something like this be expected to have any kind of visible effect?

(Their object is at the limit of bare-eye visibility in favorable lighting,* but suppose that they can expand their results by a couple orders of magnitude.)

From “first principles” I’d expect that the light needed to actually look at the thing would collapse the superposition (in the sense of first entangling the viewer with the object, so as to perceive a single version of it in every branch, and then with the rest of the universe, so each world-branch would contain just a “classical” observation).

But then again one can see interference patterns with diffracted laser light, and I’m confused about the distinction.

[eta:] For example, would coherent light excite the object enough to break the superposition, or can it be used to exhibit, say, different diffraction patterns when diffracted on different superpositions of the object?

[eta2:] Another example: if the object’s wave-function has zero amplitude over a large enough volume, you should be able to shine light through that volume just as through empty space (or even send another barely-macroscopic object through). I can’t think of any configuration where this distinguishes the superposition from the object simply being (classically) somewhere else, though; can anyone?

(IIRC, their resonator’s size was cited as “about a billion atoms”, which turns out as a cube with .02µm sides for silicon; when bright light is shined at a happy angle, depending on the background, and especially if the thing is not cubical, you might just barely see it as a tiny speck. With an optical microscope (not bare-eyes, but still more intuitive than a computer screen) you might even make out its approximate shape. I used to play with an atomic-force microscope in college: the cantilever was about 50µm, and I could see it with ease; I don’t remember ever having seen the tip itself, which was about the scale we’re talking about, but it might have been just barely possible with better viewing conditions.)

Comment author: Mitchell_Porter 20 March 2010 09:35:26AM 1 point [-]

Luboš Motl writes: "it's hard to look at it while keeping the temperature at 20 nanokelvin - light is pretty warm."

My quick impression of how this works:

You have a circuit with electrons flowing in it (picture). At one end of the circuit is a loop (Josephson junction) which sensitizes the electron wavefunctions to the presence of magnetic field lines passing through the loop. So they can be induced into superpositions - but they're just electrons. At the other end of the circuit, there's a place where the wire has a dangly hairpin-shaped bend in it. This is the resonator; it expands in response to voltage.

So we have a circuit in which a flux detector and a mechanical resonator are coupled. The events in the circuit are modulated at both ends - by passing flux through the detector and by beaming microwaves at the resonator. But the quantum measurements are taken only at the flux detector site. The resonator's behavior is inferred indirectly, by its effects on the quantum states in the flux detector to which it is coupled.

The quantum states of the resonator are quantized oscillations (phonons). A classical oscillation consists of something moving back and forth between two extremes. In a quantum oscillation, you have a number of wave packets (peaks in the wavefunction) strung out between the two extremal positions; the higher the energy of the oscillation, the greater the number of peaks. Theoretically, such states are superpositions of every classical position between the two extremes. This discussion suggests how the appearance of classical oscillation emerges from the distribution of peaks.
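For reference, the standard textbook picture behind this description: a resonator mode of angular frequency ω has the quantized energies of a harmonic oscillator,

```latex
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots,
\qquad
\tfrac{1}{2}\, m\,\omega^2 x_n^2 = E_n
\;\Rightarrow\;
x_n = \sqrt{\frac{(2n+1)\,\hbar}{m\,\omega}},
```

where ±x_n are the classical turning points and the probability density |ψ_n(x)|² of the n-th stationary state has n+1 peaks strung out between them, matching the statement that higher-energy oscillations have more peaks.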

So you should imagine that the little hairpin-bend part of the circuit is getting into superpositions like that, in which the elements of the superposition differ by the elongation of the hairpin; and then this is all coupled to electrons in the loop at the other end of the circuit.

I think this is all quite relevant for quantum biology (e.g. proteins in superposition), where you might expect to see a coupling between current (movement of electrons) and conformation (mechanical vibration).

Comment author: Nick_Tarleton 19 March 2010 04:36:55PM 1 point [-]

IIRC, their resonator’s size was cited as “about a billion atoms”, which turns out as a cube with .02µm sides for silicon

Every source I've seen (e.g.) gives the resonator as flat, some tens of µm long, and containing ~a trillion atoms.

Comment author: Kevin 19 March 2010 08:26:35AM 2 points [-]

First Clay Millennium Prize goes to Grigoriy Perelman

http://news.ycombinator.com/item?id=1202591

Comment author: NancyLebovitz 19 March 2010 09:14:39AM 1 point [-]

Is independent AI research likely to continue to be legal?

At this point, very few people take the risks seriously, but that may not continue forever.

This doesn't mean that it would be a good idea for the government to decide who may do AI research and with what precautions, just that it's a possibility.

If there's a plausible risk, is there anything specific SIAI and/or LessWrongers should be doing now, or is building general capacity by working to increase ability to argue and to live well (both the anti-akrasia work and luminosity) the best path?

Comment author: khafra 19 March 2010 01:34:26PM 4 points [-]

Outlawing AI research was successful in Dune, but unsuccessful in Mass Effect. But I've never seen AI research fictionally outlawed until it's done actual harm, and I see no reason to expect a different outcome in reality. It seems a very unlikely candidate for the type of moral panic that tends to get unusual things outlawed.

Comment author: SilasBarta 19 March 2010 02:10:56PM 4 points [-]

Fictional evidence should be avoided. Also, this subject seems very ripe for a moral panic, i.e., "these guys are making Terminator".

Comment author: h-H 20 March 2010 08:41:38PM 1 point [-]

How would it be stopped if it were illegal? Unless information tech suddenly goes away, it's impossible.

Comment author: khafra 21 March 2010 10:01:44PM 2 points [-]

NancyLebovitz wasn't suggesting that the risks of UFAI would be averted by legislation; rather, that such legislation would change the research landscape, and make it harder for SIAI to continue to do what it does--preparation would be warranted if such legislation were likely. I don't think it's likely enough to be worth dedicating thought and action to, especially thought and action which would otherwise go toward SIAI's primary goals.

Comment author: NancyLebovitz 21 March 2010 11:12:26PM 2 points [-]

Bingo. That's exactly what I was concerned about.

You're probably right that there's no practical thing to be done now. I'm sure you'll know very quickly if restrictions on independent AI research are being considered.

The more I think about it, the more I think a specialized self-optimizing AI (or several such, competing with each other) could do real damage to the financial markets, but I don't know if there are precautions for that one.

Comment author: NancyLebovitz 19 March 2010 02:30:49PM 3 points [-]

I've been thinking about that, and I believe you're right that laws typically don't get passed against hypothetical harms, and also that AI research isn't the kind of thing that's enough fun to think about to set off a moral panic.

However, I'm not sure whether real harm that society can recover from is a possibility.

I'm basing the possibility on two premises: that a lot of people thinking about AI aren't as concerned about the risks as SIAI is, and that computer programs frequently get developed to the point where they work somewhat.

Suppose that a self-improving AI breaks the financial markets-- there might just be efforts to protect the markets, or AI itself might become an issue.

Comment author: cousin_it 22 March 2010 02:35:18PM *  1 point [-]

laws typically don't get passed against hypothetical harms

Witchcraft? Labeling of GM food?

Comment author: NancyLebovitz 22 March 2010 03:04:01PM 2 points [-]

Those are legitimate examples. I think overreaction to rare events (like the difficulties added to travel and the damage to the rights of suspects after 9/11) is more common, but I can't prove it.

Comment author: RobinZ 22 March 2010 02:49:32PM *  0 points [-]

Some kinds of GM food cause different allergic reactions than their ancestral cultivars. I think you can justifiably care to a similar extent as you care about the difference between a Gala apple and a Golden Delicious apple.

Edit: Granted, most of the reaction is very much overblown.

Comment author: wedrifid 31 March 2010 05:22:55AM 0 points [-]

Random observation: type in the first few letters of 'epistemic' and google goes straight to suggesting 'epistemological anarchism'. It seems google is right on board with helping SMBC further philosophical education.

Comment author: Kevin 30 March 2010 10:31:15AM *  0 points [-]
Comment author: alexflint 29 March 2010 10:34:27PM 0 points [-]

Does anyone know which arguments have been made about ETA of strong AI on the scale of "is it more likely to be 30, 100, or 300 years?"

Comment author: Kevin 29 March 2010 12:00:19AM 0 points [-]
Comment author: Kevin 28 March 2010 11:15:25AM 0 points [-]

Michael Arrington: "It’s time for a centralized, well organized place for anonymous mass defamation on the Internet. Scary? Yes. But it’s coming nonetheless."

http://techcrunch.com/2010/03/28/reputation-is-dead-its-time-to-overlook-our-indiscretions/

Comment author: Jack 28 March 2010 12:24:12PM 0 points [-]

Meh. I think Arrington and this company are overestimating the market. JuicyCampus went out of business for a reason and they had the advantage of actually targeting existing social scenes instead of isolated individuals. Here is how my campus's juicy campus page looked over time (apologies for crudeness):

omg! we have a juicycampus page! Who is the hottest girl? (Some names) Who is the hottest guy? (Some names) Alex Soandso is a slut! Shut up Alex is awesome and you have no friends! Who are the biggest players? (Some names) Who had sex with a professor? (No names) Who is the hottest professor? (A few names) Who has the biggest penis?

... This lasted for about a week. The remaining six months until JuicyCampus shut down consisted of parodies about how awesome Steve Holt is and awkward threads obviously contrived by employees of Juicy Campus trying to get students to keep talking.

Because these things are uncensored, the signal-to-noise ratio is just impossible to deal with. Plus, for this to be effective you would have to be routinely looking up everyone you know. I guess you could have accounts that tracked everyone you knew... but are you really going to show up on a regular basis just to check? It does look like some of these gossip sites have been successful with high schools, but those are far more insular and far more gossip-centered places than the rest of the world.

Comment author: Kevin 29 March 2010 02:14:34AM *  0 points [-]

I'll be very surprised if this particular company is a success, but I don't think it's an impossible problem and I think there is probably some sort of a business execution/insight that could make such a company a very successful startup.

The successful versions of companies in this space will look a lot more like reputational economies and alternative currencies than marketplaces for anonymous libel like JuicyCampus.

Comment author: [deleted] 27 March 2010 06:40:22AM 0 points [-]

So, while in the shower, an idea for an FAI came into my head.

My intuition tells me that if we manage to entirely formalize correct reasoning, the result will have a sort of adversarial quality: you can "prove" statements, but these proofs can be overturned by stronger disproofs. So, I figured that if you simply told two (or more) AGIs to fight over one database of information, the most rational AGI would be able to set the database to contain the correct information. (Another intuition of mine tells me that FAI is a problem of rationality: once you have a rational AGI, you can just feed it CEV or whatever.)

Of course, for this to work, two things would have to happen: one of the AGIs would have to be intelligent enough to discover the rational conclusions, and no AGI could be so much smarter than the others that it could find tons of evidence in favor of its pet truths and have the database favor them despite that they're false.

So, I don't think this will work very well. At least I came to it by despairing about how not everybody has an infinite amount of money and yet values it anyway, thereby making our economic system perfect!

Comment author: ata 27 March 2010 07:17:46AM *  1 point [-]

My intuition tells me that if we manage to entirely formalize correct reasoning, the result will have a sort of adversarial quality: you can "prove" statements, but these proofs can be overturned by stronger disproofs.

That... doesn't sound right at all. It does sound like how people intuitively think about proof/reasoning (even people smart enough to be thinking about things like, say, set theory, trying to overturn Cantor's diagonal argument with a counterexample without actually discovering a flaw in the theorem), and how we think about debates (the guy on the left half of the screen says something, the guy on the right says the opposite, and they go back and forth taking turns making Valid Points until the CNN anchor says "We'll have to leave it there" and the viewers are left agreeing with (1) whoever agreed with their existing beliefs, or, if neither, (2) whoever spoke last). But even if our current formal understanding of reasoning is incomplete, we know it's not going to resemble that. Yes, Bayesian updating will cause your probability estimates to fluctuate up and down a bit as you acquire more evidence, but the pieces of evidence aren't fighting each other, they're collaborating on determining what your map should look like and how confident you should be.
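The contrast drawn above can be made concrete with a toy Bayesian update (hypothetical numbers throughout): in odds form, each piece of evidence multiplies into the posterior, so evidence for and against combines rather than taking turns "overturning" a proof.

```python
def update_odds(prior_odds, likelihood_ratios):
    """Bayes' rule in odds form: posterior odds = prior odds times the
    product of the likelihood ratios of each piece of evidence.
    Evidence 'for' (ratio > 1) and 'against' (ratio < 1) combine
    multiplicatively -- they don't defeat one another in sequence."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Hypothetical numbers: prior odds 1:4 for hypothesis H, then three
# observations with likelihood ratios 3, 0.5, and 4.
posterior = update_odds(0.25, [3, 0.5, 4])  # 0.25 * 3 * 0.5 * 4 = 1.5
prob = posterior / (1 + posterior)          # odds 3:2, i.e. p = 0.6
```

The order of the observations doesn't matter, and no single observation "wins"; the map just ends up wherever the combined evidence puts it.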

Of course, for this to work . . . no AGI could be so much smarter than the others that it could find tons of evidence in favor of its pet truths and have the database favor them despite that they're false.

Why would we build AGI to have "pet truths", to engage in rationalization rather than rationality, in the first place?

Comment author: [deleted] 28 March 2010 01:29:30AM 1 point [-]

But even if our current formal understanding of reasoning is incomplete, we know it's not going to resemble that. Yes, Bayesian updating will cause your probability estimates to fluctuate up and down a bit as you acquire more evidence, but the pieces of evidence aren't fighting each other, they're collaborating on determining what your map should look like and how confident you should be.

Yeah. So if one guy presents only evidence in favor, and the other guy presents only evidence against, they're adversaries. One guy can state a theory, show that all existing evidence supports it, and thereby have "proved" it, and then the other guy can state an even better theory, also supported by all the evidence but simpler, thereby overturning that proof.

Why would we build AGI to have "pet truths", to engage in rationalization rather than rationality, in the first place?

We wouldn't do it on purpose!

Comment author: bogus 27 March 2010 11:14:05AM *  0 points [-]

My intuition tells me that if we manage to entirely formalize correct reasoning, the result will have a sort of adversarial quality: you can "prove" statements, but these proofs can be overturned by stronger disproofs.

Game semantics works somewhat like this; a proof is formalized as an "argument" between a Proponent and an Opponent. If an extension of game semantics to probabilistic reasoning exists, it will work much like the 'theory of uncertain arguments' you mention here.

Comment author: [deleted] 27 March 2010 06:52:46AM 0 points [-]

I seem to have man-with-a-hammer syndrome, and my hammer is economics. Luckily, I'm using economics as a tool for designing stuff, not for understanding stuff; there is no One True Design the way there's a One True Truth.

Comment author: Kevin 25 March 2010 01:39:55PM 0 points [-]

Could use some comment thread ringers here: http://news.ycombinator.com/item?id=1218075

Comment author: [deleted] 25 March 2010 04:18:17AM 0 points [-]

This is what non-reductionism looks like:

In a certain world, it's possible to build stuff. For example, you can build a ship. You build it out of some ingredients, such as wood, and by doing a bit of work. The thing is, though, there's only one general method that can possibly be used to build a ship, and there are some things you can do that are useful only for building a ship. You have some freedom within this method: for example, you can give your ship 18 masts if you want to. However, the way you build the ship has literally nothing to do with the end result; whether you put 18 masts on a ship or none, it will end up with precisely the correct number. Any variation on the ship-building process gives you either exactly the same sort of ship or nothing at all.

If I lived in this world, I would conclude that a ship simply isn't made up of parts.

Comment author: Strange7 23 March 2010 01:04:30AM 0 points [-]

Let's say Omega opens a consulting service, but, for whatever reason, has sharply limited bandwidth, and insists that the order in which questions are presented be determined by some sort of bidding process. What questions would you ask, and how much would you be willing to pay per byte for the combined question and response?

Comment author: Sly 24 March 2010 09:34:20PM 0 points [-]

How many people know about this? And are games such as the lottery and sports betting still viable?

Lottery numbers / stock changes seem like the first impression answer to me.

Comment author: Strange7 24 March 2010 09:54:06PM 0 points [-]

It's public knowledge. Omega is extraordinarily intelligent, but not actually omniscient, and 'I don't know' is a legitimate answer, so casinos, state lotteries, and so on would pay exorbitant amounts for a random-number generator that couldn't be cost-effectively predicted. Sports oddsmakers and derivative brokers, likewise, would take the possibility of Omega's advice into account.

Comment author: Strange7 22 March 2010 10:27:32PM 0 points [-]

Fictional representation of an artificial intelligence which does not value self-preservation, and the logical consequences thereof.

Comment author: RobinZ 22 March 2010 09:18:35PM 0 points [-]

This will be completely familiar to most of us here, but "What Does a Robot Want?" seems to rederive a few of Eliezer's comments about FAI and UFAI in a very readable way - particularly those from Points of Departure. (Which, for some reason, doesn't seem to be included in any indexed sequence.)

The author mentions using these ideas in his novel, Free Radical - I can attest to this, having enjoyed it partly for that reason.

Comment author: Thomas 22 March 2010 07:56:39PM 0 points [-]

People gathering here mostly assume that evolution is slow and stupid, no match for intelligence at all; that a human, let alone a superintelligence, is several orders of magnitude smarter than the process which created us over the past several billion years.

Well, despite many fancy mathematical theories of packing, some of the best results have come from so-called digital evolution, where the only knowledge is that "overlapping is bad and a smaller frame is good". Everything else is random change and nonrandom selection.

Every solution previously developed by intelligence quickly evolves from scratch, "stupidly", here: http://critticall.com/SQU_cir.html
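A minimal sketch of the recipe Thomas describes, with hypothetical numbers and a 1-D stand-in for square packing (segments on a line instead of squares in a frame): the search knows only that overlap is bad and a smaller frame is good, and proceeds by random change plus nonrandom selection.

```python
import random

def fitness(positions, lengths):
    """All the 'knowledge' the search has: overlap is bad,
    a smaller frame is good. Lower is better."""
    items = sorted(zip(positions, lengths))
    overlap = sum(max(0.0, p1 + l1 - p2)
                  for (p1, l1), (p2, l2) in zip(items, items[1:]))
    frame = max(p + l for p, l in items) - items[0][0]
    return frame + 100.0 * overlap

def evolve(lengths, steps=20000, seed=0):
    rng = random.Random(seed)
    pos = [rng.uniform(0.0, 10.0) for _ in lengths]
    best = fitness(pos, lengths)
    for _ in range(steps):
        cand = list(pos)
        i = rng.randrange(len(cand))
        cand[i] += rng.gauss(0.0, 0.5)   # random change
        f = fitness(cand, lengths)
        if f <= best:                     # nonrandom selection
            pos, best = cand, f
    return best

# Segments of total length 10: a perfect packing has frame 10.
packed = evolve([1, 2, 3, 4])
```

Accepting neutral moves (`<=`) lets gaps between interior segments diffuse away; with strict improvement only, the hill-climber can stall on a plateau. The real packing problems on the linked page are 2-D and much harder, but the loop has the same shape.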

Comment author: Kevin 22 March 2010 11:08:30AM 0 points [-]

Does anyone have any spare money on In Trade? The new Osama Bin Laden contract is coming out and I would like to buy some. If anyone has some money on In Trade, I would pay a 10% premium.

Also, is there anyone here who thinks the In Trade Osama contracts are priced too highly? http://www.intrade.com/jsp/intrade/contractSearch/index.jsp?query=Osama+Bin+Laden+Conclusion

Comment author: Nisan 22 March 2010 07:25:38AM 0 points [-]

Here's a puzzle that involves time travel:

Suppose you have just built a machine that allows you to see one day into the future. Suppose also that you are firmly committed to realizing the particular future that the machine will show you. So if you see that the lights in your workshop are on tomorrow, you will make sure to leave them on; if they are off, you will make sure to leave them off. If you find the furniture rearranged, you will rearrange the furniture. If there is a cow in your workshop, you will spend the next 24 hours getting a cow into your workshop.

My question is this: What is your prior probability for any observation you can make with this machine? For example, what are the odds of the windows being open?

Comment author: Eliezer_Yudkowsky 22 March 2010 07:49:46AM 7 points [-]

Can't answer until I know the laws of time travel.

No, seriously. Is the resulting universe randomly selected from all possible self-consistent ones? By what weighting? Does the resulting universe look like the result of iteration until a stable point is reached? And what about quantum branching?

Considering that all I know of causality and reality calls for non-circular causal graphs, I do feel a bit of justification in refusing to just hand out an answer.

Comment author: cousin_it 22 March 2010 02:30:28PM *  1 point [-]

Can't answer until I know the laws of time travel.

Why is something like this an acceptable answer here, but not in Newcomb's Problem or Counterfactual Mugging?

Comment author: Vladimir_Nesov 22 March 2010 04:16:29PM 2 points [-]

Because it's clear what the intended clarification of these experiments is, but less so for time travel. When the thought experiments are posed, the goal is not to find the answer to some question, but to understand the described situation, which might as well involve additionally specifying it.

Comment author: Nick_Tarleton 22 March 2010 03:06:18PM *  1 point [-]

I can't imagine what you would want to know more about before giving an answer to Newcomb. Do you think Omega would have no choice but to use time travel?

Comment author: cousin_it 22 March 2010 04:02:46PM *  1 point [-]

No, but the mechanism Omega uses to predict my answer may be relevant to solving the problem. I have an old post about that. Also see the comment by Toby Ord there.

Comment author: Morendil 22 March 2010 03:03:04PM 0 points [-]

Because these don't involve time travel, but normal physics?

Comment author: Nick_Tarleton 22 March 2010 03:05:50PM 0 points [-]

He did say "something like this", not "this".

Comment author: Nisan 22 March 2010 10:33:52PM 0 points [-]

I could tell you that time travel works by exploiting closed time-like curves in general relativity, and that quantum effects haven't been tested yet. But yes, that wouldn't be telling you how to handle probabilities.

So, it looks like this is a situation where the prior you were born with is as good as any other.

Comment author: Alicorn 22 March 2010 07:52:16AM 3 points [-]

Why am I firmly committed to realizing the future the machine shows? Do I believe that to be contrary would cause a paradox and explode the universe? Do I believe that I am destined to achieve whatever is foretold, and that it'll be more pleasant if I do it on purpose instead of forcing fate to jury-rig something at the last minute? Do I think that it is only good and right that I do those things which are depicted, because it shows the locally best of all possible worlds?

In other words, what do I hypothetically think would happen if I weren't fully committed to realizing the future shown?

Comment author: Sniffnoy 22 March 2010 08:53:47AM 1 point [-]

Agree with the question of why you would be doing this; sounds like optimizing on the wrong thing. Supposing that it showed me having won the lottery and having a cow in my workshop, it seems silly to suppose that bringing a cow into my workshop will help me win the lottery. We can't very well suppose that we always wanted a cow in our workshop, or else the vision of the future wouldn't affect anything.

Comment author: Nisan 22 March 2010 10:05:36PM 0 points [-]

I stipulated that you're committed to realizing the future because otherwise, the problem would be too easy.

I'm assuming that if you act contrary to what you see in the machine, fate will intervene. So if you're committed to being contrary, we know something is going to occur to frustrate your efforts. Most likely, some emergency is going to occur soon which will keep you away from your workshop for the next 24 hours. This knowledge alone is a prior for what the future will hold.

Comment author: wedrifid 22 March 2010 08:14:44AM 1 point [-]

My question is this: What is your prior probability for any observation you can make with this machine? For example, what are the odds of the windows being open?

Depends on the details of the counter-factual science. Does not depend on my firm commitment.

Comment author: Nisan 22 March 2010 10:22:47PM 0 points [-]

I was thinking of a closed time-like curve governed by general relativity, but I don't think that tells you anything. It should depend on your commitment, though.

Comment author: mattnewport 22 March 2010 06:17:03AM 0 points [-]

So healthcare passed. I guess that means the US goes bankrupt a bit sooner than I'd expected. Is that a good or a bad thing?

Comment author: Kevin 22 March 2010 06:27:58AM 1 point [-]

I think you're being overly dramatic.

Nate Silver has some good numerical analysis here: http://www.fivethirtyeight.com/2009/12/why-progressives-are-batshit-crazy-to.html

I don't think that US government debt has much connection to reality any more. The international macroeconomy wizards seem to make things work. Given their track record, I am confident that the financial wizards can continue to make a fundamentally unsustainable balance sheet sustainable, at least until the Singularity.

So I think that the marginal increase in debt from the bill is a smaller risk to the stability of the USA than maintaining the very flawed status quo of healthcare in the USA.

Useful question: When does the bill go into effect? My parents' insurance is kicking me off at the end of the month, and it will be nice to be able to stay on it for a few more years.