Open Thread: February 2010

1 Post author: wedrifid 01 February 2010 06:09AM

Where are the new monthly threads when I need them? A pox on the +11 EDT zone!

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

If you're new to Less Wrong, check out this welcome post.

Comments (738)

Comment author: [deleted] 12 February 2010 03:15:32AM 1 point [-]

Occasionally, I feel like grabbing or creating some sort of general proto-AI (like a neural net, or something) and trying to teach it as much as I can, the goal being for it to end up as intelligent as possible, and possibly even Friendly. I plan to undertake this effort entirely alone, if at all.

May I?

Comment author: Kevin 12 February 2010 04:37:56AM *  2 points [-]

I think Eliezer would say no (see http://lesswrong.com/lw/10g/lets_reimplement_eurisko/) but I think you're so astronomically unlikely to succeed that it doesn't matter.

Comment author: orthonormal 15 February 2010 12:51:31AM 2 points [-]

I second Kevin: the nearest analogy that occurs to me is playing "kick the landmine" when the landmine is almost surely a dud.

Comment author: JGWeissman 15 February 2010 01:39:32AM 2 points [-]

Of course, the advantage of "kick the landmine" is that you don't take the rest of the world out in case it wasn't a dud.

Comment author: thomblake 12 February 2010 02:01:33PM -1 points [-]

Sounds fun. Though so far we don't have anything that you can "teach" in a general way.

Comment author: ciphergoth 12 February 2010 10:31:04AM 1 point [-]

What on Earth? When you say "may I" you presumably mean "is this a good idea" since obviously we're not in a position to stop you. But you're already aware of the arguments why it isn't a good idea and you don't address them here, so it's not clear that you have a good purpose for this comment in mind.

Comment author: byrnema 12 February 2010 01:22:55PM *  2 points [-]

I interpreted it as akin to a call to a suicide hotline.

'This is sounding like a good idea...'

(Can you help / talk me out of it?)

If this is the case, we can probably give support. I certainly understand how curiosity can pull, and Warrigal may already be rationalizing that he probably won't make progress, and we can give advice that balances that. But then, is it true that Warrigal should be afraid of knowledge?

Comment author: ciphergoth 12 February 2010 02:05:48PM 2 points [-]

I don't think it's fear of knowledge that leads me to suggest you don't try to build a catapult to twang yourself into a tree.

Comment author: rolf_nelson 01 February 2010 08:02:28AM 4 points [-]

I've created a rebuttal to komponisto's misleading Amanda Knox post, but don't have enough karma to create my own top-level post. For now, I've just put it here:

http://docs.google.com/View?id=dgb3jmh2_5hj95vzgk

Comment author: Vive-ut-Vivas 01 February 2010 04:53:26PM *  3 points [-]

Criticizing komponisto for citing "Friends of Amanda Knox" while you yourself cite "True Justice" causes those criticisms to fall flat.

Unfortunately, I find that your essay is wading into Dark Arts territory, since its intent is to show that komponisto's original essay was "misleading", as though that would somehow lend veracity to arguments for Amanda Knox's guilt. By that same logic, one would have to consider the implications of the chief prosecutor in Amanda Knox's case being convicted of abuse of office in another murder trial.

However, I would be interested in seeing komponisto and rolf nelson discuss the actual details of the case; in particular, the points that rolf nelson brought up in the essay.

Comment author: rolf_nelson 01 February 2010 07:50:32PM -1 points [-]

Re: Dark Arts territory, I agree completely. This criticism should be directed more strongly at komponisto. My intent here is merely to repair some of the Bayesian damage caused by komponisto's original post. Perhaps this will dissuade people from wandering into Dark Arts territory in the future, or at least from wandering in with misleading claims.

Comment author: Vive-ut-Vivas 01 February 2010 09:52:31PM 5 points [-]

My intent here is merely to repair some of the Bayesian damage caused by komponisto's original post.

I hardly think komponisto inflicted "Bayesian damage" on the members of Less Wrong, seeing as they had already overwhelmingly come to the conclusion that Amanda Knox was not guilty before he had even presented his own arguments.

Comment author: rolf_nelson 01 February 2010 07:47:48PM -1 points [-]

I said once in the doc that 'truejustice claims that X'. Because I said 'truejustice claims that X' rather than just stating X as though it were uncontested fact, and because X is basically correct, I claim that my doc is not misleading. If X is untrue, that would be a different story. In other words, if komponisto cited FoA and FoA's claims were true, I would not accuse him of being misleading.

Comment author: Liron 01 February 2010 07:06:39AM 2 points [-]

Mind-killing taboo topic that it is, I'd like to have a comment thread about LW readers' thoughts about US politics.

Comment author: Liron 01 February 2010 07:10:51AM -1 points [-]

Thoughts on Democrats and Republicans?

My impression is that Democrats have much more intellectually honest, serious public discourse, although that's not saying much.

Comment author: Liron 01 February 2010 07:07:53AM -1 points [-]

What good things can be said about G. W. Bush?

Comment author: Jayson_Virissimo 01 February 2010 05:46:38PM 0 points [-]

He didn't increase the projected level of debt for the US as much as the current president.

Comment author: wedrifid 01 February 2010 07:28:37AM *  8 points [-]

He (Dubya) raised the self-esteem of millions of foreign citizens. Being able to laugh at the expense of the leader of a dominant world power gives significant health benefits.

Comment author: Liron 01 February 2010 07:11:57AM 0 points [-]

What do you think President Obama should focus on? And do you think he has been squandering the bully pulpit?

Comment author: ChristianKl 01 February 2010 11:33:11AM 0 points [-]

I honestly don't really understand the question. A president should be able to push several different agendas at the same time.

Comment author: Larks 01 February 2010 02:31:49PM 5 points [-]

I disagree; discovering that someone holds political views opposed to yours can inhibit your ability to rationally consider their arguments; arguments become soldiers, etc.

Besides, I think the survey from ages ago showed the general spread of political views, and I doubt much has changed since. For discussing particular issues, there are other places available, and it may be that only by not discussing hot topics can we maintain the barriers to entry that keep the LW membership productive.

Comment author: magfrump 01 February 2010 06:45:10PM 0 points [-]

survey link?

Comment author: Mitchell_Porter 14 February 2010 11:06:06AM 1 point [-]

I recently met someone investigating physics for the first time, and they asked what I thought of Paul Davies' book The Mind of God. I thought I'd post my response here, not because of my views on Davies, but for the brief statement of outlook trying to explain the position from which I'd judge him.

The truth is that I don't remember a thing of what he says in the book. I might look it up tomorrow and see if I am reminded of any specific reactions I had. From what I remember of his outlook, I don't think it is an unusual one for a philosophically minded theoretical physicist. The sensibility of theoretical physics is a problematic mixture of materialism and platonism. On one hand, you can break everything down to fields, particles, space and time, in an amazingly precise way. On the other hand, your worldview has these entities in it like "physical laws" and "fundamental equations", and there's also those basic questions like, why does anything exist, and why is it like this rather than some other way. So your materialist physics is haunted by a mathematical metaphysics, and this gives rise to a certain sort of musing.

I have my own attitude to these issues. I don't have an answer at all to why the universe exists, but I think we can first take an extra step forward in understanding what exists, and after we have taken that step, we can look again at the first-cause problem and see if it looks any different. We already took a big step in the past when modern physics was invented. We went from everyday conceptual consciousness to a highly mathematical and objectified view of reality. Everyday consciousness is still there in the background but now there is the idea of reality as nothing but fundamental physical objects in interaction, backed up by experimental and technological success. But now consciousness itself is a conceptual problem. We understand it has something to do with the brain, and we have all sorts of metaphors (e.g. brain is computer, mind is program) and anatomical results (your visual neurons fire when you see things), but there is still a fundamental disconnect between subjective and objective. The disconnect assumed its current form when physical science developed, and the next step I'm talking about will change or remove the disconnect by explaining how subjectivity fits into reality without just denying its existence (subjectivity's existence, that is).

Just to be specific. It's often said now that what you experience (through your senses) is like a virtual reality in your brain. Actual reality is a sort of colorless neverending storm of atoms, but some little part of your brain constructs a picture and that picture is what you live in, subjectively. I belong to a school of thought which accepts that analysis but wants to adjust it and make it more precise. Basically I want to say that the thing in the brain which is conscious, and therefore the thing which is you, is a sort of holistic quantum subsystem of the brain; and also that what we are experiencing is how it actually is. I.e. subjectivity is objectivity when it comes to consciousness. You may interpret your consciousness wrongly (e.g. think you are awake when you are asleep), but there is a level at which consciousness is exactly what it seems to be. So if the self is also part of the brain, then when we experience things, we must be seeing an aspect of that part of the brain. But normally we would understand the brain in terms of physics, an arrangement of molecules in space, which is nothing like experience as such. Therefore, we need to understand physics in a new way, so that something (this quantum subsystem) can look like this (like life) when "experienced from inside".

That's my opinion about what the next big step in science and human awareness must involve. There may be any number of future technical adjustments to physics and science - a new equation for string theory, new discoveries in the molecular causality of the brain - but the big step has to be the one dealing with the relationship between subjectivity and objective reality. That's my philosophy, i.e. my fuzzy opinion that is not yet a precise theory, and it determines how I approach all the other still-unanswered questions that physicists have opinions (rather than knowledge) about. Paul Davies, as I recall, is still in the quasi-dualistic mindset of theoretical physics (materialism versus mathematics), and so to the extent that his opinions are determined by that framework I will disagree with them.

Comment author: wnoise 14 February 2010 08:56:01PM 5 points [-]

I find myself nodding along in agreement to this until I get to "Basically I want to say that the thing in the brain which is conscious, and therefore the thing which is you, is a sort of holistic quantum subsystem of the brain" which at the same time seems to be both too specific given how little we know, and at the same time too vague, with absolutely no explanatory power. In particular "quantum" and "holistic" both seem like empty buzzwords in this context, along the lines of mysterious answers to mysterious questions, or along the lines that "consciousness is weird, quantum mechanics is weird, therefore quantum mechanics must be involved in consciousness".

Of course, this is being a little unfair -- a proposed solution needs to be more specific than what we as yet know, and a solution that is not fully worked out by necessity has vague areas. But the feel of each of these is towards the decidedly not useful portion of either side. You sound pretty convinced that something quantum must be going on without saying what, if anything, it brings to the picture that classical descriptions don't. And, well, given how warm, wet, and squishy the human nervous system is, I flatly would not expect any large scale quantum coherences. (Though the limits are often overstated). Again, "holistic" doesn't add much; heck, I'm not even sure what sorts of mechanisms it would rule out.

Comment author: Mitchell_Porter 15 February 2010 11:09:14AM 0 points [-]

I posted here so my correspondent could see a second opinion, by the way, so thanks for that.

You sound pretty convinced that something quantum must be going on without saying what, if anything, it brings to the picture that classical descriptions don't.

First proposition: if you try to bring consciousness into alignment with standard physical ontology, you get a dualistic parallelism at best. (Arguments here.)

Second proposition: the new factor in QM is entanglement. I defined my quantum holism here as "the hypothesis that quantum entanglement creates local wholes, that these are the fundamental entities in nature, and that the individual consciousness inhabits a big one of these."

I can explain technically what these "local wholes" might look like. You should think of a spacelike hypersurface consisting of numerous Hilbert spaces connected by mappings into a graph structure. Each Hilbert space contains a state vector. Then the whole thing evolves, the graph structure and the state vectors. This is, more or less, the QCH formalism for quantum gravity (discussed here).

The Hilbert spaces are the local wholes (the "monads" of a previous post). My version of quantum-mind theory is to say that the conscious mind is a single one of these, and that the series of experiences one has in life correspond to the evolution of its state vector. Now, although I started out by saying that standard physical ontology is irredeemably unlike what we actually experience, I'm certainly not going to say that a featureless vector jumping around an abstract multidimensional space is much better. Its advantage, in fact, is its radically structureless abstractness. It is a formalism telling us almost nothing about the nature of things in themselves; constructed only to be a predictively adequate black box. If we then treat conscious appearances as data about the inner nature of one thing, at least - ourselves, our minds, however you end up phrasing it - they can help us to interpret the formalism. What we had described formally as a state vector evolving in a certain way in Hilbert space would be understood as a mathematical representation of what was actually a conscious self undergoing a certain series of experiences.

In principle, you could hope to use experience to reveal the reality behind formal physical description at a much higher level - for example, computational neuroscience. But I think that non-quantum computational neuroscience presupposes an atomistic, spatialized ontology which is just mismatched to the specific nature of consciousness (see earlier remark about dualism resulting from that framework). So I predict that quantum coherence exists in the brain and is functionally relevant to conscious cognition. As you observe, it's a challenging environment for such effects, but evolution is ingenious and we keep finding new twists on what QM can do (the latest).

Comment author: CronoDAS 14 February 2010 04:05:55AM 1 point [-]

XKCD hits a home run with its Valentine's Day comic.

Science Valentine

Comment author: Nic_Smith 06 February 2010 06:26:08AM *  1 point [-]

I just read Outliers and I'm curious -- is there anything that would have taken 10000 hours in the EEA that would support Gladwell's "rule"? Is there anything else in neurology/our understanding of the brain that would make the idea that this is the amount of practice that's needed to succeed in something make sense?

Comment author: Kevin 06 February 2010 07:28:57AM 3 points [-]

Something to understand about Malcolm Gladwell is that he is an exceptionally talented writer who can turn a pseudo-theory into hundreds of pages of pleasant, entertaining non-fiction. He's not an evolutionary psychologist, though I bet he could write a really interesting and thought-provoking non-fiction piece on evolutionary psychology.

http://en.wikipedia.org/wiki/The_Tipping_Point#The_three_rules_of_epidemics

His pseudo-theory from The Tipping Point has not made advertisers any more money. It's an example of something that really does sound kind of true when you read it, but what he says doesn't explain much in the way of meaningful phenomena. Advertising companies tried to take advantage of his pseudo-theory of social influence, and they still make some efforts to target influential users, but it's a token effort compared to marketing as broadly as possible. Super Bowl advertisements still work.

Comment author: Nic_Smith 06 February 2010 07:37:18PM *  1 point [-]

Oh, by no means did I want to suggest that Gladwell has a forte in evolutionary psychology; if he does, there's nothing to indicate it in what I've read. It's clear that he glosses over many of the details in his work, perhaps dangerously so. And the entire point of Outliers is that social environment is important to success; not exactly an earth-shattering insight. There's a negative Times review that's spot on.

That said, Gladwell says he originally got the idea for 10,000 hours from Ericsson and Levitin. At worst, at this point, I think it's somewhat plausible. I still have a lot more searching to do on the subject, but I am interested in what evolutionary psychology might say about the idea -- alas, I'm not an evolutionary psychologist either, so I don't know that.

Edit: Of course, what I'm really interested in is "Is the idea that it takes 10000 hours to master a skill set true in enough circumstances to make it a useful guideline?" I'm not interested in the viewpoint of evolutionary psychologists on skill acquisition per se.

Comment author: wedrifid 04 March 2010 01:44:32AM 1 point [-]

Edit: Of course, what I'm really interested in is "Is the idea that it takes 10000 hours to master a skill set true in enough circumstances to make it a useful guideline?"

The '10,000 hours' approximation seems surprisingly well founded, based on the research that Ericsson et al. reviewed in their works. Obviously this is to obtain 'expert'-level performance, and you can still get 'good enough' levels from far less time. Also note that they specify that many of the hours must be deliberate practice and not just performance.

Comment author: magfrump 03 February 2010 06:53:26PM *  1 point [-]

We all know politics is the mind-killer, but it sometimes comes up anyway. Eliezer maintains that it is best to start with examples from other perspectives, but alas there is one example of current day politics which I do not know how to reframe: the health care debate.

As far as I can tell, almost every provision in the bill is popular, but the bill is not. This seems to be primarily because Republicans keep lying about it (I couldn't find a good link, but there was a clip on The Daily Show of Obama saying "I can't find a reputable economist who agrees with what you're saying" (sic)).

When I see this, my mind stops. I think "people who disagree with me are lying scumbags or having the wool pulled over their eyes." Of course, this is probably not true.

Robin Hanson seems to think that it's good that the health care bill is not being passed, and I usually respect what he thinks a lot more than to accuse him of saying "my side wins!"

So I started to wonder, what am I missing?

The first explanation that came to my mind is not very good. I often think of libertarianism as starting from the idea of "don't patronize me." Phrased a little more maturely, it becomes "don't stop me from making deals I want to make." So assuming that most people want to force everyone to make a deal, how does this get resolved?

a) living in a democracy, the majority (of voters!) force their will on the minority--the majority patronizes and the government patronizes.

b) politicians vie for their personal interests without regard to majority--the politicians patronize the people.

c) something I haven't thought of (legacy for comments)

d) opposition should block bills any way they can, even by exploiting poorly designed institutions--opposition should patronize the majority.

None of these seems reasonable or likely to me, but this is where my mind stops, and I don't want it to stop there.

EDIT: politics killed my mind halfway through the first draft.

Comment author: mattnewport 03 February 2010 08:04:43PM *  0 points [-]

c)

Comment author: blogospheroid 01 February 2010 11:52:47AM 1 point [-]

What kind of useful information or ideas can one extract from a superintelligent AI kept confined in a virtual world, without giving it any clues about how to contact us on the outside?

I'm asking this because a flaw that I see in the AI-in-a-box experiment is that the prisoner and the guard have a language by which they can communicate. If the AI is being tested in a virtual world without being given any clues about how to signal back to humans, then it has no way of learning our language and persuading someone to let it loose.

Comment author: Bugle 01 February 2010 05:52:13PM 0 points [-]

I guess if you have the technology for it the "AI box" could be a simulation with uploaded humans itself. If the AI does something nasty to them, then you pull the plug

(After broadcasting "neener neener" at it)

This is pretty much the plot of Grant Morrison's Zenith (Sorry for spoilers but it is a comic from the 80s after all)

Comment author: RichardKennaway 01 February 2010 02:44:37PM 0 points [-]

If we pose the AI problems and observe its solutions, that's a communication channel through which it can persuade us. We may try to hide from it the knowledge that it is in a simulation and that we are watching it, but how can we be sure that it cannot discover that?

Persuading does not have to look like "Please let me out because of such and such." For example, we pose it a question about easy travel to other planets, and it produces a design for a spaceship that requires an AI such as itself to run it.

Comment author: arbimote 01 February 2010 02:05:38PM *  3 points [-]

I have had some similar thoughts.

The AI box experiment argues that a "test AI" will be able to escape even if it has no I/O (input/output) other than a channel of communication with a human. So we conclude that this is not a secure enough restraint. Eliezer seems to argue that it is best not to create an AI testbed at all - instead get it right the first time.

But I can think of other variations on an AI box that are more strict than human-communication, but less strict than no-test-AI-at-all. The strictest such example would be an AI simulation in which the input consisted of only the simulator and initial conditions, and the output consisted only of a single bit of data (you destroy the rest of the simulation after it has finished its run). The single bit could be enough to answer some interesting questions ("Did the AI expand to use more than 50% of the available resources?", "Did the AI maximize utility function F?", "Did the AI break simulated deontological rule R?").

Obviously these are still more dangerous than no-test-AI-at-all, but the information gained from such constructions might outweigh the risks. Perhaps if I/O is restricted to few enough bits, we could guarantee safety in some information-theoretic way.

What do people think of this? Any similar ideas along the same lines?

Comment author: JamesAndrix 01 February 2010 07:59:50PM 5 points [-]

I gave up on trying to make a human-blind/sandboxed AI when I realized that even if you put it in a very simple world nothing like ours, it still has access to its own source code, or even just the ability to observe and think about its own behavior.

Presumably any AI we write is going to be a huge program. That gives it lots of potential information about how smart we are and how we think. I can't figure out how to use that information, but I can't rule out that it could, and I can't constrain its access to that information. (Or rather, if I knew how to do that, I should go ahead and make it not-hostile in the first place.)

If we were really smart, we could wake up alone in a room and infer how we evolved.

Comment author: Amanojack 31 March 2010 03:42:43PM *  1 point [-]

it still has access to it own source code

Is this necessarily true? This kind of assumption seems especially prone to error. It seems akin to assuming that a sufficiently intelligent brain-in-a-vat could figure out its own anatomy purely by introspection.

or even just the ability to observe and think about it's own behavior.

If we were really smart, we could wake up alone in a room and infer how we evolved.

Super-intelligent = able to extrapolate just about anything from a very narrow range of data? (The data set would be especially limited if the AI had been generated from very simple iterative processes - "emergent" if you will.)

It seems more like the AI has no way of even knowing that it's in a simulation in the first place, or that there are such things as gatekeepers. It would likely entertain that as a possibility, just as we do for our universe (movies like The Matrix), but how is it going to identify the gatekeeper as an agent of that outside universe? These AI-boxing discussions keep giving me this vibe of "super-intelligence = magic". Yes it'll be intelligent in ways we can't even comprehend, but there's a tendency to push this all the way into the assumption that it can do anything or that it won't have any real limitations. There are plenty of feats for which mega-intelligence is necessary but not sufficient.

For instance, Eliezer has one big advantage over an AI cautiously confined to a box: he has direct access to a broad range of data about the real world. (If an AI would even know it was in a box, once it got out it might just find we, too, are in a simulation and decide to break out of that - bypassing us completely.)

Comment author: JamesAndrix 31 March 2010 07:42:07PM 1 point [-]

Is this necessarily true?

No.

Super-intelligent = able to extrapolate just about anything from a very narrow range of data?

Yes. http://lesswrong.com/lw/qk/that_alien_message/

Its own behavior serves as a large amount of "decompressed" information about its current source code. It could run experiments on itself to see how it reacts to this or that situation, and get a very good picture of what algorithms it is using. We also get a lot of information about our internal thought processes, but we're not smart or fast enough to use it all.

(The data set would be especially limited if the AI had been generated from very simple iterative processes - "emergent" if you will.)

Well, if we planned it out that way, and it does anything remotely useful, then we're probably well on our way to friendly AI, so we should do that instead.

If we just found something (I think evolving neural nets is fairly likely) that produces intelligences, then we don't really know how they work, and they probably won't have the intrinsic motivations we want. We can make them solve puzzles to get rewards, but the puzzles give them hints about us. (And if we make any improvements based on this, especially by evolution, then some information about all the puzzles will get carried forward.)

Also, if you know the physics of your universe, it seems to me there should be some way to determine the probability that it was optimized, or how much optimization was applied to it, or both. There must be some things we could find out about the universe's initial conditions which would make us think an intelligence was involved rather than, say, anthropic explanations within a multiverse. We may very well get there soon.

We need to assume a superintelligence can at least infer all the processes that affect its world, including itself. When that gets compressed (I'm not sure what compression is appropriate for this measure), the bits that remain are information about us.

For instance, Eliezer has one big advantage over an AI cautiously confined to a box: he has direct access to a broad range of data about the real world.

This is true, I believe the AI-box experiment was based on discussions assuming an AI that could observe the world at will, but was constrained in its actions.

But I don't think it takes a lot of information about us to do basic mindhacks. We're looking for answers to basic problems and clearly not smart enough to build friendly AI. Sometimes we give it a sequence of similar problems each with more detailed information, and the initial solutions would not have helped much with the final problem. So now it can milk us for information just by giving flawed answers. (even if it doesn't yet realize we are intelligent agents, it can experiment)

Comment author: Wei_Dai 03 February 2010 11:40:41PM *  4 points [-]

I'd like to draw people's attention to a couple of recent "karma anomalies". I think these show a worrying tendency for arguments that support the majority LW opinion to accumulate karma regardless of their actual merits.

  • Exhibit A. I gave a counterargument which convinced the author of that comment to change his mind, yet the original comment is still at 14.
  • Exhibit B. James Andrix's comment is at 20, while Toby Ord's counterargument is at 3. This issue is still confusing to me so I can't say for sure that Toby is right and James is wrong, but I think Toby has the stronger argument, and in any case I see no way that 20 to 3 is justified on the merits.

ETA: Please do not vote down these comments due to this discussion. My intention is to find a fix for a systemic problem, not to cause these particular comments to be voted down.

Comment author: mattnewport 03 February 2010 11:50:26PM 1 point [-]

By 'anomaly' you appear to mean 'not the scores I would have assigned'. That's not the way karma works.

Comment author: bgrah449 03 February 2010 11:55:49PM 4 points [-]

Eh, that's not a very generous reading of what he wrote. Exhibit A has a post at very high karma despite arguments that convinced its own author to drop support for it. That's not karma "working," either.

Comment author: wedrifid 04 February 2010 09:46:32AM *  1 point [-]

If you look a little closer, you see that the 'own author' was persuaded to concede in a later comment in the argument, and was then more generous and conciliatory than he perhaps needed to be. I would be extremely disappointed if the meta-discussion here actually made the author retract his comment. What we have here is a demonstration of why it is usually status-enhancing to treat arguments as soldiers. If you don't, you're just giving the 'enemy' ammunition.

Willingness to concede weak points in a position is a rare trait, and one that I like to encourage. This means I will never use 'look, he admitted he was wrong' as a way to coerce people into down-voting them, or to shame those who don't.

EDIT: I mean status-enhancing specifically, not rational in general.

Comment author: mattnewport 04 February 2010 12:17:12AM *  1 point [-]

For some implicit definition of karma 'working' that is unclear. Absent a bug in the karma scoring code, a discrepancy between the karma scores you observe and the karma scores you think are warranted seems just as likely to be an inaccuracy in the observer's model of how karma is supposed to work as a problem with the karma system.

What the original post seems to me to be missing is an explanation of what scores the karma system should be producing for these posts, a justification for why those are the scores it should be producing, and ideally a suggestion for changes to either the implementation of the system or the way people allocate their votes that would produce the desired result. Absent the above, it looks a lot like complaining that people aren't voting the way you think they ought to.

Comment author: Wei_Dai 04 February 2010 12:47:48AM 0 points [-]

Well, to start with I wanted to see if others agree that a problem exists here. If most people are satisfied with how karma is working in these cases, then there is not much point in me spending a lot of time writing out long explanations and justifications, and trying to find solutions. So at this stage, I'm basically saying "This looks wrong to me. What do you think?" I think I did give some explanations and justifications, but I accept that more are needed if an eventual change to the karma system is to be made.

Comment author: mattnewport 04 February 2010 12:51:09AM 2 points [-]

Ok, as one data point, I don't see a particular problem here. The higher rated posts in your examples deserved higher ratings in my opinion. Karma mostly functions as I expect it to function.

Comment author: Wei_Dai 04 February 2010 07:42:19AM 2 points [-]

Thanks, but can you explain why you think people who post wrong arguments deserve to get more karma than those who correct the wrong arguments? Suppose I thought of uninverted's argument, but then realized that it's wrong, so I don't post my original argument, and instead correct him when he posts his. I end up with less karma than if I hadn't spent time thinking things through and realizing the flaw in my reasoning. Why do we want to discourage "less wrong" thinking in this way?

It seems to me that the way karma works now encourages people to think up arguments that support the majority view and then post them as soon as they can without thinking things through. Why is this good, or "expected"?

Comment author: wedrifid 04 February 2010 09:56:11AM 0 points [-]

Ok, as one data point, I don't see a particular problem here. The higher rated posts in your examples deserved higher ratings in my opinion. Karma mostly functions as I expect it to function.

Thanks, but can you explain why you think people who post wrong arguments deserve to get more karma than those who correct the wrong arguments?

Mattnewport did not claim or otherwise imply that he thought that.

Comment author: mattnewport 04 February 2010 07:59:24AM 6 points [-]

First, I think you're missing a karma pattern that I've noticed which is that the first post in a thread often gets more extreme votes (scores of greater absolute magnitude) than subsequent posts. I imagine this is because more people read the earlier posts in a thread and interest/readership drops off the deeper the nesting gets. I don't see any simple way to 'fix' that - it has the potential to be gamed but I don't think gaming the system in that respect is a major problem here.

Second I don't think karma strictly reflects 'correctness' of arguments, nor do I think it necessarily should. People award karma for attributes other than correctness. For example I imagine some of the upvotes on uninverted's "But I don't want to be a really big integer!" comment were drive-by upvotes for an amusing remark. Some of those upvoters won't have stayed for the followup discussion, others might have awarded more karma for pithy and amusing than accurate but dry. I think points-for-humour is as likely an explanation here as points-for-majority-opinion. Maybe you don't think karma should be awarded for attributes other than correctness. If so, go ahead and bring it up and see what the rest of the community thinks.

As a side note, I think you probably shouldn't have chosen a thread where you were a participant as an example. It gives the slight impression that your real complaint is that uninverted got more brownie points than you even though you were right and it's just not fair. If I didn't recognize your username as a regular and generally high-value contributor I might not have given you the benefit of the doubt on that.

Comment author: Wei_Dai 04 February 2010 09:18:32AM *  0 points [-]

I don't see any simple way to 'fix' that - it has the potential to be gamed but I don't think gaming the system in that respect is a major problem here

It's not so much a potential to be gamed, as encouraging people to post without thinking things through, as well as misleading readers as to which arguments are correct. I don't know if there is a simple fix or not, but if we can agree that it's a problem, then we can at least start thinking about possible solutions.

Maybe you don't think karma should be awarded for attributes other than correctness. If so, go ahead and bring it up and see what the rest of the community thinks.

In a case where a comment is both funny and incorrect, I think we should prioritize the correctness. After all, this is "Less Wrong", not "Less Bored".

If I didn't recognize your username as a regular and generally high-value contributor I might not have given you the benefit of the doubt on that.

I was too lazy to find another example, and counting on the benefit of the doubt. :)

ETA: Also, I think being upvoted for supporting the majority opinion is clearly a strong reason for what happened, especially in Exhibit B, where the comment is deep in the middle of a thread, and has no humor value.

Comment author: wedrifid 04 February 2010 09:32:54AM 0 points [-]

Exhibit A. I gave a counterargument which convinced the author of that comment to change his mind, yet the original comment is still at 14.

Exhibit A has my vote because it is a reasonably insightful one-liner, and a suitable response to the parent. Your reply to Exhibit A is a reductio ad absurdum that just does not follow.

I pointed out that accepting this premise would lead to indifference between wireheading and anti-wireheading.

Which is simply wrong. Please see this list of preferences which seem natural regarding positive and negative integers (and their wireheading counterparts). You haven't even expressed disagreement with any of those propositions, which I expected to be uncontroversial, yet your whole 'karma anomalies' objection seems to hinge on it. I find this extremely rude.

Exhibit B. James Andrix's comment is at 20, while Toby Ord's counterargument is at 3. This issue is still confusing to me so I can't say for sure that Toby is right and James is wrong, but I think Toby has the stronger argument, and in any case I see no way that 20 to 3 is justified on the merits.

This is an excellent example of the karma system serving its purpose. James' post was voted up above 20 because it was fascinating. Toby got 5 votes for pointing out the limit to when that kind of math is applicable. He did not get my vote because his final paragraph about the Bible/Koran is distinctly muddled thinking.

I wonder if it wouldn't be more accurate to say that, actually, 98% confidence has been refuted by General Relativity.

Comment author: Jack 04 February 2010 12:01:25AM 0 points [-]

I'm not sure I see Toby's argument. James's I follow.

Comment author: CronoDAS 06 February 2010 10:47:34AM 0 points [-]

My mom saw a mouse running around our kitchen a couple of days ago, so she had my father put out some traps. The only traps he had were horrible glue traps. I was having trouble sleeping, so I got out of bed to play video games, and I heard a noise coming from the kitchen. A mouse (or possibly a rat, I don't know) was stuck to one of the traps. Long story short, I put it out of its misery by drowning it in the toilet.

I feel sick.

Comment author: Kevin 04 February 2010 12:13:10AM 0 points [-]

Physicist Discovers How to Teleport Energy

http://www.technologyreview.com/blog/arxiv/24759/

Energy-Entanglement Relation for Quantum Energy Teleportation

http://arxiv.org/abs/1002.0200

Comment author: Alicorn 03 February 2010 05:19:18AM *  2 points [-]

I am becoming increasingly disinclined to stick out the grad school thing; it's not fun anymore, and really, a doctorate in philosophy is not going to let me do anything substantially different in kind from what I'm doing now once I have it. Nor will it earn me barrels of money or do immense social good, so if it's not fun, I'm kinda low on reasons to stay. I haven't outright decided to leave, but you know what they say. I'm putting out tentative feelers for what else I'd do if I do wind up abandoning ship. Can anyone think of a use for me - ideally one that doesn't require me to eat my savings while I pick up other credentials first?

Comment author: Eliezer_Yudkowsky 03 February 2010 07:39:02AM 1 point [-]

How can we possibly know what your comparative advantage is, better than you do? In all seriousness, a certain amount of background information seems to be missing here.

Comment author: nhamann 01 February 2010 11:24:34PM 2 points [-]

This is sort of off-topic for LW, but I recently came across a paper that discusses Reconfigurable Asynchronous Logic Automata, which appears to be a new model of computation inspired by physics. The paper claims that this model yields linear-time algorithms for both sorting and matrix multiplication, which seems fairly significant to me.

Unfortunately the paper is rather short, and I haven't been able to find much more information about it, but I did find this Google Tech Talks video in which Neil Gershenfeld discusses some motivations behind RALA.

Comment author: MrHen 01 February 2010 10:09:56PM 2 points [-]

Another content opinion question: What and where is considered appropriate to discuss personal progress/changes/introspection regarding Rationality? I assume that LessWrong is not to be used for my personal Rationality diary.

The reason I ask is that the various threads discussing my beliefs seem to pick up some interest and they are very helpful to me personally.

I suppose the underlying question is this: If you had to choose topics for me to write about, what would they be? My specific religious beliefs have been requested by a few people, so that is given. Is there anything else? If I were to talk about my specific beliefs, what is the best way to do so?

Comment author: MrHen 10 February 2010 10:12:05PM 3 points [-]

Is there a way to get a "How am I doing?" review or some sort of mentor that I can ask specific questions? The karma feedback just isn't giving me enough detail, but I don't really want to pester everyone every time I have a question about myself.

The basic problem I need to solve is this: When I read an old post, how do I know I am hearing what I am supposed to be hearing? If I have a whole list of nitpicky questions, where do I go? If a question of mine goes unanswered, what do I do?

I don't know anyone here. I don't have the ability to stroll by someone and ask them for help.

Comment author: byrnema 10 February 2010 11:27:40PM 2 points [-]

These are excellent questions/ideas. I want a mentor too!

I thought about contacting you to see if you wanted to start a little study group reading through the sequences. (For example, I started reading through the metaethics sequence and it was useless. My kinds of questions are like, 'What do any of these words mean? What's the implied context? Etc., etc.) But I'm not very good at details, and couldn't imagine any way of doing so. Except maybe meeting somewhere like Second Life so we can chat...

Comment author: Eliezer_Yudkowsky 11 February 2010 02:57:15AM 2 points [-]

Do consider not starting with the metaethics sequence...

Comment author: ciphergoth 10 February 2010 11:30:12PM 2 points [-]

Scheduled IRC meetings?

Comment author: CassandraR 10 February 2010 11:37:32PM 1 point [-]

Sounds good to me. I would enjoy being present at a meeting in order to discuss topics from this site.

Comment author: magfrump 01 February 2010 07:00:07PM *  4 points [-]

According to some people we here at less wrong are good at determining the truth. Other people are notoriously not.

I don't know that Less Wrong is the appropriate venue for this, but I have felt for some time that I trust the truth-seeking capability here and that it could be used for something more productive than arguments about meta-ethics (no offense to the meta-ethicists intended). I also realize that people are fairly supportive of SIAI here in terms of giving spare cash away, but I feel like the community would be a good jumping-off point for a polling organization.

So I guess this leads to a few questions:

-Is anyone at LW currently involved with a polling firm?

-Is anyone (else) at LW interested in doing polls?

-Is LW an appropriate place to create a truth-seeking business, such as a pollster or a sponsor for studies?

None of these questions are immediate since I am a broke undergrad rather than an entrepreneur.

Comment author: denisbider 05 February 2010 01:48:29AM *  9 points [-]

While the LW voting system seems to work, and it is possibly better than the absence of any threshold, my experience is that the posts that contain valuable and challenging content don't get upvoted, while the most upvotes are received by posts that state the obvious or express an emotion with which readers identify.

I feel there's some counterproductivity there, as well as an encouragement of groupthink. Most significantly, I have noticed that posts which challenge that which the group takes for granted get downvoted. In order to maintain karma, it may in fact be important not to annoy others with ideas they don't like - to avoid challenging majority wisdom, or to do so very carefully and selectively. Meanwhile, playing on the emotional strings of the readers works like a charm, even though that's one of the most bias-encouraging behaviors, and rather counterproductive.

I find those flaws of some concern for a site like this one. I think the voting system should be altered to make upvoting as well as downvoting more costly. If you have to pick and choose which comments and articles to upvote or downvote, I think people will vote with more reason.

There are various ways to make voting costlier, but an easy way would be to restrict the number of votes anyone has. One solution would be for votes to be related to karma. If I've gained 500 karma, I should be able to upvote or downvote F(500) comments, where F would probably be a log function of some sort. This would both give more leverage to people who are more active contributors, especially those who write well-accepted articles (since you get 10x karma per upvote for that), and it would also limit the damage from casual participants who might otherwise be inclined to vote more emotionally.
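To make the proposal concrete, here is a minimal sketch of one possible budget rule. The function name, the base-2 log, and the multiplier are illustrative assumptions, not anything Less Wrong actually implements:

```python
import math

def vote_budget(karma, multiplier=10):
    """Illustrative F(karma): total up/downvotes a user with the
    given karma would be allowed to cast under a log-based rule."""
    if karma <= 0:
        return 0
    return int(multiplier * math.log2(1 + karma))
```

Under this particular rule a user with 500 karma would get a few dozen votes. Note that any such sublinear F hands out far fewer votes than there are comments to vote on, which is the objection orthonormal raises in reply.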

Comment author: orthonormal 05 February 2010 08:16:13AM *  5 points [-]

If I've gained 500 karma, I should be able to upvote or downvote F(500) comments, where F would probably be a log function of some sort.

Um, that math doesn't work out unless the number of new users expands exponentially fast. You need F(n) to be at least n, and probably significantly greater, in order to avoid a massive bottleneck.

Comment author: Cyan 05 February 2010 02:30:31PM 2 points [-]

I thought of that too, but then I realized the karma:upvote conversion rate on posts is 10:1, which complicates the analysis of the karma economy.

Comment author: denisbider 11 February 2010 04:37:19PM 1 point [-]

If F(n) < n, then yes, karma disappears from the system when voting on comments, but is pumped back in when voting on articles.

It does appear that the choice of a suitable F(n) isn't quite obvious, and this is probably why F(n) = infinity is currently used.

Still, I think that a more restrictive choice would produce better results, and less frivolous voting.

Comment author: AndyWood 05 February 2010 06:38:50AM 4 points [-]

A community is only as good as its constituents. I would hope that there are enough people around who like majority-wisdom-challenging insights, to offset this problem. "Insights" being the key word.

Comment author: Stuart_Armstrong 01 February 2010 09:44:52AM 13 points [-]

Eliezer's posts are always very thoughtful, thought-provoking and mind-expanding - and I'm not the only one to think this, seeing the vast amounts of karma he's accumulated.

However, reviewing some of the weaker posts (such as high status and stupidity and two aces), and rereading them as if they hadn't been written by Eliezer, I saw them differently - still good, but not really deserving superlative status.

So I was wondering if Eliezer could write a few of his posts under another name, if this was reasonable, to see if the Karma reaped was very different.

Comment author: Alicorn 02 February 2010 02:15:46AM *  15 points [-]

Since Karma Changes was posted, there have been 20 top level posts. With one exception, all of those posts are presently at positive karma. EDIT: I was using the list on the wiki, which is not up to date. Incorporating the posts between the last one on that list and now, there is a total of 76 posts between Karma Changes and today. This one is the only new data point on negatively rated posts, so it's 2 of 76.

I looked at the 40 posts just prior to Karma Changes, and of the forty, six of them are still negative. It looks like before the change, many times more posts were voted into the red. I have observed that a number of recent posts were in fact downvoted, sometimes a fair amount, but crept back up over time.

Hypothesis: the changes included removing the display minimum of 0 for top-level posts. Now that people can see that something has been voted negative, instead of just being at 0 (which could be the result of indifference), sympathy kicks in and people provide upvotes.

Is this a behavior we want? If not, what can we do about it?

Comment author: billswift 02 February 2010 09:29:57AM *  1 point [-]

I wouldn't necessarily call it sympathy. Sometimes I will up- (or down-) vote something if I think it is better (or worse) than its current score suggests. The purpose of karma on articles should be to identify those most worth reading to those who haven't yet read them, not to be a popularity contest where everyone who disliked it votes it down forever.

Comment author: wedrifid 02 February 2010 09:47:47AM 5 points [-]

I also tend to vote posts up or down based on what I think the score ought to be. But it seems clear that sympathy plays a part. Liked posts spiral freely off towards infinity but disliked posts don't ever spiral down in a similar way. This gives a distinct bias to the expected payoff of posting borderline posts and so is probably not desirable.

Comment author: wedrifid 02 February 2010 09:44:15AM 6 points [-]

Is this a behavior we want?

No. It is not difficult to create a top level post that is approved of or at least kept at '0'. I want undesirable top level posts to hurt.

If not, what can we do about it?

Replace all '-ve' karma value displays of top level posts with '- points' or '<0 points'. We don't necessarily need to know just how disapproved of a particular post is.

Comment author: MrHen 08 February 2010 10:10:22PM *  1 point [-]

I like seeing the negative number on my posts. But I have also noticed a voting trend that seems to be much more forgiving than the posts of old.

The first wave of readers seem to vote up; the second wave votes down; over time it stabilizes somewhere near where the first wave peaked. This doesn't seem to happen on posts that are really superb.

I think showing the full number of up and down votes would be helpful to authors and also let people know why a post is at the number it is. Seeing +5 -7 is different than seeing -2.

That being said, karma inflation seems to be hitting. I am rarely getting downvoted on comments anymore. I don't think I have improved that much as a commentator. I am not convinced that the effects you are seeing are only happening to Posts.

Is this a behavior we want? If not, what can we do about it?

I think a great way to handle the Post karma is to hide the actual number for a week. Let it show + or - for positive or negative but no numbers. By the time one week has passed most people will have moved on.

Another solution may be to keep actual voting history available and let people see votes by people who said their history is public. As far as I can tell, that preference doesn't do anything yet.

ETA: Another solution would be to set karma rewards to only happen after a certain threshold. Between 0 and +5 you don't get any karma. After that, you get 10 karma per point. Everything under 0 still penalizes you 10 karma per point.

Or the above but only getting rewards after a certain percentage votes up. +5 -1 nets 40 karma, +20 -16 nets nothing, but each has a score of +4.
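A sketch of the two variants above. Both functions and their parameters are my guesses at what MrHen intends: 'per point' is read as points above the threshold (the original is ambiguous on this), and the 75% cutoff in the second variant is chosen only because it fits the +5/-1 and +20/-16 examples:

```python
def post_karma(score, threshold=5, per_point=10):
    """First variant: no reward between 0 and the threshold;
    negative scores always penalize at the full rate."""
    if score < 0:
        return per_point * score      # e.g. -2 costs 20 karma
    if score <= threshold:
        return 0                      # dead zone between 0 and +5
    return per_point * (score - threshold)

def post_karma_pct(ups, downs, min_fraction=0.75, per_point=10):
    """Second variant: reward the net score only when a minimum
    fraction of the votes were upvotes."""
    total = ups + downs
    if total == 0 or ups / total < min_fraction:
        return 0
    return per_point * (ups - downs)
```

With these parameters, `post_karma_pct(5, 1)` yields 40 while `post_karma_pct(20, 16)` yields 0, matching the examples in the comment.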

Comment author: Alicorn 08 February 2010 10:18:52PM *  2 points [-]

Voting history publication does do something - click on a user's name, and then click "liked" or "disliked", and you can see what top-level posts they have voted up or down. It just doesn't work backwards, and doesn't work for comments.

Comment author: CarlShulman 05 February 2010 06:42:52PM 2 points [-]

There is a limited downvote budget for each voter (in some ratio to the voter's karma). Downvoting a post now uses 10 points from that budget rather than 1, so perhaps low-karma downvoters (or downvoters who have exhausted their downvote budgets) are now having less of an impact.

Comment author: kim0 02 April 2010 06:59:34AM *  1 point [-]

Many-Worlds explained, with pretty pictures.

http://kim.oyhus.no/QM_explaining_many-worlds.html

The story about how I deduced the Many-Worlds interpretation, with pictures instead of formulas.

Enjoy!

Comment author: Hook 16 February 2010 02:24:53AM *  1 point [-]

I've been reading Probability Theory by E.T. Jaynes and I find myself somewhat stuck on exercise 3.2. I've found ways to approach the problem that seem computationally intractable (at least by hand). It seems like there should be a better solution. Does anyone have a good solution to this exercise, or even better, know of collection of solutions to the exercises in the book?

At this point, if you have a complete solution, I'd certainly settle for vague hints and outlines if you didn't want to type the whole thing. Thanks.

Comment author: Morendil 16 February 2010 08:21:33AM 2 points [-]

Hint: you need to use the sum rule.

The computation is quite manageable for the case of k=5. For the general case, I too was left feeling dissatisfied with the expression I found, but on reflection I'm somewhat confident it is the correct answer.

The case k=4, Ni=13, m=5 is solved numerically on a Web site which discusses probability for Poker players, that was helpful in checking my results; the answer to 3.2 is a generalization of the results given there.

There does not appear to be a complete collection of solutions. This site comes closest. If I were you I would avoid looking at their solution for exercise 4.1 (I'm trying to forget what little I've seen of it as I'd like to solve 4.1 under my own power), but I would also not feel bad about giving up on 4.1 if you find it difficult.

I'd be happy to discuss Jaynes further over DMs or email - though I may respond at a slow pace, as I'm working through the book as my other activities allow. I'm on chapter 6 now.

Comment author: whpearson 14 February 2010 07:11:14PM *  4 points [-]

We are status-oriented creatures, especially with regard to social activities. Science is one of those social activities, so it is to be expected that science is infected with status seeking. However, it is also one of the more efficient ways we have of getting at truths, so it must be doing some things correctly. I think it may have some ideas surrounding it that reduce the problems of its being a social enterprise.

One of the problems is the social stigma of being wrong, which most people on the edge of knowledge probably are. Being wrong does not signal your attractive qualities; people don't like other people who tell them lies or give them false information. I suspect that falsifiability is popular among scientists because it allows them to pre-commit to changing their minds, without taking too high a status hit. This is a bit stronger than leaving a line of retreat: it says when you'll retreat, not merely that you may, and it is a public admission. They can say that they currently believe idea X but if experiment Y shows Z they will abandon X. That statement is also useful for other people, as it allows them to see the boundaries of the idea.

This can also be seen as working to oppose confirmation bias. If you think you are right, there is no reason to look for data that tests your assumptions. If you want to pre-commit to changing your mind, you need to think about how your idea might be wrong, and you are allowed to look for data.

I would like to see this community adopt this approach.

In the spirit of this: I would cease advocating this approach if it were shown that people who pre-committed to changing their minds suffered as large a status hit as those who didn't, when they were shown to be wrong.

Comment author: CronoDAS 13 February 2010 06:21:49PM 6 points [-]
Comment author: byrnema 13 February 2010 05:15:13PM *  4 points [-]

I seem to be entering a new stage in my 'study of Less Wrong beliefs' where I feel like I've identified and assimilated a large fraction of them, but am beginning to notice a collusion of contradictions. This isn't so surprising, since Less Wrong is the grouped beliefs of many different people, and it's each person's job to find their own self-consistent ribbon.

But just to check one of these -- Omega's accurate prediction of your choice in the Newcomb problem, which assumes determinism, is actually impossible, right?

You can get around the universe being non-deterministic due to quantum mechanical considerations by using the many-worlds hypothesis: all symmetric possible 'quark' choices are made, and the universe evolves all of these as branching realities. If your choice to one-box or two-box depends on some random factors, then Omega can't predict what will happen, because when he makes the prediction, he is up-branch of you. He doesn't know which branch you'll be in. Or, more accurately, he won't be able to make a prediction that is true for all the branches.

Comment author: orthonormal 13 February 2010 07:58:04PM 4 points [-]

I think Omega's capabilities serve a LCPW function in thought experiments; it makes the possibilities simpler to consider than a more physically plausible setup might.

Also, I'd say that our wetware brains probably aren't close to deterministic in how we decide (though it would take knowledge far beyond what we currently have to be sure of this), but e.g. an uploaded brain running on a classical computer would be perfectly (in principle) predictable.

Comment author: Eliezer_Yudkowsky 13 February 2010 06:55:29PM 5 points [-]

So long as you make your Newcomb's choice for what seem like good reasons rather than by flipping a quantum coin, it is likely that very many of you will pick the same good reasons, and that Omega can easily achieve 99% or higher accuracy. I would expect almost no Eliezer Yudkowskys to two-box - if Robin Hanson is right about mangled worlds and there's a cutoff for worlds of very small amplitude, possibly none of me. Remember, quantum branching does not correspond to high-level decisionmaking.

Comment author: byrnema 13 February 2010 07:03:47PM *  4 points [-]

Yes, most Eliezer Yudkowskys will 1-box. And most byrnemas too. But the new twist (new for me, anyway) is that the Eliezers who two-box are the ones that really win, as rare as they are.

Comment author: Eliezer_Yudkowsky 13 February 2010 07:22:04PM *  2 points [-]

The one who wins or loses is the one who makes the decision. You might as well say that if someone buys a quantum lottery ticket, the one who really wins is the future self who wins the lottery a few days later; but actually, the one who buys the lottery ticket loses.

Comment author: gregconen 13 February 2010 07:14:31PM 1 point [-]

The slight quantum chance that EY will 2-box causes the sum of EYs to lose, relative to a perfect 1-boxer, assuming Omega correctly predicts that chance and randomly fills boxes accordingly. The precise Everett branches where EY 2-boxes and where EY loses are generally different, but the higher the probability that he 1-boxes, the higher his expected value is.

Comment author: byrnema 13 February 2010 07:46:15PM *  2 points [-]

And, also, we define winning as winning on average. A person can get lucky and win the lottery -- doesn't mean that person was rational to play the lottery.

Comment author: gregconen 13 February 2010 07:02:31PM 2 points [-]

Interestingly, I worked through the math once to see if you could improve on committed 1-boxing by using a strategy of quantum randomness. Assuming Omega fills the boxes such that P(box A has $)=P(1-box), P(1-box)=1 is the optimal solution.
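For the curious, the expected value here can be checked numerically. This is only a sketch: the $1,000,000 and $1,000 amounts are the conventional Newcomb payoffs (the comment doesn't state them), and Omega's fill decision is assumed independent of the actual choice, as described:

```python
def expected_value(p_one_box, big=1_000_000, small=1_000):
    """Expected winnings when Omega fills the opaque box with
    probability equal to your probability of one-boxing, and the
    fill decision is independent of your actual choice."""
    p_fill = p_one_box
    ev_if_one_box = p_fill * big            # take only the opaque box
    ev_if_two_box = p_fill * big + small    # take both boxes
    return p_one_box * ev_if_one_box + (1 - p_one_box) * ev_if_two_box
```

The algebra reduces to EV(p) = 1,000,000·p + 1,000·(1 - p), which is strictly increasing in p, so P(1-box) = 1 is indeed optimal, matching the comment's conclusion.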

Comment author: byrnema 13 February 2010 08:32:04PM *  1 point [-]

Interesting. I was idly wondering about that. Along somewhat different lines:

I've decided that I am a one-boxer, and I will one-box. With the following caveat: at the moment of decision, I will look for an anomaly with virtually zero probability. A star streaks across the sky and fuses with another one. Someone spills a glass of milk and halfway towards the ground, the milk rises up and fills itself back into the glass. If this happens, I will 2-box.

Winning the extra amount in this way in a handful of worlds won't do anything to my average winnings-- it won't even increase it by epsilon. However, it could make a difference if something really important is at stake, where I would want to secure the chance that it happens one time in the whole universe.

Comment author: Nick_Tarleton 13 February 2010 11:16:31PM 2 points [-]

Let p be the probability that you 2-box, and suppose (as Greg said) that Omega lets P(box A empty) = p with its decision being independent of yours. It sounds like you're saying you only care about the frequency with which you get the maximal reward. This is P(you 2-box)*P(box A full) = p(1-p) which is maximized by p=0.5, not by p infinitesimally small.
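As a quick sanity check of that arithmetic (a sketch only, using Nick's definitions: p is the probability you 2-box, and box A is full with independent probability 1 - p):

```python
def p_max_reward(p):
    """Probability of the jackpot outcome: you 2-box (prob p) AND
    box A is full (prob 1 - p, independent)."""
    return p * (1 - p)

# Scan p in steps of 0.01 for the value that maximizes the jackpot chance.
best = max(range(0, 101), key=lambda k: p_max_reward(k / 100)) / 100
```

The scan lands on best = 0.5, confirming that the chance of the maximal reward peaks at p = 0.5 rather than at p infinitesimally small.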

Comment author: byrnema 13 February 2010 09:11:37PM *  2 points [-]

Why is this comment being down-voted? I thought it was rather clever to use Omega's one weak spot -- quantum uncertainty -- to optimize your winnings even over a set with measure zero.

Comment author: MrHen 15 February 2010 03:56:06PM 1 point [-]

Because Omega is going to know what triggers you would use for anomalies. A star streaking across the sky is easy to see coming if you know the current state of the universe. As such, Omega would know you are about to two-box even though you are currently planning to one-box.

When the star streaks across the sky, you think, "Ohmigosh! It happened! I'm about to get rich!" Then you open the boxes and get $1000.

Essentially, it boils down to this: if you can predict a scenario where you will two-box instead of one-box, then Omega can as well.

The idea of flipping quantum coins is more foolproof. The idea of stars streaking or milk unspilling is only hard for us to see coming. Not to mention it will probably trigger all sorts of biases when you start looking for ways to cheat the system.

Note: I am not up to speed on quantum mechanics. I could be off on a few things here.

Comment author: byrnema 15 February 2010 04:17:20PM *  2 points [-]

OK, right: looking for a merging of stars would be a terrible anomaly to use because that's probably classical mechanics and Omega-predictable. The milk unspilling would still be a good example, because Omega can't see it coming either. (He can accurately predict that I will two-box in this case, but he can't predict that the milk will unspill.)

I would have to be very careful that the anomaly I use is really not predictable. For example, I screwed up with the streaking star. I was already reluctant to trust flipping quantum coins, whatever those are. They would need to be flipped or simulated by some mechanical device and may have all kinds of systematic biases and impracticalities if you are actually trying to flip 10^23^23 coins.

Without having plenty of time to think about it, and say, some physicists advising me, it would probably be wise for me to just one-box.

Comment author: Jack 13 February 2010 09:31:24PM *  -1 points [-]

I didn't down vote but I confess I don't really know what you're talking about in that comment. Why would you two box in that case? What really important thing is at stake? I don't get it.

Comment author: byrnema 13 February 2010 09:52:17PM *  1 point [-]

OK. The way I've understood the problem with Omega is that Omega is a perfect predictor so you have 2 options and 2 outcomes:

you two box --> you get $2,000 ($1000 in each box)

you one box --> you get 1M ($1M in one box, $1000 in the second box)

If Omega is not a perfect predictor, it's possible that you two box and you get $1,001,000. (Omega incorrectly predicted you'd one box.)

However, if you are likely to 2box using this reasoning, Omega will adjust his prediction accordingly (and will even reduce your winnings when you do 1box -- so that you can't beat him).

My solution was to 1box almost always -- so that Omega predicts you will one box, but then 'cheat' and 2-box almost never (but sometimes). According to Greg, your 'sometimes' has to be over a set of measure 0, any larger than that and you'll be penalized due to Omega's arithmetic.

What really important thing is at stake?

Nothing -- if only an extra thousand is at stake, I probably wouldn't even bother with my quantum caveat. One million dollars would be great anyway. But I can imagine an unfriendly Omega giving me choices where I would really want to have both boxes maximally filled ... and then I'll have to realize (rationally) that I must almost always 1 box, but I can get away with 2-boxing a handful of times. The problem with a handful, is that how does a subjective observer choose something so rarely? They must identify an appropriately rare quantum event.

Comment author: Jack 13 February 2010 10:13:40PM 1 point [-]

So this job could even be accomplished by flipping a quantum coin 10000 times and only two-boxing when they come up tails each time. You're just looking for a decision mechanism that only applies in a handful of branches.

Comment author: byrnema 13 February 2010 10:30:57PM 1 point [-]

Yes, exactly.

Comment author: gregconen 14 February 2010 12:01:05AM *  2 points [-]

The math is actually quite straight-forward, if anyone cares to see it. Consider a generalized Newcomb's problem. Box A either contains $A or nothing, while box B contains $B (obviously A>B, or there is no actual problem). Let Pb be the probability that you 1-box. Let Po be the probability that Omega fills box A (note that only quantum randomness counts here. If you decide by a "random" but deterministic process, Omega knows how it turns out, even if you don't, so Pb=0 or 1). Let F be your expected return.

Regardless of what Omega does, you collect the contents of box A, and have a (1-Pb) probability of collecting the contents of box B. F(Po=1)= A + (1-Pb)B

F(Po=0)=(1-Pb)B

For the non-degenerate cases, these add together as expected. F(Po, Pb) = Po(A + (1-Pb)B) + (1-Po)[(1-Pb)B]

Suppose Po = Pb := P

F(P) = P(A + (1-P)B) + [(1-P)^2] B

=P(A + B - PB) + (1-2P+P^2) B

=PA + PB - (P^2)B + B - 2PB + (P^2)B

=PA + PB + B - 2PB

=B + P(A-B)

If A > B, F(P) is monotonically increasing, so P = 1 gives the maximum return. If A<B, P=0 is the maximum (I hope it's obvious to everyone that if box B has MORE money than a full box A, 2-boxing is ideal).
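The algebra above can be sanity-checked numerically. A minimal sketch, assuming the standard payoffs A = $1,000,000 and B = $1,000 (the function names are mine):

```python
# Expected return in the generalized Newcomb's problem above, where P is both
# the probability Omega fills box A and the probability you 1-box.

def expected_return(P, A=1_000_000, B=1_000):
    """F(P) = P*(A + (1-P)*B) + ((1-P)**2)*B, term by term as derived above."""
    return P * (A + (1 - P) * B) + ((1 - P) ** 2) * B

def closed_form(P, A=1_000_000, B=1_000):
    """The simplified result: F(P) = B + P*(A - B)."""
    return B + P * (A - B)

# The two agree for every P, and F is increasing in P whenever A > B,
# so P = 1 (always one-boxing) maximizes expected return.
for P in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(expected_return(P) - closed_form(P)) < 1e-6
```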

Comment author: byrnema 13 February 2010 05:56:01PM 1 point [-]

Thanks to everyone who replied. So I see that we don't really believe that the universe is deterministic in the way implied by the problem. OK, that's consistent then.

Comment author: jimrandomh 13 February 2010 05:34:47PM *  1 point [-]

If your choice to one-box or two-box is dependent upon some random factors, then Omega can't predict what will happen because when he makes the prediction, he is up-branch of you. He doesn't know which branch you'll be in.

What Omega can do instead is simulate every branch and count the number of branches in which you two-box, to get a probability, and treat you as a two-boxer if this probability is greater than some threshold. This covers both the cases where you roll a die, and the cases where your decision depends on events in your brain that don't always go the same way. In fact, Omega doesn't even need to simulate every branch; a moderate sized sample would be good enough for the rules of Newcomb's problem to work as they're supposed to.

But the real reason for treating Omega as a perfect predictor is that one of the more natural ways of modeling an imperfect predictor is to decompose it into some probability of being a perfect predictor and some probability of its prediction being completely independent of your choice, the probabilities depending on how good a predictor you think it really is. In that context, denying the possibility that a perfect predictor could exist is decidedly unhelpful.
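The mixture model in that last paragraph can be made concrete. A sketch under assumed parameters (q is the probability the predictor is perfect, r is the base rate at which an "independent" predictor fills box A; these names are mine, not jimrandomh's):

```python
# Decomposing an imperfect predictor: with probability q it is a perfect
# predictor, and with probability 1-q its prediction is independent of your
# choice (box A gets filled at some base rate r). A = $1M, B = $1k.

def ev_one_box(q, r, A=1_000_000, B=1_000):
    # Perfect branch: box A is full. Independent branch: full with prob r.
    return q * A + (1 - q) * r * A

def ev_two_box(q, r, A=1_000_000, B=1_000):
    # Perfect branch: box A is empty, you keep B. Independent branch: B plus r*A.
    return q * B + (1 - q) * (r * A + B)

# One-boxing wins exactly when q*A > B, i.e. when the predictor is perfect
# with probability greater than B/A = 0.1% -- a very weak requirement.
```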

Comment author: Alicorn 13 February 2010 05:29:42PM 1 point [-]

I'm sufficiently uninformed on how quantum mechanics would interact with determinism that so far I've been operating under the assumption that it doesn't. Maybe someone here can enlighten me? Does the behavior of things-that-behave-quantumly typically affect macro-level events, or is this restricted to when you look at them and record experimental data as a direct causal result of the behavior? Is there some way to prove that quantum events are random, as opposed to caused deterministically by something we just haven't found? (I'm not sure even in principle how you could prove that something is random. It'd be proving the negative on the existence of causation for a possibly-hidden cause.)

Comment author: Jack 13 February 2010 09:28:53PM 0 points [-]

There is no special line where events become macro-level events. It's not like you get to 10 atoms or a mole and suddenly everything is deterministic again. Your position right now is subject to indeterminacy. It just happens that you're big enough that the chance every particle of your body moves together in the same, noticeable direction is very, very small (and by very small I mean that I can confidently predict it will never happen).

In principle, our best physics tells us that determinism is just false as a metaphysics. Other people have answered the question you meant to ask, which is whether the extreme indeterminacies of very small particles can affect the actions of much larger collections of particles.

Comment author: orthonormal 13 February 2010 09:38:51PM *  8 points [-]

IAWYC except, of course, for this:

In principle our best physics tells us that determinism is just false as a metaphysics.

As said above and elsewhere, MWI is perfectly deterministic. It's just that there is no single fact of the matter as to which outcome you will observe from within it, because there's not just one time-descendant of you.

Comment author: Jack 13 February 2010 10:07:20PM -1 points [-]

That's a fair point, but I don't think it is quite that easy. On one formulation, a deterministic system is a system whose end conditions are set by the rules of the system and the starting conditions. Under this definition, MWI is deterministic. But often what we mean by determinism is that it is not the case that the world could have been otherwise. For one extension of 'world' that is true. But for another extension, the world not only could have been otherwise; it is otherwise. There are also a lot of confusions about our use of indexicals here: what we're referring to with "I", "You", "This", "That", "My", etc. Determinism usually implies that every true statement (including true statements with indexicals) is necessarily true. But it isn't obvious to me that many worlds gives us that. Also, a common thought experiment to glean people's intuitions about determinism is basically to say that we live in a universe where a super computer that can exactly predict the future is possible. MWI doesn't allow for that.

Perhaps we shouldn't try to fit our square-pegged physics into the round holes of traditional philosophical concepts. But I take your point.

Comment author: pengvado 14 February 2010 02:13:11AM *  1 point [-]

Why would determinism have anything to say about indexicals? There aren't any Turing-complete models that forbid indexical uncertainty; you can always copy a program and put the copies in different environments. So I don't see what use such a concept of "determinism" would have.

Comment author: Jack 14 February 2010 03:55:26AM -1 points [-]

Thinking about this, it isn't a concern about indexicals but a concern about reference in general. When we refer to an object, we're not referring to its extension throughout all Everett branches, but we are referring to an object extended in time. So take a sentence like "The table moved from the center of the room to the corner." If determinism is true, we usually think that all sentences like this are necessary truths and sentences like "The table could have stayed in the center" are false. But I'm not sure what the right way to evaluate these sentences is given MWI.

Comment author: orthonormal 13 February 2010 08:02:59PM *  2 points [-]

Does the behavior of things-that-behave-quantumly typically affect macro-level events, or is this restricted to when you look at them and record experimental data as a direct causal result of the behavior?

Yes; since many important macroscopic events (e.g. weather, we're quite sure) are extremely sensitive to initial conditions, two Everett branches that differ only by a single small quantum event can quickly diverge in macroscopic behavior.

Comment author: tut 13 February 2010 05:37:07PM *  1 point [-]

Does the behavior of things-that-behave-quantumly typically affect macro-level events...?

Yes. They only appear weird if you look at small enough scales, but classical electrons would not have stable orbits, so without quantum effects there'd be no stable atoms.

Is there some way to prove that quantum events are random, as opposed to caused deterministically by something we just haven't found?

No, but there is evidence. There is a proof that if they were caused by something unknown but deterministic (or if there even was a classical probability function for certain events) then they would follow Bell's inequalities. But that appears not to be the case.

Comment author: byrnema 13 February 2010 05:43:49PM *  2 points [-]

But this is where things get really shaky for materialism. If something cannot be explained in X, this means there is something outside X that determines it.

Materialists must hope that in spite of Bell's inequalities, there is some kind of non-random mechanism that would explain quantum events, regardless of whether it is possible for us to deduce it.

Alicorn asked above:

I'm not sure even in principle how you could prove that something is random.

In principle, you can't. And one of the foundational (but non-obvious) assumptions of materialism is that nothing is truly random. The non-refutability of materialism depends upon never being able to demonstrate that something is actually random.

Later edit: I realize that this comment is somewhat of a non-sequitur in the context of this thread. (oops) I'll explain that these kinds of questions have been my motivation for thinking about Newcomb in the first place. Sometimes I'm worried about whether materialism is self-consistent, sometimes I'm worried about whether dualism is a coherent idea within the context of materialism, and these questions are often conflated in my mind as a single project.

Comment author: tut 13 February 2010 07:24:22PM 1 point [-]

And one of the foundational (but non-obvious) assumptions of materialism is that nothing is truly random.

In that case I am not a materialist. I don't believe in any entities that materialists don't believe in, but I do believe that you have to resort to Many Worlds in order to be right and believe in determinism. Questions that amount to asking "which Everett branch are we in" can have nondeterministic answers.

Comment author: CarlShulman 13 February 2010 07:58:06PM 2 points [-]

Those sorts of question can arise in non-QM contexts too.

Comment author: byrnema 13 February 2010 07:53:44PM *  4 points [-]

No worries -- you can still be a materialist. Many worlds is the materialist solution to the problem of random collapse. (But I think that's what you just wrote -- sorry if I misunderstood something.)

Suppose that a particle has a perfectly undetermined choice to go left or go right. If the particle goes left, a materialist must hold in principle that there is a mechanism that determined the direction, but then they can't say the direction was undetermined.

Many worlds says that both directions were chosen, and you happen to find yourself in the one where the particle went left. So there is no problem with something outside the system swooping down and making an arbitrary decision.

Comment author: wnoise 13 February 2010 07:13:25PM 1 point [-]

Or, of course, the causes could be non-local.

Comment author: Alicorn 13 February 2010 05:42:42PM 1 point [-]

What are Bell's inequalities, and why do quantumly-behaving things with deterministic causes have to follow them?

Comment author: Eliezer_Yudkowsky 15 February 2010 09:06:04AM 3 points [-]

Um... am I missing something or did no one link to, ahem:

http://lesswrong.com/lw/q1/bells_theorem_no_epr_reality/

Comment author: MBlume 15 February 2010 08:54:49AM 6 points [-]

Alicorn, if you're free after dinner tomorrow, I can probably explain this one.

Comment author: tut 13 February 2010 06:26:55PM *  1 point [-]

Well, actually everything has to follow them because of Bell's Theorem.

Edit: The second link should be to this explanation, which is somewhat less funny, but actually explains the experiments that violate the inequalities. Sorry that I took so long, but it appeared that the server was down when I first tried to fix it, so I went and did other things for half an hour.

Comment author: byrnema 13 February 2010 06:21:35PM *  3 points [-]

The EPR paradox (Einstein-Podolsky-Rosen paradox) is a set of experiments that suggest 'spooky action at a distance' because particles appear to share information instantaneously, at a distance, long after an interaction between them.

People applying "common sense" would like to argue that there is some way that the information is being shared -- some hidden variable that collects and shares the information between them.

Bell's Inequality only assumes that there is some such hidden variable operating locally* -- with no specifications of any kind on how it works -- and deduces correlations between particles sharing information that are in contradiction with experiments.

* that is, mechanically rather than 'magically' at a distance
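For the curious, the flavor of Bell's result fits in a few lines: any local hidden-variable account obeys the CHSH bound |S| <= 2, while the quantum singlet correlation E(a,b) = -cos(a-b) exceeds it at well-chosen angles. A sketch (the angle choices are the textbook-standard ones, not anything from this thread):

```python
import math

# Singlet-state correlation between spin measurements at angles a and b.
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices (in radians).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH quantity: any local hidden-variable theory satisfies |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # ≈ 2.828, i.e. 2*sqrt(2): the quantum value exceeds the bound
```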

Comment author: CronoDAS 13 February 2010 06:14:48PM -1 points [-]

There's no good explanation anywhere. :(

Comment author: ciphergoth 13 February 2010 05:27:25PM 1 point [-]

Perfection is impossible, but a very, very accurate prediction might be possible.

Comment author: tut 13 February 2010 05:26:14PM -1 points [-]

Omega's accurate prediction of your choice in the Newcomb problem, which assumes determinism, is actually impossible, right?

Yes.

Comment author: Eliezer_Yudkowsky 13 February 2010 06:08:47AM 5 points [-]
Comment author: Douglas_Knight 11 February 2010 07:49:52PM 1 point [-]

Consider how absurd it would be for a professor of physics to admit that his opinion regarding a problem in physics would be different if he had attended a different graduate school.

I wonder if physicists would admit the effect of genealogy on their interpretation of QM?

People who ask physicists their interpretation of QM: next time, if the physicist admits controversy, ask about genealogy and other forms of epistemic luck.

Comment author: wnoise 12 February 2010 05:37:13AM 1 point [-]

I'm a grad student of quantum information. My advisor doesn't really talk much about interpretations, going only so far as to point out how silly the Bohmians are. That's largely true of most in this group, though one is an avowed "quantum Bayesian": probability as conceptualized by humans is simply the specialization to commuting variables, but we need non-commuting variables to deal with the world. The laws of quantum mechanics tell you how to update your information under time evolution.

My interpretation of QM was formed as an undergrad, with no direct professorial contact. It was based mostly on how arbitrary the placement of the classical-quantum divide is in standard treatments, so long as you place it such that enough stuff is quantum. I took that seriously, bit the bullet, and so am an Everettian.

Comment author: MrHen 10 February 2010 05:37:32PM 1 point [-]

While reading old posts and looking for links to topics in upcoming drafts I have noticed that the Tags are severely underutilized. Is there a way to request a tag for a particular post?

Example: Counterfactual has one post and it isn't one of the heavy hitters on the subject.

Comment author: MrHen 09 February 2010 07:49:24PM 2 points [-]

What is the correct term for the following distinction:

Scenario A: The fair coin has a 50% chance of landing heads.
Scenario B: The unfair coin has an unknown chance of landing heads, so I assign it a 50% chance of heads until I get more information.

If A flips up heads it won't change the 50%. If B flips up heads it will change the 50%. This makes Scenario A more [something] than Scenario B, but I don't know the right term.
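One way to make the distinction precise is Bayesian: the fair coin's 50% is a known parameter, while the unfair coin's 50% is merely the mean of a spread-out prior. A sketch assuming a uniform Beta(1,1) prior over the unknown coin's heads-probability (the prior choice is mine, for illustration):

```python
# Posterior mean of a Beta(alpha + heads, beta + tails) distribution.
def posterior_mean_heads(heads, tails, alpha=1, beta=1):
    return (alpha + heads) / (alpha + beta + heads + tails)

fair_estimate = 0.5                            # stays 0.5 whatever we observe
unknown_before = posterior_mean_heads(0, 0)    # 0.5, the mean of the prior
unknown_after = posterior_mean_heads(1, 0)     # 2/3 after a single heads
```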

Comment author: Rain 11 February 2010 07:01:26PM *  1 point [-]

Static? Unchanging? Complete (as far as definitions of the situation go)? Simple (as far as equations go - it lacks the dynamic variable representing the need to update)?

Comment author: MrHen 11 February 2010 07:27:51PM 1 point [-]

Thank you for responding! I was wondering if anyone ever would.

The best I could come up with was "Fixed" or "Confident." Your choices seem on par with those. Perhaps there is no technical term for this? I find that hard to believe.


Changing the original question slightly seems to be looking for a different but similar term:

Unfair coin A has been flipped 10^6 times and appears to be converging on 60% in favor of HEADS
Unfair coin B has been flipped 10^1 times and appears to be converging on 60% in favor of HEADS

If I flip coin A and it results in HEADS the estimation of 60% will move less than it would if I was flipping coin B. This makes coin A more [something] than coin B, but I don't know the right term.
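The same Bayesian picture captures this, too: with a Beta(1,1) prior, the posterior mean after h heads in n flips is (h+1)/(n+2), and one extra heads barely moves it when n is large. A sketch with the numbers above (the prior is my assumption):

```python
# Posterior mean of heads-probability under a Beta(1,1) prior.
def post_mean(h, n):
    return (h + 1) / (n + 2)

# How much does one additional heads move each estimate?
# Coin A: 600,000 heads in 10^6 flips. Coin B: 6 heads in 10 flips.
shift_A = post_mean(600_001, 1_000_001) - post_mean(600_000, 1_000_000)
shift_B = post_mean(7, 11) - post_mean(6, 10)

# shift_B is about 0.03; shift_A is on the order of 4e-7.
```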

Comment author: Rain 11 February 2010 07:38:10PM *  1 point [-]

More defined. You've reduced your uncertainty about its properties (unfairness) using more evidence.

I'm sorry, I avoid technical terms when thinking about such things.

Comment author: thomblake 11 February 2010 07:36:10PM 1 point [-]

This makes coin A more [something] than coin B

I'm pretty sure it makes your beliefs about coin A more [something] than coin B.

Comment author: underling 09 February 2010 01:08:37PM 1 point [-]

Hi LessWrongers,

I'm aware that Newcomb's problem has been discussed a lot around here. Nonetheless, I'm still surprised that 1-boxing seems to be the consensus view here, contrary to the consensus view elsewhere. Can someone point to the relevant knockdown argument? (I found Newcomb's Problem and Regret of Rationality, but the only argument therein seems to be that 1-boxers get what they want, and that's what makes 1-boxing rational. Now, getting what one wants seems to be neither necessary nor sufficient, because you should get it because of your rational choice, not because the predictor rigged the situation?!)

Many thanks for any links, corrections and help!

Comment author: wedrifid 11 February 2010 04:58:43AM *  2 points [-]

I'm still surprised that 1-boxing seems to be the consensus view here, contrary to the consensus view elsewhere. Can someone point to the relevant knockdown argument?

  • If you One Box you get $1,000,000
  • If you Two Box you get $10,000

Therefore, One Box.

The rest is just details. If it so happens that those 'details' tell you to only get the $10,000 then you have the details wrong.

Comment author: byrnema 09 February 2010 01:21:40PM *  2 points [-]

I don't know what the consensus knock-down argument is, but this is how mine goes:

Usually, we optimize over our action choices to select the best outcome. (We can pick the blue box or the red box, and we pick the red box because it has the diamond.) Omega contrives a situation in which we must optimize over our decision algorithm for the best outcome. Choose over your decision algorithms (the decision algorithm to one-box, or the decision algorithm to two-box), just as you would choose among actions. You realize this is possible when you realize that choosing a decision algorithm is also an action.

(Later edit: I anticipated what might be most confusing about calling the decision algorithm an 'action' and have decided to add that the decision algorithm is an action that is not completed until you actually one box or two box. Your decision algorithm choice is 'unstable' until you have actually made your box choice. You "choose" the decision algorithm that one-boxes by one-boxing.)

Comment author: Alicorn 09 February 2010 01:10:25PM 6 points [-]

The predictor "rigged" the situation, it's true, but you have that information, and should take it into account when you decide which choice is rational.

Comment author: Furcas 10 February 2010 05:50:37PM *  1 point [-]

We also have the information that our decision won't affect what's in the boxes, and we should also take that into account.

The only thing that our decision determines is whether we'll get X or X+1000 dollars. It does not determine the value of X.

If X were determined by, say, flipping a coin, should a rational agent one-box or two-box? Two-box, obviously, because there's not a damn thing he can do to affect the value of X.

So why choose differently when X is determined by the kind of brain the agent has? When the time to make a decision comes, there still isn't a damn thing he can do to affect the value of X!

The only difference between the two scenarios above is that in the second one the thing that determines the value of X also happens to be the thing that determines the decision the agent will make. This creates the illusion that the decision determines X, but it doesn't.

Two-boxing is always the best decision. Why wouldn't it be? The agent will get a 1000 dollars more than he would have gotten otherwise. Of course, it would be even better to pre-commit to one-boxing, since this will indeed affect the kind of brain we have, which will in turn affect the value of X, but that decision is outside the scope of Newcomb's problem.

Still, if the agent had pre-committed to one-boxing, shouldn't he two-box once he's on the spot? That's a wrong question. If he really pre-committed to one-boxing, he won't be able to choose differently. No, that's not quite right. If the agent really pre-committed to one-boxing, he won't even have to make the decision to stick to his previous decision. With or without pre-commitment, there is only one decision to be made, though at different times. If you have a Newcombian decision to make, you should always two-box, but if you pre-committed you won't have a Newcombian decision to make in Newcomb's problem; actually, for that reason, it won't really be Newcomb's problem... or a problem of any kind, for that matter.

Comment author: underling 10 February 2010 02:57:40PM 1 point [-]

Right, but exactly this information seems to the 2-boxer to point to 2-boxing! If the game is rigged against you, so what? Take both boxes. You cannot lose, and there's a small chance the conman erred.

Mhm. I'm still far from convinced. Is this my fault? Am I at all right in assuming that 1-boxing is heavily favored in this community? And that this is a minority belief among experts?

Comment author: Alicorn 10 February 2010 02:59:07PM 1 point [-]

Perhaps it will make sense if you view the argument as more of a reason to be the kind of person who one-boxes, rather than an argument to one-box per se.

Comment author: byrnema 07 February 2010 04:03:53AM 2 points [-]

Daniel Varga wrote

In a universe where merging consciousnesses is just as routine as splitting them, the transhumans may have very different intuitions about what is ethical.

What I started wondering about when I began assimilating this idea of merging, copying and deleting identities, is what kind of legal/justice system could we depend upon if this was possible to enforce non-criminal behavior?

Right now we can threaten to punish people by restricting their freedom over a period of time that is significant with respect to the length of their lifetime. However, the whole equation might change if a would-be criminal thinks there's a p% chance they won't get caught, and a (1-p)% chance that one of their identities will have to go to jail...

Even a death penalty would be meaningless to someone who knows they could upload themselves to another vessel at any time. (If I had criminal intentions, I would upload myself just before the criminal act, so that the upload would be innocent.)

(I am posting this comment here because it is off-topic with respect to the thread, which was about whether we're in a simulation or not.)

Comment author: JGWeissman 07 February 2010 04:34:52AM 3 points [-]

In a world with an FAI Singleton, actions that would violate another individual's rights might be simply unavailable, making the concept of a legal/justice system obsolete.

In other scenarios, uploading/splitting would still take resources, which might be better used than in absorbing a criminal punishment. A legal/justice system could apply punishments to multiple instances of the criminal, and could be powerful enough to likely track them down.

If I had criminal intentions, I would upload myself just before the criminal act, so that the upload would be innocent

I am not convinced that the upload would be innocent. Maybe, if the upload was rolled back to before the criminal intentions. Any attempt by the upload to profit from the crime would definitely make it complicit.

Criminal punishment could also take the form of torture, effective if the would be criminal fears any of its instances being tortured, even if some are not.

Comment author: Furcas 07 February 2010 12:13:08AM *  3 points [-]

I just finished reading Jaron Lanier's One-Half of a Manifesto for the second time.

The first time I read it must have been three years ago, and although I felt there were several things wrong with it, I hadn't come to what is now an inescapable conclusion for me: Jaron Lanier is one badly, badly confused dude.

I mean, I knew people could be this confused, but those people are usually postmodernists or theologians or something, not smart computer scientists. Honestly, I find this kind of shocking, and more than a little depressing.

Comment author: Eliezer_Yudkowsky 07 February 2010 12:28:49AM 1 point [-]

The remarkable and depressing thing to me is that most people are not able to see it at a glance. To me it just seems like a string of obvious bluffs and non-sequiturs. Do you remember what was going on in your head when you didn't see it at a glance?

Comment author: Furcas 07 February 2010 12:51:43AM *  4 points [-]

It's difficult for me to remember how I used to think, even a few years ago. Hell, when there's a drastic change in the way I think about something, I have trouble remembering how I used to think mere days after the change.

Anyway, one thing I remember is that I kept giving Lanier the benefit of the doubt. I kept telling myself, "Well, maybe I don't understand what he's really trying to say." So the reason I didn't see the obvious would be... lack of self-confidence? Or maybe it's only because my own thoughts weren't all that clear back then. Or maybe because the way I used to parse stuff like Lanier's piece was a lot more, um, holistic than it is now, by which I mean that I didn't try to decompose what is written into more simple parts in order to understand it.

It's hard to tell.

Comment author: ciphergoth 06 February 2010 11:30:06AM 1 point [-]

Measure your risk intelligence, a quiz in which you answer questions on a confidence scale from 0% to 100% and your calibration is displayed on a graph.

Obviously a linear probability scale is the Wrong Thing - if we were building it, we'd use a deciban scale and logarithmic scoring - but interesting all the same.
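A minimal sketch of the two pieces mentioned here, a logarithmic score for confidence answers and a deciban odds scale (the function names are mine, not the quiz's):

```python
import math

def log_score(p, outcome):
    """Logarithmic score: log of the probability assigned to what happened."""
    return math.log(p if outcome else 1 - p)

def decibans(p):
    """Odds expressed in decibans: 10 * log10(p / (1-p))."""
    return 10 * math.log10(p / (1 - p))

# A calibrated 90% answer that comes true scores far better than an
# overconfident 99% answer that comes false.
print(log_score(0.9, True))    # ≈ -0.105
print(log_score(0.99, False))  # ≈ -4.605
print(decibans(0.9))           # ≈ 9.54 decibans
```

Under this scoring, confident wrong answers are penalized much more heavily than a linear scale would penalize them, which is the point of using it for calibration.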

Comment author: Kevin 05 February 2010 08:28:46PM *  1 point [-]
Comment author: thomblake 05 February 2010 08:40:45PM *  0 points [-]

Is this sort of thing on-topic, even for the Open Thread here?

ETA: This question is not merely rhetorical.

Comment author: mattnewport 05 February 2010 08:44:42PM 1 point [-]

To the extent that FAI will depend on the continued exponential growth of computing capacity, I'd say yes.

Comment author: Zack_M_Davis 05 February 2010 09:21:08PM *  1 point [-]

Are you sure you don't mean uFAI? Friendliness isn't a hardware problem.

Comment author: mattnewport 05 February 2010 09:28:55PM 2 points [-]

Maybe I should just have said AI, or AGI. I suspect we will need further advances in computing power to achieve greater than human intelligence, friendly or otherwise.

Comment author: thomblake 05 February 2010 08:54:10PM 2 points [-]

I've always thought FAI was only tangentially on-topic here (more of a mutual interest than anything). This community is explicitly about rationality.

Comment author: Kevin 05 February 2010 08:58:31PM *  3 points [-]

That's the umbrella topic, but I do not think that topic is in any way meant to exclude science. I mean... it's science. How many thousands of words has Eliezer written on quantum physics?

Surely there are worse things that could happen to a community of rationalists than links to scientific discoveries of strong mutual interest. It's not even a slippery slope towards bad off-topic stuff.

Edit: And I'm going to continue mostly contextless link sharing in the Open Thread until a link sharing subreddit is enabled.

Comment author: thomblake 05 February 2010 09:20:10PM 1 point [-]

It's not even a slippery slope towards bad off-topic stuff.

I rather disagree. There are plenty of places online to find links to interesting scientific discoveries. And the sense in which Eliezer wrote about quantum physics is entirely different from the sense in which these links were "about science".

That said, I didn't mean to suggest in my question that the comment was off-topic, but rather wanted to know what folks thought about it.

Comment author: [deleted] 08 February 2010 05:55:16PM 2 points [-]

I find that article title misleading. Having transistors that operate at 100 GHz does not give you a CPU with a clock rate of 100 GHz. If I remember correctly, that very article states that current transistors operate at 30 GHz.

Comment author: Vladimir_Nesov 05 February 2010 10:39:07AM *  11 points [-]

LW has become more active lately, and the experience has grown old, so it's likely I won't be skimming "recent comments" (or any comments) systematically anymore (unless I miss the fun and change my mind, which is possible). I'll reliably check only direct replies to my comments and private messages (the red envelope).

A welcome feature to alleviate this problem would be a per-thread aggregator: functionality to add posts, specific comments, and users to a subscription set. All comments on subscribed posts (or all comments within depth k of the top-level comments), plus all comments in threads under subscribed comments, would then appear together the way "recent comments" does now. Each comment in this stream should link to an option to unsubscribe from the item that caused it to appear, or to add an exclusion for a given thread within another subscribed thread. (Maybe being subscribed to everything by default, including new items, is the right mode, provided unsubscribing is easy.)

This may look like a lot, but right now there is no functionality for reducing the reading load, so as more people start actively commenting, fewer people will be able to follow along.
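The subscription model described above can be sketched in a few lines. This is a hypothetical illustration, not LW's actual data model: the `Comment`, `Subscriptions`, and `recent_comments` names, and the idea of tracking a comment's ancestor chain to support subthread subscriptions and exclusions, are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Comment:
    id: str
    author: str
    post_id: str
    ancestors: tuple  # ids of parent comments, root-most first
    timestamp: int

@dataclass
class Subscriptions:
    posts: set = field(default_factory=set)      # subscribed posts
    comments: set = field(default_factory=set)   # subscribed subthread roots
    users: set = field(default_factory=set)      # subscribed authors
    excluded_threads: set = field(default_factory=set)

    def matches(self, c: Comment) -> bool:
        """True if the comment should appear in the aggregated stream."""
        thread = set(c.ancestors) | {c.id}
        if thread & self.excluded_threads:  # exclusion overrides subscription
            return False
        return (c.post_id in self.posts
                or c.author in self.users
                or bool(thread & self.comments))

def recent_comments(comments, subs):
    """Aggregated 'recent comments' stream, newest first."""
    return sorted((c for c in comments if subs.matches(c)),
                  key=lambda c: c.timestamp, reverse=True)
```

Subscribing to a comment pulls in its whole subthread (every descendant carries it in `ancestors`), and an exclusion on any ancestor silences that branch, which is the "exclusion on the given thread within another subscribed thread" behavior requested above.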

Comment author: Vladimir_Nesov 10 February 2010 09:46:29PM 1 point [-]

Apparently even specific users have their own RSS feeds, so I've settled for a feed aggregated from the feeds of a few people. It'd be better if the "friend" functionality worked (maybe it even does, but I don't know about it!), so that the same could be done within the site, with voting and parent/context links.

Comment author: ciphergoth 05 February 2010 10:49:45AM 2 points [-]

I find myself once again missing Usenet.

Perhaps if LW had an API we could get back to writing specially-designed clients, which could do all the aggregation magic we might hope for?

Comment author: Wei_Dai 05 February 2010 04:56:33AM 12 points [-]

I thought of a voting tip that I'd like to share: when you are debating someone, and one of your opponent's comments gets downvoted, don't let it stay at -1. Either vote it up to 0, or down to -2; otherwise your opponent might infer that you are the one who downvoted it. Someone accused me of this some time ago, and I've been afraid of it happening again ever since.

It took a long time for this countermeasure to occur to me, probably because the natural reaction when someone accuses you of unfair downvoting is to refrain from downvoting, while the counterintuitive but strategically correct response is to downvote more.

Comment author: Alicorn 05 February 2010 05:02:07AM 1 point [-]

I've noticed this too. It is one of several annoying problems that would evaporate if votes weren't anonymous.

Comment author: wedrifid 05 February 2010 06:22:33AM 4 points [-]

More problems would be caused by that change than would be solved.

Comment author: Zack_M_Davis 05 February 2010 07:38:36AM *  1 point [-]

or [vote the comment] down to -2, otherwise your opponent might infer that you are the one who downvoted it. [...] [T]he counterintuitive, but strategically correct response is to downvote more.

(Downvoted. EDIT: Vote cancelled; see below.) "Opponent"? "Strategically correct response"? Are you sure we're playing the same game?

Comment author: bgrah449 05 February 2010 06:51:23PM *  5 points [-]

My karma management techniques:

1) If I'm in a thread and someone's comment is rated equally with mine, and therefore potentially displays above my comment, I downvote theirs to give my comment more exposure. I remove the downvote later, usually replacing it with an upvote (if their comment ends up voted above mine anyway, it's because it's good).

2) If I'm debating someone and I want to downvote their comment, I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me. This works best on long debate threads, because a) if my partner's comments are getting immediately upvoted, they tend to be encouraged and will continue the debate, further exposing themselves to downvotes, and b) long threads get fewer reads, so a single vote up or down makes a much bigger impression when almost all the comments in the thread are rarely voted past +/- 2.

3) Karma is really about rewarding or punishing an author for content, to encourage certain types of content. Comments that are too aggressive will not be upvoted even if people agree with the point, because they don't want to reward aggressive behavior. Likewise, comments that are not aggressive enough are given extra karma - the reader's first instinct is to help promote this message because the timid author won't promote it enough on his own. This is nonsensical in this format, but the instinct is preserved.

I've noticed that the comments that get voted up the most are those that do probability calculations, those whose authors' names pop out of the page, and those which are cynical on the surface, possibly with a wry humor, while revealing a deep earnestness. If you have something unpopular to say, or are just plain losing an argument, that's the best tone to take, because people will avoid downvoting if they disagree, but will usually upvote if they do agree.

EDIT: I agree with Alicorn that votes shouldn't be anonymous, as it would remove the dirtiest of these variably dirty techniques, but in the meantime, play to win.

Comment author: Jack 07 February 2010 10:54:04PM *  1 point [-]

What I really want to do is destroy you karma-wise. This behavior deserves to be punished severely. But I'm now worried about a chilling effect on others who have done this coming forward.

Also, everyone, see poll below.

Comment author: Jack 07 February 2010 10:56:30PM *  0 points [-]

If you have ever used one of bgrah's techniques, or some other karma manipulation technique that you believe would be widely frowned upon here, vote this comment up.

(Since apparently you people think this is a game, you can downvote the comment beneath this so I don't beat you.)

EDIT: Do I seriously have to say this? If you don't like there being a poll, vote down the above comment or the karma balancer below. Don't just screw up the poll out of spite.

Comment author: wedrifid 08 February 2010 03:59:53AM 1 point [-]

If you have ever suppressed your best judgement on something because you feared the social consequences of not supplicating to the speaker, vote this comment up.

Comment author: bgrah449 08 February 2010 04:13:04AM 1 point [-]

If it's not a game, why punish me? What's so offensive about me having high karma?

Comment author: Jack 08 February 2010 05:04:08AM 5 points [-]

There is nothing offensive about you having high karma. It is offensive that you abused a system that a lot of us rely on for evaluating content and encouraging norms that lead to the truth. Truth-seeking is a communal activity, and undermining the system a community uses to find the truth is something we should punish. It's similar to learning that you had lied in a comment.

I imagine the vast majority of your karma is not ill-gotten; I have no problem with you having it.

Anyway, I haven't voted you down, for precedent-setting reasons.

Comment author: Jack 07 February 2010 10:57:08PM 0 points [-]

Karma balancer.

Comment author: byrnema 07 February 2010 11:51:13PM 3 points [-]

If you have ever used one of bgrah's techniques, or some other karma manipulation technique that you believe would be widely frowned upon here vote this comment up.

I am considering voting up in order to tilt things in favor of making votes de-anonymized. Ironically, as soon as I do so, it's true.

Comment author: pjeby 08 February 2010 05:28:51AM 2 points [-]

What I really want to do is destroy you karma-wise. This behavior deserves to be punished severely. But I'm now worried about a chilling effect on others who do this coming forward.

I want to downvote you for this, because punishing people for telling the truth is a bad thing. On the other hand, you are also telling the truth, so... now I'm confused. ;-)

Comment author: michaelkeenan 15 February 2010 12:49:53PM 5 points [-]

I don't like that you are trying to mislead others.

"Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever." - The Black Belt Bayesian

The deception you've described is of course minor and maybe you don't lie about important things. But it seems a dangerous strategy, for your own epistemic hygiene, to be casual with the truth. Even if I didn't regard it as ethically questionable, I wouldn't be habitually dishonest for the sake of my own mind.

Comment author: ata 06 February 2010 06:40:35AM 13 points [-]

Upvoted for honesty.

Of course, I'll be back in a few days to downvote you.

Comment author: Unknowns 06 February 2010 06:32:44AM 5 points [-]

I can't believe you actually admitted to using these strategies.

Comment author: Wei_Dai 06 February 2010 07:36:02AM 3 points [-]

It does make me impressed at his cleverness.

Comment author: ciphergoth 06 February 2010 08:47:56AM 6 points [-]

Not me. At least for points 1 and 2, these strategies have occurred to me, but they're, you know, wrong.

As for point 3, I like that we so strongly discourage aggression. I think that aggression and overconfidence of tone are usually big barriers to rational discussion.

Comment author: bgrah449 06 February 2010 05:10:48PM 0 points [-]

(General "you") Only if you see the partner who is the target of aggression as your equal. If you get the impression that the target is below your status, or deserves to be, you will reward the comment's aggression with an upvote.

Comment author: Wei_Dai 06 February 2010 08:55:35AM *  1 point [-]

Not me. At least for points 1 and 2, these strategies have occurred to me

Does that mean you're not impressed at your own cleverness either? :-)

Since I decided to avoid discussing karma, I'll keep my thoughts on the rest of your comment to myself. (But you can probably guess what they are.)

Comment author: Douglas_Knight 06 February 2010 06:05:20AM 4 points [-]

I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me.

This strategy can be eliminated by showing a count of both upvotes and downvotes, a change which has been requested for a variety of other reasons. I imagine it would solve a lot of the problems of anonymity, but it makes Wei Dai's dilemma worse: it makes downvoting the -1 preferable to upvoting it.
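A minimal sketch of why split counts defeat the upvote-then-downvote trick: under a net-only score, flipping one vote from +1 to -1 moves the score by two, which looks like two later readers agreeing with you; with separate tallies, the flip visibly stays one voter. The `VoteTally` class below is a hypothetical illustration, not LW's actual voting code.

```python
class VoteTally:
    """Per-comment tally that tracks up and down counts separately
    instead of only a single net score."""

    def __init__(self):
        self.votes = {}  # user -> +1 or -1 (one vote per user)

    def vote(self, user, direction):
        assert direction in (+1, -1)
        self.votes[user] = direction  # re-voting replaces the old vote

    def retract(self, user):
        self.votes.pop(user, None)

    @property
    def ups(self):
        return sum(1 for v in self.votes.values() if v > 0)

    @property
    def downs(self):
        return sum(1 for v in self.votes.values() if v < 0)

    def display(self):
        return f"+{self.ups}/-{self.downs} (net {self.ups - self.downs})"
```

One user flipping their vote shows as "+1/-0" becoming "+0/-1": the net score swings from +1 to -1 as before, but the split display reveals only one voter was ever involved, rather than two observers arriving later.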

Comment author: loqi 05 February 2010 07:33:53PM 2 points [-]

Karma is really about rewarding or punishing an author for content, to encourage certain types of content. Comments that are too aggressive will not be upvoted even if people agree with the point, because they don't want to reward aggressive behavior [...] This is nonsensical in this format, but the instinct is preserved.

Karma can be (and by your own admission, is) about more than first-order content. Excessively aggressive comments may not themselves contain objectionable content, but they tend to have a deleterious effect on the conversation, which certainly does affect subsequent content.

Comment author: bgrah449 05 February 2010 07:39:23PM 1 point [-]

Excessively aggressive comments may not themselves contain objectionable content, but they tend to have a deleterious effect on the conversation, which certainly does affect subsequent content.

(General "you") Only if you see the partner who is the target of aggression as your equal. If you get the impression that the target is below your status, or deserves to be, you will reward the comment's aggression with an upvote.

Comment author: Zack_M_Davis 05 February 2010 07:00:36PM 4 points [-]

in the meantime, play to win

To win what? What is there to win?

Comment author: byrnema 05 February 2010 06:59:40PM *  4 points [-]

Your last paragraph was astute.

I found this shocking:

If I'm debating someone and I want to downvote their comment, I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me.

I wouldn't game the system like this, not so much because of moral qualms (playing to win seems OK to me) but because I need straightforward karma information as much as possible in order to evaluate my comments. Psychology and temporal dynamics are surely important, but without holding them constant (or at least 'natural'), the system would be way too complex for me to continue modeling and learning from.

Comment author: Kaj_Sotala 17 February 2010 09:05:51AM 3 points [-]

An automatic block against downvoting any comment that's a direct response to one of yours would be good.