Comment author: Vladimir_Nesov 08 December 2010 08:43:34PM *  0 points [-]

The question with WikiLeaks is about long-term consequences. As I understand it, the (sane) arguments in favor can be summarized as stating that expected long-term good outweighs expected short-term harm. It's difficult (for me) to estimate whether it's so.

Comment author: xamdam 09 December 2010 04:05:40PM *  0 points [-]

I suspect it's also difficult for Julian (or pretty much anybody) to estimate these things; I guess intelligent people will just have to make best guesses about this type of stuff. In this specific case a rationalist would be very cautious of "having an agenda", as there is significant opportunity to do harm either way.

Comment author: TheOtherDave 08 December 2010 07:55:59PM 4 points [-]

If you're genuinely unaware of the status-related implications of the way you phrased this comment, and/or of the fact that some people rate those kinds of implications negatively, let me know and I'll try to unpack them.

If you're simply objecting to them via rhetorical question, I've got nothing useful to add.

If it matters, I haven't downvoted anyone on this thread, though I reserve the right to do so later.

Comment author: xamdam 08 December 2010 09:58:34PM 0 points [-]

To be fair, I think the parent of the downvoted comment also has status implications:

I think you're nitpicking to dodge the question

It's a serious accusation hurled at the wrong type of guy IMO - Vladimir probably takes the objectivity award on this forum. I think his response was justified and objective, as usual.

Comment author: Vladimir_Nesov 08 December 2010 11:22:39AM *  9 points [-]

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

You are compartmentalizing. What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech. That the decision conflicts with freedom of speech doesn't necessarily mean that it's incorrect, and if the correct decision conflicts with freedom of speech, or has you kill a thousand children (estimation of its correctness must of course take this consequence into account), it's still correct and should be taken.

(There is only one proper criterion for anyone's actions, goodness of consequences, and if any normally useful heuristic stands in the way, it has to be put down, not because one is opposed to that heuristic, but because in the given situation it doesn't yield the correct decision.)

(This is a note about a problem in your argument, not an argument for correctness of EY's decision. My argument for correctness of EY's decision is here and here.)

Comment author: xamdam 08 December 2010 08:37:17PM *  2 points [-]

whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech.

Sounds like a good argument for the WikiLeaks dilemma (which is of course confounded by the possibility that the government is lying their asses off about the potential harm).

Comment author: Vladimir_Nesov 08 December 2010 07:50:19PM 1 point [-]

Why do you communicate things like this publicly? It takes up other people's attention, even if only for a bit, where there seems to be no reason whatsoever for that to happen. It's an error that costs you and others almost nothing, but an error nonetheless.

Comment author: xamdam 08 December 2010 08:04:15PM *  0 points [-]

I suspect it's for the same reason I occasionally litter by accident and don't pick it up: it's a negative externality, but the cost of constant self-monitoring is greater. I'd get worried if it went over a (small) threshold. People like the communication for non-informational reasons and occasionally speech-litter.

In response to Were atoms real?
Comment author: xamdam 08 December 2010 06:28:14PM 1 point [-]

[5] Thus, to extend this conjecturally toward our original question: when someone asks "Is the physical world 'real'?" they may, in part, be asking whether their predictive models of the physical world will give accurate predictions in a very robust manner, or whether they are merely local approximations. The latter would hold if e.g. the person: is a brain in a vat; is dreaming; or is being simulated and can potentially be affected by entities outside the simulation.

Hmm. Let's say we live in a multiverse where there are infinitely many universes with laws we cannot compute, so our laws are very much local (but not necessarily approximations). Would that make the world as we know it less real? I don't feel that it would.

On the other hand, living in a simulation would feel unreal, though that might rest on the fantasy that you can 'break out' somehow.

Another use of the term is authenticity; e.g. I'd be proud to own a book signed by Churchill, but ashamed if it were a fake. (Physical laws do not dictate either way - it could have been authentic.) This last example makes me think it's going to be hard to disentangle the term from its psychological connotations.

Comment author: xamdam 06 December 2010 01:15:18AM *  -1 points [-]

China is planning to sequence the full genome of 1000 of its brightest kids

Terence Tao, run and hide!

Comment author: xamdam 05 December 2010 01:21:22AM 0 points [-]
Comment author: Wei_Dai 14 August 2009 08:38:23AM 0 points [-]

I'm not sure why you're so bothered by that article. There's nothing wrong with my game theory, as far as I can tell, and I think historically, the phenomenon described must have played some role in the evolution of intelligence. So why should I retract it?

Comment author: xamdam 28 November 2010 06:20:49PM *  0 points [-]

I think historically, the phenomenon described must have played some role in the evolution of intelligence. So why should I retract it?

I do not think the article suggests any non-toy scenario where such situations might have reasonably arisen.

My personal favorite reason for "why are we not more intelligent species" is that the smart ones don't breed enough :)

Comment author: xamdam 28 November 2010 03:16:04PM 7 points [-]

So I actually read the book; while there is a little "dis" in there, the portrait is very partial: "Nate Caplan, my IQ is 160" of "OverpoweringFalsehood.com" is actually pictured as the rival of the "benign SuperIntelligence Project" (a stand-in for SIAI, I presume, which is dissed in its own right, of course). I think it's funny and flattering and wouldn't take it personally at all; I doubt Eliezer would in any case.

BTW the book is OK; I prefer Egan in far-future mode to near-future mode.

Comment author: JoshuaFox 25 November 2010 08:04:02AM 6 points [-]

If you want to talk about "ancestral environment," then note that infanticide is quite common in many cultures, as far as I can tell including hunter-gatherers.

Comment author: xamdam 25 November 2010 12:16:54PM 3 points [-]

Variation in SIDS across the socio-economic spectrum suggests infanticide is quite common in our culture as well.
