Comment author: Vaniver 07 June 2012 04:13:19AM 9 points [-]

One of the things I've found useful when doing that comparison is figuring out which existential risks will actually make Earth less livable than other planets. If, say, the worst-case CO2-based climate change scenario comes to pass, Earth will still be a better place to live than Mars, and easier to terraform to human-optimal than Mars will be.

Similarly, when considering something like nuclear war, I believe a post-apocalyptic Earth would still be better than an untouched Mars.

Against other existential risks, like UFAI, I don't think fleeing helps. There are probably one or two where it does, but even for things like gamma-ray bursts or asteroid strikes, the back-of-the-envelope calculations I've done in the past suggest that fleeing Earth is a bad plan.

Comment author: D2AEFEA1 07 June 2012 05:32:42AM *  7 points [-]

While Earth would be easier to terraform, since resources are available and global conditions are already closer to habitable, it would not be safer: mistakes in the terraforming process are far less catastrophic when you are terraforming a backup, uninhabited planet than when you are terraforming the planet we still live on.

Toying with complex, poorly understood processes, at a time when we wouldn't even have our current resources and manpower, on a ravaged Earth whose environment might be one wrong step from becoming much worse, could destroy most of what remains of humanity, the economy, and valuable resources, making it impossible for us to ever recover.

(I am, however, assuming we are talking about global terraforming of the whole planet, not minute changes to local spots.)

Comment author: D2AEFEA1 04 June 2012 10:56:07AM 2 points [-]

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history.

You're assuming that there exists something like our universe, with at least one full human being like you having beliefs causally entwined with Obama existing. What if there is none, and there are only Boltzmann brains or something equivalent?

In a Boltzmann brain scenario, how can you even assume that the universe in which they appear is ruled by the same laws of physics as those we seemingly observe? After all, the observations and beliefs of a Boltzmann brain aren't necessarily causally linked to the universe that generated it.

You could well be a single "brain" lost in a universe whose laws make it impossible for something like our own Hubble volume to exist, where all your beliefs about physics, including beliefs about Boltzmann brains, are just part of the unique, particular beliefs of that one brain.

Comment author: pragmatist 04 June 2012 08:28:20AM *  -6 points [-]

your Boltzmann brain copy doesn't know it can't have beliefs about Barack Obama

Sure, but I do. I have beliefs about Obama, and I know I can have such beliefs. Surely we're not radical skeptics to the point of denying that I possess this knowledge. And that's my point: I know things my Boltzmann brain copy can't, so we're evidentially distinguishable.

Comment author: D2AEFEA1 04 June 2012 10:46:03AM 0 points [-]

Wait, would an equivalent way to put it be evidential as in "as viewed by an outside observer" as opposed to "from the inside" (the perspective of a Boltzmann brain)?

Comment author: private_messaging 03 June 2012 03:24:43PM *  5 points [-]

Ohh, that's easily the one where you guys can do the most harm, by associating the safety concern with crankery, as long as you look like cranks but don't realize it.

Speaking of which, using complicated things you poorly understand is a surefire way to make it clear you don't know what you are talking about. It is great for impressing people who understand those things even more poorly, or who are very unconfident in their understanding, but on competent experts it won't work.

A simple example [of how not to promote beliefs]: the idea that Kolmogorov complexity or Solomonoff probability favours the many-worlds interpretation because it is 'more compact' [without having any 'observer']. Why it's wrong: if you are seeking the lowest-complexity description of your input, your theory also needs to somehow locate you within whatever stuff it generates (hence an appropriate complexity discount for something really huge like MWI). Why it's stupid: if you don't require that, then an iterator through all possible physical theories is the lowest-complexity 'explanation', and we're back to square one. How it affects other people's opinion of your relevance: very negatively, for me. edit: To clarify, the argument is bad, and I'm not even getting into details such as non-computability, our inability to represent theories in the most compact form (so we are likely to pick not the most probable theory but the one we can compress more easily), machine/language dependence, and so on.

edit: Another issue: there was the mistake with the phases in the interferometer. A minor mistake, maybe (or maybe the i was confused with a phase of 180 degrees, in which case it is a major misunderstanding). But it is the kind of mistake that people who refrain from talking about topics they don't understand are exceedingly unlikely to make (it's precisely the thing you double-check). Not being sloppy with MWI, Kolmogorov complexity, etc. is easy: you just need to study what others have concluded. Not being sloppy with AI is a lot harder. Being less biased won't in itself make you significantly less sloppy.

Comment author: D2AEFEA1 04 June 2012 10:28:47AM *  0 points [-]

Most of this seems unrelated to what the OP says. Are you sure you posted this in the right place?

In response to "Progress"
Comment author: Ezekiel 04 June 2012 08:50:47AM 0 points [-]

Quick poll: Who here has actually met someone who thinks democracy arises inevitably from human nature?

In response to comment by Ezekiel on "Progress"
Comment author: D2AEFEA1 04 June 2012 10:24:23AM 3 points [-]
Comment author: steven0461 31 May 2012 05:53:38AM 5 points [-]

I've been wondering how to "fix it" but I have nothing concrete.

Letting go of the assumption that every user account's votes should have the same weight would probably go a long way. I'm not saying such a measure is called for right now; I'm just bringing it up to get people used to the idea if things get worse.

Comment author: D2AEFEA1 31 May 2012 02:56:00PM 0 points [-]

I would second that. On the other hand, how would you decide what weight to give to someone's vote? Newcomers vs older members? Low vs high karma? I'm not sure a function of both these variables would be sufficient to determine meaningful voting weights (that is, I'm not sure such a simple mechanism would be able to intelligently steer more karma towards good quality posts even if they were hidden, obscure or too subtle).
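To make the question concrete, here is a minimal sketch of the kind of weighting function being discussed, combining account age and karma. The function names, the 90-day ramp, the 10000-karma normalization, and the 0.5 floor are all purely illustrative assumptions, not an actual LessWrong mechanism.

```python
import math

def vote_weight(account_age_days: float, karma: int) -> float:
    """Weight a user's vote by account age and karma (illustrative only)."""
    # New accounts start near zero weight and approach full weight
    # over roughly 90 days (an arbitrary assumed time constant).
    age_factor = 1.0 - math.exp(-account_age_days / 90.0)
    # Karma contributes logarithmically, so accumulating karma has
    # diminishing returns; negative karma is clamped to zero effect.
    karma_factor = math.log1p(max(karma, 0)) / math.log1p(10000)
    return age_factor * (0.5 + 0.5 * min(karma_factor, 1.0))

def weighted_score(votes) -> float:
    """Total score from (direction, weight) pairs, direction being +1 or -1."""
    return sum(direction * weight for direction, weight in votes)
```

Even a simple function like this illustrates the worry in the comment: it rewards seniority and accumulated karma, which says nothing about whether the voter can recognize a good but obscure post.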

Comment author: RobertLumley 31 May 2012 01:42:16PM 5 points [-]

Letting go of the assumption that karma means much above -3 would also go a long way. Karma is really just here to keep trolls away. If there are vast differences in the karma scores of posts from around the same time, then maybe that means something. I know personally that the comments and posts I am most proud of are, generally speaking, my least upvoted ones.

To consider an example, this and this were posted around the same time, both to Discussion. The former initially received vastly more karma than the latter. But the former, while amusing, has virtually no content, whereas the latter is a well-reasoned, well-supported post. Did the former's superior karma mean it was a better article? Obviously not. That's why the latter was promoted and, once it was, eventually overtook the former.

Another obvious example is the Sequences. Probably everyone here would agree that at least 75 of the best 100 posts on LW are from the Sequences. But, for the most part, they sit at around 10-20 karma. The exceptions are the extraordinarily popular ones, which are linked to a lot and sit at around 40 karma. This is not an accurate reflection of their quality relative to other articles I see at around 10-40 karma.

I really try (but don't always succeed) to vote based on "Is this comment/post at a higher or lower karma score than I think it should have?". If everyone did this, karma scores might have some meaning relative to each other. But I don't think many people use this strategy, and the result is that karma scores are skewed towards more widely read and funnier posts, which generally tend to be shorter and less substantial.

Comment author: D2AEFEA1 31 May 2012 02:46:32PM 1 point [-]

Would it be difficult (and useful) to change the voting system inherited from Reddit and implement one where casting a vote rates something on a scale from minus ten to ten, with all votes averaged together?
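The mechanics of the proposal are simple enough to sketch. Everything here (function names, the clamping behaviour, returning 0.0 for an unrated item) is a hypothetical reading of the suggestion, not an existing system:

```python
def clamp_rating(rating: int) -> int:
    """Restrict a submitted rating to the [-10, 10] scale."""
    return max(-10, min(10, rating))

def average_score(ratings) -> float:
    """Displayed score: the mean of all clamped ratings, 0.0 if none."""
    clamped = [clamp_rating(r) for r in ratings]
    return sum(clamped) / len(clamped) if clamped else 0.0
```

One caveat with a plain mean is that a handful of extreme ratings can swing the score of a lightly voted post; a median, or a mean that discounts small sample sizes, would be more robust.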

Comment author: XiXiDu 31 May 2012 09:09:50AM *  3 points [-]

...say, at the top math, computer science, and formal philosophy departments in the English-speaking world.

People at top academic departments everywhere in the world speak English... (which is probably true even for the janitor when it comes to some western countries).

Comment author: D2AEFEA1 31 May 2012 02:37:06PM 1 point [-]

How well do they, though? I've seen academics around me with enough command of English to get by, but they might still miss some of the subtler points. They just can't reason as well in English as they do in their mother tongue.

Comment author: D2AEFEA1 29 May 2012 06:39:38AM 6 points [-]

labeling a death as "heroic" can be a similar sort of rationalization.

Homer, about 2800 years ago:

It is entirely seemly for a young man killed in battle to lie mangled by the bronze spear. In his death all things appear fair.

Comment author: D2AEFEA1 20 May 2012 12:10:41PM *  2 points [-]

Strategies would be different for an individual as opposed to societies. Both would as a first approximation only be as cautious as they need to be in order to preserve themselves. That's where the difference between local and global disasters comes into play.

A disaster that can kill an individual won't usually kill a society. The road to progress for society has been paved by countless individual failures, some of which took a heavy toll, but in the end they never destroyed everything. It may be a gamble for an individual to take a risk that could destroy them, and risk-averse people will avoid it. But for society as a whole, non-risk-averse individuals will sometimes strike the motherlode, especially since the cost to society (the loss of one or a few individuals out of the adventurous group at a time) is small enough to be negligible. Such individuals could therefore be an asset: they'd explore avenues past certain local optima, for instance. This would also benefit those few individuals who'd be incredibly successful from time to time, even if most people like them are destined to remain in the shadows.

Of course, nowadays even one person could fail hard enough to take everything down with them. That may be why you get the impression that rational people are perhaps too cautious, and could hamper progress. The rules of the game have changed; you can't just be careless anymore.
