In response to Crazy Ideas Thread
Comment author: polymathwannabe 08 July 2015 03:08:12PM 3 points

A single world language should be designed and promoted. Previous attempts have been too Eurocentric to take advantage of all useful grammatical features that are available.

Alternative option: English is already a de facto world language, and it is well suited to borrowing foreign terms when it needs to, but humanity should be ashamed that it conducts its main scientific, commercial and diplomatic operations in a language with such a defective writing system. Spelling reform (or a completely new, purely phonetic alphabet) is urgent. I would advocate adapting Hangul for that purpose.

Comment author: D_Malik 09 July 2015 02:13:19AM 4 points

Playing devil's advocate: Archaic spelling rules allow you to quickly gauge other people's intelligence, which is useful. It causes society to respect stupid people less, by providing objective evidence of their stupidity.

But I don't actually think the benefits outweigh the costs there, and the signal is confounded by things like being a native English-speaker.

In response to Crazy Ideas Thread
Comment author: Raiden 08 July 2015 08:27:14PM 8 points

I always thought that the "most civilizations just upload and live in a simulated utopia instead of colonizing the universe" response to the Fermi Paradox was obviously wrong, because it would take only ONE civilization breaking this trend to be visible, and regardless of what the aliens are doing, a galaxy of resources is always useful to have. But I was reading somewhere (I don't remember where) about an interesting idea of a super-Turing computer that could calculate anything, regardless of time constraints and ignoring the halting problem. I think the proposal was to use closed timelike curves or something.

This, of course, seemed very far-fetched, but the implications are fascinating. It would be possible to use such a device to simulate an eternity in a moment. We could upload and have an eternity of eudaimonia, without ever having to worry about running out of resources or the heat death of the universe or alien superintelligences. Even if the computer were destroyed an instant later, it wouldn't matter to us. If such a thing were possible, then that would be an obvious solution to the Fermi Paradox.

In response to comment by Raiden on Crazy Ideas Thread
Comment author: D_Malik 09 July 2015 02:05:24AM 7 points

If humanity did this, at least some of us would still want to spread out in the real universe, for instance to help other civilizations. (Yes, the world inside the computer is infinitely more important than real civilizations, but I don't think that matters.)

Also, if these super-Turing machines are possible, and the real universe is finite, then we are living in a simulation with probability 1, because you could use them to simulate infinitely many observer-seconds.

In response to Crazy Ideas Thread
Comment author: D_Malik 08 July 2015 09:55:52AM * 13 points

Suppose backward time travel is possible. If so, it's probably of the variety where you can't change the past (i.e. Novikov self-consistent), because that's mathematically simpler than time travel which can modify the past. In almost all universes where people develop time travel, they'll counterfactualize themselves by deliberately or accidentally altering the past, i.e. they'll "cause" their universe-instance to not exist in the first place, because that universe would be inconsistent if it existed. Therefore in most universes that allow time travel and actually exist, almost all civilizations will fail to develop time travel, which might happen because those civilizations die out before they become sufficiently technologically advanced.

Perhaps this is the Great Filter. It would look like the Great Filter is nuclear war or disease or whatever, but actually time-consistency anthropics are "acausing" those things.

This assumes that either most civilizations would discover time travel before strong AI (in the absence of anthropic effects), or strong AI does not rapidly lead to a singleton. Otherwise, the resulting singleton would probably recognize that trying to modify the past is acausally risky, so the civilization would expand across space without counterfactualizing itself, so time-consistency couldn't be the Great Filter. They would probably also seek to colonize as much of the universe as they could, to prevent less cautious civilizations from trying time-travel and causing their entire universe to evaporate in a puff of inconsistency.

This also assumes that a large fraction of universes allow time travel. Otherwise, most life would just end up concentrated in those universes that don't allow time travel.

Comment author: G0W51 30 June 2015 04:43:30AM 1 point

Here's a potential existential risk. Suppose a chemical is used for some task or made as a byproduct of another task, and in particular spreads throughout the atmosphere. Additionally, suppose it causes sterility, but only after a very long delay. Such a chemical could attain widespread use before its deleterious effects were discovered, and by then it would have already sterilized everyone, potentially causing an existential catastrophe. I know the probability of this scenario seems very small compared to that of other risks, but is it worthy of consideration?

Comment author: D_Malik 30 June 2015 05:22:13AM 4 points

Interesting. Very small concentrations of the chemical would have to sterilize practically everyone they contacted - else it would just cause humanity to very rapidly evolve resistance, or maybe kill off the developed world.

Reminds me of the decline in testosterone levels over the past couple decades, which might be due to endocrine-disrupting compounds in the water supply and in plastics and food, but which hasn't been enough to sterilize much of the population.

Comment author: D_Malik 29 June 2015 03:14:06PM 0 points

I think two-boxing in your modified Newcomb is the correct answer. In the smoking lesion, smoking is correct, so there's no contradiction.

One-boxing is correct in the classic Newcomb because your decision can "logically influence" the fact of "this person one-boxes". But your decision in the modified Newcomb can't logically influence the fact of "this person has the two-boxing gene".

Comment author: D_Malik 29 June 2015 04:38:23AM 1 point

Random thing that I can't recall seeing on LW: Suppose A is evidence for B, i.e. P(B|A) > P(B). Then by Bayes, P(A|B) = P(A)P(B|A)/P(B) > P(A)P(B)/P(B) = P(A), i.e. B is evidence for A. In other words, the is-evidence-for relation is symmetric.

For instance, this means that the logical fallacy of affirming the consequent (A implies B, and B is true, therefore A) is actually probabilistically valid. "If Socrates is a man then he'll probably die; Socrates died, therefore it's more likely he's a man."
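The symmetry above is easy to check numerically. A quick sanity check (the probability values here are illustrative numbers of my own, not from the comment):

```python
# Illustrative numbers (my own choice): set things up so A is evidence
# for B, i.e. P(B|A) > P(B); Bayes then forces P(A|B) > P(A).
p_a = 0.3               # P(A)
p_b_given_a = 0.9       # P(B|A)
p_b_given_not_a = 0.5   # P(B|~A)

p_ab = p_a * p_b_given_a                  # P(A and B)
p_b = p_ab + (1 - p_a) * p_b_given_not_a  # P(B), by total probability
p_a_given_b = p_ab / p_b                  # P(A|B), by Bayes' theorem

assert p_b_given_a > p_b   # A is evidence for B...
assert p_a_given_b > p_a   # ...therefore B is evidence for A
```

Swapping the conditionals so that P(B|A) < P(B) reverses both inequalities together, which is the same symmetry in the other direction.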

Comment author: D_Malik 29 June 2015 04:26:38AM * 4 points

Maybe the differentiable physics we observe is just an approximation of a lower-level non-differentiable physics, the same way Newtonian mechanics is an approximation of relativity.

If physics is differentiable, that's definitely evidence, by symmetry of is-evidence-for. But I have no idea how strong this evidence is because I don't know the distribution of the physical laws of base-level universes (which is a very confusing issue). Do "most" base-level universes have differentiable physics? We know that even continuous functions "usually" aren't differentiable, but I'm not sure whether that even matters, because I have no idea how it's "decided" which universes exist.

Also, maybe intelligence is less likely to arise in non-differentiable universes. But if so, it's probably just a difference of degree of probability, which would be negligible next to the other issues, which seem like they'd drive the probability to almost exactly 0 or 1.

Comment author: VoiceOfRa 17 June 2015 02:49:38AM 3 points

They're r-selected like insects, i.e. their natural reproduction process involves creating lots of children and then allowing most to die.

That doesn't seem like it would lend itself to evolving culture. Specifically, since parents don't invest in their offspring they don't tell them what they've learned. Thus no matter how smart individuals are, knowledge doesn't pass to the next generation.

Comment author: D_Malik 17 June 2015 08:47:57AM 0 points

Perhaps they create lots of children, let most of them die shortly after being born (perhaps by fighting each other), and then invest heavily in the handful that remain. Once food becomes abundant, some parents elect not to let most of their children die, leading to a population boom.

In fact, if you squint a little, humans already demonstrate this: men produce large numbers of sperm, which compete to reach the egg first. Perhaps that would have led to exactly this Malthusian disaster, if it weren't for the fact that women only have a single egg to be fertilized, and sperm can't grow to adulthood on their own.

Comment author: Lumifer 16 June 2015 02:45:58PM 5 points

Your Malthusian collapse seems to be conditional on some particulars of aliens' biology, but the Great Filter has to be very very general and almost universal.

Comment author: D_Malik 17 June 2015 08:41:01AM * 0 points

Agreed. But the Great Filter could consist of multiple Moderately Great Filters, of which the Malthusian trap could be one. Or perhaps there could be, say, only n Quite Porous Filters which each eliminate only 1/n of civilizations, but that happen to be MECE (mutually exclusive and collectively exhaustive), so that together they eliminate all civilizations.
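To make the MECE arithmetic concrete, here's a toy sketch (the numbers are my own illustration, not from the thread): each filter alone is quite porous, eliminating only 1/n of civilizations, yet because the filters are disjoint and jointly exhaustive, no civilization survives all of them.

```python
# Toy illustration (numbers are mine): n "Quite Porous Filters" that are
# MECE -- mutually exclusive (disjoint) and collectively exhaustive --
# each eliminating only 1/n of the civilizations.
n = 5
civilizations = set(range(100))  # 100 hypothetical civilizations
filters = [set(range(i * 20, (i + 1) * 20)) for i in range(n)]

# Each filter alone eliminates only 1/n of civilizations...
assert all(len(f) == len(civilizations) // n for f in filters)

# ...yet together the disjoint filters eliminate every civilization.
survivors = civilizations - set().union(*filters)
assert survivors == set()
```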

Comment author: Elo 15 June 2015 07:08:16AM 2 points

I think you may have oversimplified bio-engineering to suggest it could arise in such a way before advanced technology. Setting that aside for a moment: in the future we could conclude that vat-brains are more effective at some tasks than programmed robots.

I don't think there are many standard ways to look at the ethics of manipulating brains in vats in exchange for their computation; however, a stripped-down brain essentially becomes a robot. This question turns on our understanding of sentience, which we are really not too sure about at the moment.

I just see biological-based robots with bio-mechanisms of achieving tasks.

Comment author: D_Malik 16 June 2015 07:54:18AM * 0 points

> I think you may have oversimplified bio-engineering to suggest it could arise in such a way before advanced technology.

I think it could be accomplished with quite primitive technology, especially if the alien biology is robust, and if you just use natural brains rather than trying to strip them down to minimize food costs (which would also make them more worthy of moral consideration). Current human technology is clearly sufficient: humans have already kept isolated brains alive, and used primitive biological brains to control robots. If you connect new actuators or sensors to a mammalian brain, it uses them just fine after a short adaptation period, and it seems likely alien brains would work the same.
