Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: username2 25 February 2016 12:15:00AM 0 points

What would a man or woman focused solely on their biological fitness do in the modern world? Among animals, a male would procreate with as many females as he could, with the best scenario being that other males raise his offspring. Today, with morning-after pills and abortion available, it is no longer enough for Pan to pinky-swear he would stay around. How does he alter his strategy? One option I can quickly think of is sperm donation, but would that be his optimal strategy? I am certain that the hypothetical sultan of the old days could produce more children, but how do their relative fitnesses compare, given that in most of the Western world population growth has slowed, plateaued, or even reversed?

For females, egg donation seems like it should beat older methods hands down.

Would these really be the optimal strategies? In most cases, successful reproduction requires that both sides desire it. I am not sure anyone is attractive enough to simply put their genes on offer without a bank as an intermediary. On the other hand, I have heard tales of same-sex couples organizing such arrangements.

Comment author: _rpd 29 February 2016 08:03:33PM -2 points

Apparently being a postman in the 60s and having a good Johnny Cash impression worked out well ...


Comment author: AABoyles 26 February 2016 06:28:15PM 0 points

...Think of the Federation's "Prime Directive" in Star Trek.

Comment author: _rpd 27 February 2016 09:40:47PM 1 point

Or we are an experiment (natural or artificial) that yields optimal information when unmanipulated or manipulated imperceptibly (from our point of view).

Comment author: G0W51 20 February 2016 07:25:54PM 3 points

Is there a term for a generalization of existential risk that includes the extinction of alien intelligences or the drastic decrease of their potential? Existential risk, that is, the extinction of Earth-originating intelligent life or the drastic decrease of its potential, does not sound nearly as harmful if there are alien civilizations that become sufficiently advanced in place of Earth-originating life. However, an existential risk sounds far more harmful if it compromises all intelligent life in the universe, or if there is no other intelligent life in the universe to begin with. Perhaps this would make physics experiments more concerning than other existential risks: even if their chance of causing the extinction of Earth-originating life is much smaller, their chance of eliminating all life in the universe may be higher.

Comment author: _rpd 23 February 2016 06:36:55AM 1 point

I really like this distinction. The closest I've seen is discussion of existential risk from a non-anthropocentric perspective. I suppose the neologism would be panexistential risk.

Comment author: _rpd 23 February 2016 05:24:13AM 3 points

The desire to know error estimates and confidence levels around assertions and figures, or better yet, probability mass curves. And a default attitude of skepticism towards assertions and figures when they are not provided.

Comment author: SoerenE 19 February 2016 08:16:47PM 0 points

Wow. It looks like light from James' spaceship can indeed reach us, even if light from us cannot reach the spaceship.

Comment author: _rpd 19 February 2016 09:39:18PM 1 point

Yes, but once the distance exceeds the Hubble distance of the time, the light from the spaceship will redshift out of existence as it crosses the event horizon. Wikipedia says that in around 2 trillion years, this will be true for light from all galaxies outside the Local Supercluster.

Comment author: SoerenE 19 February 2016 07:14:19AM 0 points

Thank you. It is moderately clear to me from the link that James' thought-experiment is possible.

Do you know of a more authoritative description of the thought-experiment, preferably with numbers? It would be nice to have an equation where you give the speed of James' spaceship and the distance to it, and calculate if the required speed to catch it is above the speed of light.

Comment author: _rpd 19 February 2016 06:59:08PM *  1 point

Naively, the required condition is v + dH > c, where v is the velocity of the spaceship, d is the distance from the threat and H is Hubble's constant.
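As a rough illustration of that naive condition (the numbers and the Hubble-constant value of about 70 km/s/Mpc are my own approximations, not a rigorous relativistic calculation):

```python
# Naive check of the condition v + d*H > c.
# H0 = 70 km/s/Mpc is an approximate present-day value; a proper
# treatment needs full cosmology (see the Davis & Lineweaver paper).
C = 299_792.458   # speed of light, km/s
H0 = 70.0         # Hubble constant, km/s per megaparsec (approximate)

def uncatchable(v_fraction_of_c, distance_mpc):
    """True if the ship's own speed plus the recession speed of
    space at its distance exceeds the speed of light."""
    v = v_fraction_of_c * C
    recession = distance_mpc * H0
    return v + recession > C

# A ship 1 Gpc (1000 Mpc) away moving at 0.9c:
# recession ~ 70,000 km/s plus v ~ 269,813 km/s exceeds c.
print(uncatchable(0.9, 1000))  # True
```

At 1000 Mpc the recession term alone is already a quarter of c, which is why distance matters as much as the ship's own speed here.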

However, when discussing distances on the order of billions of light years and velocities near the speed of light, the complications are many and are themselves an area of current research. For a more sophisticated treatment, see user Pulsar's answer to this question ...


... in particular the graph Pulsar made for the answer ...


... and/or the Davis and Lineweaver paper [PDF] referenced in the answer.

Comment author: SoerenE 18 February 2016 07:19:08AM 0 points

I've seen this claim many places, including in the Sequences, but I've never been able to track down an authoritative source. It seems false in classical physics, and I know little about relativity. Unfortunately, my Google-Fu is too weak to investigate. Can anyone help?

Comment author: _rpd 18 February 2016 10:36:57AM 4 points

this claim

Do you mean the metric expansion of space?


Because this expansion is caused by relative changes in the distance-defining metric, this expansion (and the resultant movement apart of objects) is not restricted by the speed of light upper bound of special relativity.

Comment author: DataPacRat 10 February 2016 11:30:32PM 1 point

Seeking socio-econo-political organizing methods

How many useful ways are there for an uploaded mind, an em, to organize copies of itself to maximize the accuracy of their final predictions?

The few that I've been able to think of:

  • "Strict hierarchy". DPR.2.1 can advise DPR.2, but DPR.2's decision overrides DPR.2.1's.
  • "One em, one vote". DPR.2 gets a vote, and so does DPR.2.1.
  • "One subjective year, one vote". DPR.2.1 is running twice as fast as DPR.2, and so DPR.2.1 gets twice as many votes.
  • "Prediction market". The DPRs implement some sort of internal currency (which, thanks to blockchains, is fairly easy), and make bets, receiving rewards for accurate predictions.
  • "Human swarm". Based on https://www.singularityweblog.com/unanimous-ai-louis-rosenberg-on-human-swarming/ .

How many reasonably plausible methods am I missing?

Comment author: _rpd 11 February 2016 04:28:36AM 0 points

"Prediction market". The DPRs implement some sort of internal currency (which, thanks to blockchains, is fairly easy), and make bets, receiving rewards for accurate predictions.

Taking this a little further, the final prediction can be a weighted combination of the individual predictions, with the weights corresponding to historical or expected accuracy.

However, different individuals will likely specialize to be more accurate at different cognitive tasks (in fact, you may wish to set up the reward economy to encourage such specialization), so the set of weights will vary by cognitive task, or more generally become a weighting function if you can define some sort of sensible topology for the cognitive task space.
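A minimal sketch of the first step, accuracy-weighted aggregation (the function name and the numbers are my own illustration, not something specified in the comment):

```python
def combine(predictions, accuracies):
    """Weighted average of probability estimates, with each copy's
    weight proportional to its historical accuracy score."""
    total = sum(accuracies)
    return sum(p * a for p, a in zip(predictions, accuracies)) / total

# Three copies estimate the probability of some event; the copy
# with the best track record (accuracy 0.9) pulls the combined
# estimate toward its own prediction of 0.9.
print(combine([0.7, 0.6, 0.9], [0.5, 0.8, 0.9]))
```

Making the weights task-dependent would then just mean swapping the single `accuracies` list for a per-task lookup.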

Comment author: ChristianKl 10 February 2016 01:03:20PM 0 points

http://www.metaculus.com/questions/ seems to be a good successor to predictionbook. Does anybody know who's responsible for it?

Comment author: _rpd 10 February 2016 01:33:13PM 0 points

AngelList says Anthony Aguirre is the founder.

Comment author: WalterL 09 February 2016 05:09:05PM 1 point

It seems like "should" is doing a lot of heavy lifting in that sentence. If you had to turn that word into a sentence or two to let me understand what you mean, what would it be?

Comment author: _rpd 09 February 2016 08:05:16PM 0 points

I would say that actions that make a particular person happy can have consequences that decrease the collective happiness of some group; a tyrant or an addict would be examples. In answering the question "What else are you gonna do?" I'd propose at least "As long as you harm no group's happiness, do what makes you happy," the Wiccan Rede "An' ye harm none, do what thou wilt" probably being too strict (it rules out being Batman, for example).
