
Comment author: JoshuaZ 02 May 2016 03:29:16PM 1 point [-]

Mainstream discussion of existential risk is becoming more of a thing. A recent example is this article in The Atlantic. They mention a variety of risks but focus on nuclear war and worst-case global warming.

Comment author: skeptical_lurker 30 March 2016 06:55:14AM 1 point [-]

Getting five downvotes on this immediately after posting is bizarre

When people arguing with VoiceOfRa got several downvotes in a row, the conclusion drawn was sockpuppets.

So to be fair, let's assume there's an SJW with a sockpuppet army too. Now both sides can claim it's just tit-for-tat.

Comment author: JoshuaZ 30 March 2016 04:34:50PM 4 points [-]

When people arguing with VoiceOfRa got several downvotes in a row, the conclusion drawn was sockpuppets.

There was substantially more evidence that VoiceOfRa was downvoting in a retributive fashion, including database evidence.

Comment author: [deleted] 25 December 2015 03:12:04AM 0 points [-]

I mean, there are sound psychological reasons why having karma would increase participation and quality. That's why Reddit overtook classic newsboards.

In response to comment by [deleted] on Voiceofra is banned
Comment author: JoshuaZ 29 December 2015 09:46:09PM 0 points [-]

Slashdot had karma years before Reddit and was not nearly as successful. Granted, it focused on news articles rather than general forum discussions, but this suggests that karma is not the whole story.

Comment author: JoshuaZ 27 November 2015 05:59:59PM 3 points [-]

Further possible evidence for a Great Filter: a recent paper suggests that as long as the probability of an intelligent species arising on a habitable planet is not tiny (at least about 10^-24), then with very high probability humans are not the only civilization ever to have arisen in the observable universe; a similar result holds for the Milky Way, with around 10^-10 as the relevant probability. An article about the paper is here and the paper itself is here.
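For concreteness, here is a minimal sketch of the style of calculation involved (the planet counts and probabilities below are illustrative assumptions on my part, not the paper's exact figures):

    import math

    # If each of N habitable planets independently gives rise to a civilization
    # with probability p, then P(we are the only one ever) = (1 - p)^N ~ exp(-p * N).
    # Planet counts and p values below are illustrative assumptions.
    scenarios = [
        ("observable universe, p at ~10^-24 threshold", 1e24, 1e-24),
        ("observable universe, p 100x threshold",       1e24, 1e-22),
        ("Milky Way, p at ~10^-10 threshold",           1e10, 1e-10),
    ]

    for label, n_planets, p in scenarios:
        p_alone = math.exp(-p * n_planets)  # chance no other civilization ever arose
        print(f"{label}: P(alone) ~ {p_alone:.3g}")

At the quoted thresholds the expected number of civilizations p*N is about 1, so P(alone) is already down to roughly e^-1 ~ 0.37, and it falls off exponentially as p rises above the threshold.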

Comment author: JoshuaZ 06 November 2015 07:35:38PM 0 points [-]

The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task.

I'm not sure this follows. The primary problems with predicting the rise of Strong AI also apply to most other artificial existential risks.

Comment author: Clarity 06 November 2015 02:05:05PM *  0 points [-]

If you had to select just 5 mutually exclusive and collectively exhaustive variables to predict the outcome of something you have expert knowledge about (relative to, say, me):

  • what is that situation?
  • what are the 5 things that best determine the outcome?

    Please tell us about multiple things if you are an expert at multiple things. No time for humility now; it is better that you are a kind teacher than a modest mute.

If you can come up with a better way I could ask this, please point it out! It sounds clumsy, but the question has a rather technical background to its composition:

Research on expert judgement indicates experts are just as bad as nonexperts in some counterintuitive ways, like predicting the outcome of a thing, but consistently good at identifying the determinants that are important to consider within their field of expertise, such as the variables that determine a given outcome. Human working memory can only hold 7+/-2 items at a time, and in stressful situations it may be situationally degraded to the level of someone with a poorer memory. So I want to ask for 5 things anyone could think about when they come across one of your niches of expertise, so that they can pay attention to those and gather the most relevant information from the experience.

Comment author: JoshuaZ 06 November 2015 07:31:40PM 2 points [-]

Research on expert judgement indicates experts are just as bad as nonexperts in some counterintuitive ways, like predicting the outcome of a thing,

Do you have a citation for this? My understanding was that in many fields experts perform better than nonexperts. The main thing that experts share in common with nonexperts is overconfidence about their predictions.

Comment author: HungryHobo 05 November 2015 05:11:39PM *  2 points [-]

Estimates of the probability of nuclear weapons being deployed in a conflict between the two states in the next 10 years?

The poll is a probability poll, as described here: http://wiki.lesswrong.com/wiki/Comment_formatting#Probability_Poll

Values from 0 to 1.


Comment author: JoshuaZ 06 November 2015 07:27:35PM 2 points [-]

If people want to lock in their predictions, they can do so on Prediction Book here.

Comment author: Lumifer 26 October 2015 02:44:34PM 5 points [-]

I am not making claims about "any sense of order", but going by what I read, European police have lost control of some chunks of their territory.

Take Calais. Here is a sympathetic account which is actually a kind of detective story: a body in a wetsuit washes up on Norway's shore and people try to figure out who, what, and why. The clues lead to an immigrant camp in Calais and, well, it's pretty clear that the French state has lost control there.

Comment author: JoshuaZ 06 November 2015 07:21:50PM 0 points [-]

I am not making claims about "any sense of order", but going by what I read, European police have lost control of some chunks of their territory.

In this context that's what's relevant, since VoiceOfRa talked about "European countries that have given up enforcing any sense of order in large parts of their major cities." If you aren't talking about that, then how is it a relevant response?

Comment author: turchin 24 October 2015 09:16:00PM 0 points [-]

I estimate the total probability of human extinction from a SETI attack at 1 per cent, but much smaller in the case of this star. There are several needed conjunctions: 1. ETs exist but are very far from each other (1 million light years or more), so communication wins over travel. 2. Strong AI is possible.
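A minimal sketch of how such a conjunctive estimate combines (the individual probabilities and the third conjunct below are illustrative assumptions on my part; only the ~1% total comes from the comment):

    # Each conjunct must hold for the scenario to occur, so the probabilities multiply.
    # All individual values here are assumed for illustration only.
    conjuncts = {
        "ETs exist but are very distant, so communication beats travel": 0.1,
        "Strong AI is possible": 0.5,
        "a hostile message is received, decoded, and run (assumed extra conjunct)": 0.2,
    }

    p_total = 1.0
    for claim, p in conjuncts.items():
        p_total *= p  # each additional conjunction can only shrink the total

    print(f"joint probability ~ {p_total:.0%}")  # 0.1 * 0.5 * 0.2 = 0.01, i.e. 1%

Every conjunct added multiplies the total downward, which is why the number of conjunctions matters to the estimate.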

Comment author: JoshuaZ 25 October 2015 01:32:59AM 0 points [-]

Can you explain why you estimate the probability of a SETI attack as so high? If you are a civilization doing this, not only does it require extremely hostile motivations, but it also involves a) making everyone aware of where you are (making you a potential target), b) crafting extremely subtle aspects of an AI that apparently looks non-hostile, and c) doing something which declares your own deep hostility to anyone who notices it.

Comment author: turchin 20 October 2015 02:18:28PM 2 points [-]

They could send information in the form of radio waves, and it could be a description of an unfriendly AI.

Comment author: JoshuaZ 24 October 2015 08:20:11PM 1 point [-]

What probability do you assign to this happening? How many conjunctions are involved in this scenario?
