Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: JoshuaZ 27 November 2015 05:59:59PM 2 points [-]

Further possible evidence for a Great Filter: A recent paper suggests that as long as the probability of an intelligent species arising on a habitable planet is not tiny (at least about 10^-24), then with very high probability humans are not the only civilization ever to have arisen in the observable universe; a similar result holds for the Milky Way with around 10^-10 as the relevant probability. An article about the paper is here and the paper is here.
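As a back-of-the-envelope illustration of why such a small per-planet probability still implies company (my own placeholder numbers, not the paper's): if each of N habitable planets independently produces a civilization with probability p, the count is roughly Poisson with mean N·p, so the chance that at least one other civilization ever arose is about 1 - exp(-N·p).

```python
import math

def prob_not_alone(n_planets, p_per_planet):
    """Chance at least one other civilization arose, under a Poisson sketch."""
    expected = n_planets * p_per_planet
    return 1 - math.exp(-expected)

# Hypothetical figures for illustration: ~2e24 habitable planets in the
# observable universe, ~1e11 in the Milky Way.
print(prob_not_alone(2e24, 1e-24))  # ~0.865 for the observable universe
print(prob_not_alone(1e11, 1e-10))  # ~0.99995 for the Milky Way
```

With these assumed planet counts, even a per-planet probability at the paper's threshold makes "we are alone" the minority outcome.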

Comment author: JoshuaZ 06 November 2015 07:35:38PM 0 points [-]

The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task.

I'm not sure this follows. The primary problems with predicting the rise of Strong AI apply to most other artificial existential risks also.

Comment author: Clarity 06 November 2015 02:05:05PM *  0 points [-]

If you had to select just 5 mutually exclusive and collectively exhaustive variables to predict the outcome of something *you have expert knowledge about (relative to, say, me)*:

  • what is that situation?
  • what are the 5 things that best determine the outcome?

    Please tell us about multiple things if you are an expert at multiple things. No time for humility now; it is better that you are a kind teacher than a modest mute.

If you can come up with a better way I could ask this, please point it out! It sounds clumsy, but the question has a rather technical background to its composition:

Research on expert judgement indicates that experts are just as bad as nonexperts in some counterintuitive ways, such as predicting the outcome of events, but are consistently good at identifying which determinants matter within their field of expertise, i.e. the variables that determine a given outcome. Human working memory can only hold 7±2 things at a time. So, anticipating stressful situations where our memory may be situationally brought down to the level of someone with a poorer memory, I want to ask for 5 things anyone could think about when they come across one of your niches of expertise, so that they can pay attention to them and gather the most relevant information from the experience.

Comment author: JoshuaZ 06 November 2015 07:31:40PM 2 points [-]

Research on expert judgement indicates experts are just as bad as nonexperts in some counterintuitive ways, like predicting the outcome of a thing,

Do you have a citation for this? My understanding was that in many fields experts perform better than nonexperts. The main thing that experts share in common with non-experts is overconfidence about their predictions.

Comment author: HungryHobo 05 November 2015 05:11:39PM *  2 points [-]

Estimates of nuclear weapons being deployed in a conflict between the 2 states in the next 10 years?

Poll is a probability poll as described here: http://wiki.lesswrong.com/wiki/Comment_formatting#Probability_Poll

values from 0 to 1


Comment author: JoshuaZ 06 November 2015 07:27:35PM 2 points [-]

If people want to lock in their predictions they can do so on Prediction Book here.

Comment author: Lumifer 26 October 2015 02:44:34PM 5 points [-]

I am not making claims about "any sense of order", but going by what I read, European police have lost control of some chunks of their territory.

Take Calais. Here is a sympathetic account which is actually a kinda-detective story: a body in a wetsuit washes up on Norway's shore and people are trying to figure out who-what-why. The clues lead to an immigrant camp in Calais and, well, it's pretty clear that the French state lost control there.

Comment author: JoshuaZ 06 November 2015 07:21:50PM 0 points [-]

I am not making claims about "any sense of order", but going by what I read, European police have lost control of some chunks of their territory.

In this context that's what's relevant, since VoiceOfRa talked about "European countries that have given up enforcing any sense of order in large parts of their major cities." If you aren't talking about that, then how is it a relevant response?

Comment author: turchin 24 October 2015 09:16:00PM 0 points [-]

I estimate the total probability of human extinction because of a SETI attack at 1 per cent, but much smaller in the case of this star. Several conjunctions are needed: 1. ET exists, but civilizations are very far from each other, so communication wins over travel (1 million light years or more). 2. Strong AI is possible.
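An estimate built from conjunctions like this multiplies together the probability of each required step; a minimal sketch (all step probabilities below are my own illustrative placeholders, not turchin's):

```python
# A conjunctive scenario requires every step to hold, so the probabilities
# multiply (assuming rough independence). Placeholder numbers for illustration.
steps = {
    "ET civilizations exist within our past light cone": 0.5,
    "they are too far apart for travel, so only signals matter": 0.5,
    "Strong AI is possible and transmittable as a description": 0.5,
    "we receive, decode, and run the hostile message": 0.1,
}

p_total = 1.0
for step, p in steps.items():
    p_total *= p

print(p_total)  # 0.0125 with these placeholder numbers
```

The point of laying it out this way is that each added conjunct drives the total down quickly, which is why the number of conjunctions matters.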

Comment author: JoshuaZ 25 October 2015 01:32:59AM 0 points [-]

Can you explain why you see the probability of a SETI attack as so high? If you are a civilization doing this, not only does it require extremely hostile motivations, but it also a) makes everyone aware of where you are (making you a potential target), b) requires building extremely subtle hostility into an AI that apparently looks non-hostile, and c) declares your own deep hostility to anyone who notices it.

Comment author: turchin 20 October 2015 02:18:28PM 2 points [-]

They could send information in the form of radio waves, and it could be a description of an unfriendly AI.

Comment author: JoshuaZ 24 October 2015 08:20:11PM 1 point [-]

What probability do you assign to this happening? How many conjunctions are involved in this scenario?

Comment author: DanArmak 24 October 2015 03:11:36PM *  5 points [-]

Why wouldn't a giant AC work? Admittedly, you'd need to connect it to the Earth, not just "point it" at us. But an AC is basically a system that uses energy to move heat around; the trick is building one that puts the warm-air exhaust outside the lower atmosphere and gives it escape velocity.

For instance, as long as we're talking mad science, if we could build a space elevator with a big pool of water at the upper end as its counterbalance, cooled by evaporating into space (and maybe by contact with the upper atmosphere?), with a series of tubes connecting the pool with the sea below, then we could run an AC cycle: send warm seawater up, get almost-freezing water down. Of course we'd need a huge throughput to affect global temperature, but the principle is sound :-)

Comment author: JoshuaZ 24 October 2015 08:12:49PM 1 point [-]

Yes, that would work. I think I was reacting to the phrasing and imagined something more cartoonish, in particular something where the air conditioner is essentially floating in space.

Comment author: So8res 23 October 2015 11:44:22PM *  8 points [-]

Thanks for writing this post! I think it contains a number of insightful points.

You seem to be operating under the impression that subjective Bayesians think that Bayesian statistical tools are always the best tools to use in different practical situations. That's likely true of many subjective Bayesians, but I don't think it's true of most "Less Wrong Bayesians." As far as I'm concerned, Bayesian statistics is not intended to handle logical uncertainty or reasoning under deductive limitation. It's an answer to the question "if you were logically omniscient, how should you reason?"

You provide examples where a deductively limited reasoner can't use Bayesian probability theory to get to the right answer, and where designing a prior that handles real-world data in a reasonable way is wildly intractable. Neat! I readily concede that deductively limited reasoners need to make use of a grab-bag of tools and heuristics depending on the situation. When a frequentist tool gets the job done fastest, I'll be first in line to use the frequentist tool. But none of this seems to bear on the philosophical question to which Bayesian probability is intended as an answer.

If someone does not yet have an understanding of thermodynamics and is still working hard to build a perpetual motion machine, then it may be quite helpful to teach them about the Carnot heat engine, as the theoretical ideal. Once it comes time for them to actually build an engine in the real world, they're going to have to resort to all sorts of hacks, heuristics, and tricks in order to build something that works at all. Then, if they come to me and say "I have lost faith in the Carnot heat engine," I'll find myself wondering what they thought the engine was for.

The situation is similar with Bayesian reasoning. For the masses who still say "you're entitled to your own opinion" or who use one argument against an army, it is quite helpful to tell them: Actually, the laws of reasoning are known. This is something humanity has uncovered. Given what you knew and what you saw, there is only one consistent assignment of probabilities to propositions. We know the most accurate way for a logically omniscient reasoner to reason. If they then go and try to do accurate reasoning, while under strong deductive limitations, they will of course find that they need to resort to all sorts of hacks, heuristics, and tricks, to reason in a way that even works at all. But if seeing this, they say "I have lost faith in Bayesian probability theory," then I'll find myself wondering about what they thought the framework was for.
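The "only one consistent assignment" point can be made concrete with a minimal sketch: given a prior and the likelihoods, Bayes' rule fixes the posterior uniquely (the numbers here are my own illustrative choices):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: the unique posterior consistent with prior and likelihoods."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Illustrative numbers: a 1% prior on the hypothesis, evidence with a 90%
# true-positive rate and a 5% false-positive rate.
print(posterior(0.01, 0.9, 0.05))  # ~0.154: the only consistent answer
```

Any other probability assigned to the hypothesis after seeing that evidence would be inconsistent with the stated prior and likelihoods, which is the sense in which the laws of reasoning pin down a logically omniscient reasoner's answer.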

From your article, I'm pretty sure you understand all this, in which case I would suggest that if you do post something like this to main, you consider a reframing. The Bayesians around these parts will very likely agree that (a) constructing a Bayesian prior that handles the real world is nigh impossible; (b) tools labeled "Bayesian" have no particular superpowers; and (c) when it comes time to solving practical real-world problems under deductive limitations, do whatever works, even if that's "frequentist".

Indeed, the Less Wrong crowd is likely going to be first in line to admit that constructing things-kinda-like-priors that can handle induction in the real world (sufficient for use in an AI system) is a massive open problem which the Bayesian framework sheds little light on. They're also likely to be quick to admit that Bayesian mechanics fails to provide an account of how deductively limited reasoners should reason, which is another gaping hole in our current understanding of 'good reasoning.'

I agree with you that deductively limited reasoners shouldn't pretend they're Bayesians. That's not what the theory is there for. It's there as a model of how logically omniscient reasoners could reason accurately, which was big news, given how very long it took humanity to think of themselves as anything like a reasoning engine designed to acquire bits of mutual information with the environment one way or another. Bayesianism is certainly not a panacea, though, and I don't think you need to convince too many people here that it has practical limitations.

That said, if you have example problems where a logically omniscient Bayesian reasoner who incorporates all your implicit knowledge into their prior would get the wrong answers, those I want to see, because those do bear on the philosophical question that I currently see Bayesian probability theory as providing an answer to--and if there's a chink in that armor, then I want to know :-)

Comment author: JoshuaZ 24 October 2015 12:26:21PM 3 points [-]

You seem to be operating under the impression that subjective Bayesians think that Bayesian statistical tools are always the best tools to use in different practical situations. That's likely true of many subjective Bayesians, but I don't think it's true of most "Less Wrong Bayesians."

I suspect that there's a large amount of variation in what "Less Wrong Bayesians" believe. It also seems that at least some treat it more as an article of faith or tribal allegiance than anything else. See, for example, some of the discussion here.

Comment author: JoshuaZ 24 October 2015 12:10:40PM 6 points [-]

What do you see as productive in asking this question?
