They could send information in the form of radio waves, and it could be a description of an unfriendly AI.
What probability do you assign to this happening? How many conjunctions are involved in this scenario?
Why wouldn't a giant AC work? Admittedly, you'd need to connect it to the Earth, not just "point it" at us. But an AC is basically a system that uses energy to move heat around; the trick is building one that puts the warm-air exhaust outside the lower atmosphere and gives it escape velocity.
For instance, as long as we're talking mad science, if we could build a space elevator with a big pool of water at the upper end as its counterbalance, cooled by evaporating into space (and maybe by contact with the upper atmosphere?), with a series of tubes connecting the pool with the sea below, then we could run an AC cycle: send warm seawater up, get almost-freezing water down. Of course we'd need a huge throughput to affect global temperature, but the principle is sound :-)
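As a toy sanity check on the principle, here's a back-of-envelope sketch of the required seawater throughput. The energy-imbalance figure (~1 W/m²) and the temperature drop (~20 K, warm seawater up, near-freezing water down) are rough assumptions of mine, not figures from the comment above; the specific heat of water and Earth's surface area are standard values.

```python
# Back-of-envelope: how much seawater would the space-elevator
# "air conditioner" need to cycle to offset Earth's energy imbalance?

SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K), standard value for liquid water
EARTH_SURFACE_AREA = 5.1e14    # m^2
ENERGY_IMBALANCE = 1.0         # W/m^2 -- rough assumed figure
DELTA_T = 20.0                 # K cooling per kg of water cycled (assumed)

# Total heat that would need to be exported per second
total_heat_flux = ENERGY_IMBALANCE * EARTH_SURFACE_AREA          # watts

# Mass of seawater per second needed to carry that heat away
mass_flow = total_heat_flux / (SPECIFIC_HEAT_WATER * DELTA_T)    # kg/s

print(f"Heat to remove: {total_heat_flux:.2e} W")
print(f"Required throughput: {mass_flow:.2e} kg of seawater per second")
```

Under these assumptions the throughput works out to several billion kilograms per second, which makes "huge throughput" concrete: the principle is sound, but the plumbing is heroic.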
Yes, that would work. I think I was reacting more to the phrasing, and imagined something more cartoonish, in particular one where the air conditioner is essentially floating in space.
Thanks for writing this post! I think it contains a number of insightful points.
You seem to be operating under the impression that subjective Bayesians think that Bayesian statistical tools are always the best tools to use in different practical situations? That's likely true of many subjective Bayesians, but I don't think it's true of most "Less Wrong Bayesians." As far as I'm concerned, Bayesian statistics is not intended to handle logical uncertainty or reasoning under deductive limitation. It's an answer to the question "if you were logically omniscient, how should you reason?"
You provide examples where a deductively limited reasoner can't use Bayesian probability theory to get to the right answer, and where designing a prior that handles real-world data in a reasonable way is wildly intractable. Neat! I readily concede that deductively limited reasoners need to make use of a grab-bag of tools and heuristics depending on the situation. When a frequentist tool gets the job done fastest, I'll be first in line to use the frequentist tool. But none of this seems to bear on the philosophical question to which Bayesian probability is intended as an answer.
If someone does not yet have an understanding of thermodynamics and is still working hard to build a perpetual motion machine, then it may be quite helpful to teach them about the Carnot heat engine, as the theoretical ideal. Once it comes time for them to actually build an engine in the real world, they're going to have to resort to all sorts of hacks, heuristics, and tricks in order to build something that works at all. Then, if they come to me and say "I have lost faith in the Carnot heat engine," I'll find myself wondering what they thought the engine was for.
The situation is similar with Bayesian reasoning. For the masses who still say "you're entitled to your own opinion" or who use one argument against an army, it is quite helpful to tell them: Actually, the laws of reasoning are known. This is something humanity has uncovered. Given what you knew and what you saw, there is only one consistent assignment of probabilities to propositions. We know the most accurate way for a logically omniscient reasoner to reason. If they then go and try to do accurate reasoning, while under strong deductive limitations, they will of course find that they need to resort to all sorts of hacks, heuristics, and tricks, to reason in a way that even works at all. But if seeing this, they say "I have lost faith in Bayesian probability theory," then I'll find myself wondering about what they thought the framework was for.
From your article, I'm pretty sure you understand all this, in which case I would suggest that if you do post something like this to main, you consider a reframing. The Bayesians around these parts will very likely agree that (a) constructing a Bayesian prior that handles the real world is nigh impossible; (b) tools labeled "Bayesian" have no particular superpowers; and (c) when it comes time to solving practical real-world problems under deductive limitations, do whatever works, even if that's "frequentist".
Indeed, the Less Wrong crowd is likely going to be first in line to admit that constructing things-kinda-like-priors that can handle induction in the real world (sufficient for use in an AI system) is a massive open problem which the Bayesian framework sheds little light on. They're also likely to be quick to admit that Bayesian mechanics fails to provide an account of how deductively limited reasoners should reason, which is another gaping hole in our current understanding of 'good reasoning.'
I agree with you that deductively limited reasoners shouldn't pretend they're Bayesians. That's not what the theory is there for. It's there as a model of how logically omniscient reasoners could reason accurately, which was big news, given how very long it took humanity to think of themselves as anything like a reasoning engine designed to acquire bits of mutual information with the environment one way or another. Bayesianism is certainly not a panacea, though, and I don't think you need to convince too many people here that it has practical limitations.
That said, if you have example problems where a logically omniscient Bayesian reasoner who incorporates all your implicit knowledge into their prior would get the wrong answers, those I want to see, because those do bear on the philosophical question that I currently see Bayesian probability theory as providing an answer to--and if there's a chink in that armor, then I want to know :-)
You seem to be operating under the impression that subjective Bayesians think that Bayesian statistical tools are always the best tools to use in different practical situations? That's likely true of many subjective Bayesians, but I don't think it's true of most "Less Wrong Bayesians."
I suspect that there's a large amount of variation in what "Less Wrong Bayesians" believe. It also seems that at least some treat it more as an article of faith or tribal allegiance than anything else. See for example some of the discussion here.
What do you see as productive in asking this question?
Build a giant air conditioner in space and point it towards the Earth.
(Hey, it's just as plausible as expanding the orbit of the entire planet...)
Expanding the orbit of the Earth works under the known laws of physics but wouldn't be practically doable at all. A giant air conditioner wouldn't work for simple physics reasons.
There's nothing for me to respond to.
Let me unroll my ahem.
You claimed this is a mathematical problem, but in the next breath said that math can't solve it. Then what was the point of claiming it to be a math problem in the first place? Just because dealing with it involves numbers? That does not make it a math problem.
The UN
LOL. Can we please stick a bit closer to the real world?
Would a historical example of what you're talking about be the legality of slavery?
Actually, the first example that comes to mind is when the US decided that all Americans who happen to be of Japanese descent and have the misfortune to live on the West Coast need to be rounded up and sent to concentration, err.. internment camps.
Problems can have a mathematical aspect without being completely solvable by math.
The sourcing there is weak and questionable at best. That people assert that areas are "no-go" is pretty different from there being a genuine lack of any sense of order, and that's even before one looks at the issue of whether this is any different from some areas simply being higher in crime than others.
Still reading, quick note:
tradion
Should be tradition?
I estimate the total probability of human extinction from a SETI attack at 1 percent, but much smaller in the case of this star. Several conjunctions are needed: 1. ETs exist but are very far from each other, so communication wins over travel (1 million light years or more). 2. Strong AI is possible.
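As a toy illustration of how a conjunctive estimate like this composes, the joint probability is just the product of the individual conditions (if we treat them as independent). The individual numbers below are placeholders I made up for illustration, not the parent comment's actual estimates.

```python
# Hypothetical conjunctive estimate for a SETI-attack scenario.
# Each factor is a placeholder probability, not a sourced figure.

p_et_far_apart = 0.2      # ETs exist, but so far apart that messages beat travel
p_strong_ai = 0.5         # strong AI is possible
p_attack_attempted = 0.1  # such a civilization actually sends a hostile message

# Assuming independence, the joint probability is the product
p_seti_attack = p_et_far_apart * p_strong_ai * p_attack_attempted

print(f"Joint probability: {p_seti_attack:.3f}")
```

The point of writing it out is that each additional conjunct can only shrink the total, which is why counting the conjunctions in a scenario matters for the estimate.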
Can you explain why you see the probability of a SETI attack as so high? If you are a civilization doing this, not only does it require extremely hostile motivations, but it also requires a) making everyone aware of where you are (making you a potential target), b) being able to craft extremely subtle aspects of an AI that apparently looks non-hostile, and c) doing something which declares your own deep hostility to anyone who notices it.