What does ELK stand for here?
This is probably the best argument I have seen yet for being concerned about what things like GPT are going to be able to do. Very eye opening.
66.42512077294685%
This should not be reported this way. It should be reported as something like 66%. The other digits are not meaningful.
I don't know of any broader, larger trends. It is worth noting here that the Rabbis of the Talmud themselves thought that the prior texts (especially the Torah itself) were infallible, so it seems that part of what might be happening is that over time, more and more gets put into the very-holy-text category.
Also, it seems important to distinguish here between being unquestionably correct and being literal. In a variety of different religions this becomes an important distinction, and often a sacrifice of literalism is in practice made to preserve correctn...
MWI doesn't say anything about other constants - the other parts of our wavefunction should have the same constants. However, other multiverse hypotheses do suggest that physical constants could be different.
That seems like an accurate analysis.
I'm actually more concerned about an error in logic. If one estimates a probability of, say, k that climate change will cause an extinction event in a given year, then the probability of it occurring at some point in a given string of years is not the obvious one, since part of what goes into estimating k is the chance that climate change can in fact cause such an event at all, and that uncertainty is shared across all the years.
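To make the worry concrete, here is a minimal sketch with made-up numbers (not actual risk estimates) of how treating the years as independent draws of k overstates the multi-year risk when part of the estimate of k is really uncertainty about whether the mechanism exists at all:

```python
# Toy numbers purely for illustration: suppose the per-year probability k is
# really a mixture -- with probability q climate change is capable of causing
# an extinction event at all, and if it is, the per-year chance is r.  Then
# k = q * r, but the years are not independent draws, because the "is it
# capable at all?" uncertainty is shared across every year.
q = 0.1      # assumed chance the mechanism exists at all
r = 0.01     # assumed per-year chance, conditional on the mechanism existing
k = q * r    # the single-year probability one would report
n = 200      # number of years considered

naive = 1 - (1 - k) ** n          # treats each year as an independent draw with probability k
correct = q * (1 - (1 - r) ** n)  # the shared uncertainty caps the total risk at q

print(f"single-year k = {k}")
print(f"naive {n}-year probability:   {naive:.3f}")    # ~0.18
print(f"correct {n}-year probability: {correct:.3f}")  # ~0.09
```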
Mainstream discussion of existential risk is becoming more of a thing. A recent example is this article in The Atlantic. They do mention a variety of risks but focus on nuclear war and worst-case global warming.
When people arguing with VoiceOfRa got several downvotes in a row, the conclusion drawn was sockpuppets.
There was substantially more evidence that VoiceOfRa was downvoting in a retributive fashion, including database evidence.
Slashdot had karma years before Reddit and was not nearly as successful. Granted, it didn't try to do general forum discussions, just news articles, but this suggests that karma is not the whole story.
Further possible evidence for a Great Filter: A recent paper suggests that as long as the probability of an intelligent species arising on a habitable planet is not tiny (at least about 10^-24), then with very high probability humans are not the only civilization to have ever existed in the observable universe, and a similar result holds for the Milky Way with around 10^-10 as the relevant probability. An article about the paper is here and the paper is here.
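A minimal sketch of the kind of back-of-the-envelope logic involved, with my own assumed order-of-magnitude planet counts (the paper's actual model is more careful):

```python
import math

# If there are N habitable planets and each independently produces a
# civilization with probability p, then P(we are the only one ever) is
# (1 - p)^N, which is approximately exp(-p * N).
def prob_we_are_alone(p_per_planet, n_planets):
    return math.exp(-p_per_planet * n_planets)

N_OBSERVABLE_UNIVERSE = 1e24   # assumed habitable-planet count, observable universe
N_MILKY_WAY = 1e10             # assumed habitable-planet count, Milky Way

for p in (1e-22, 1e-24, 1e-26):
    print("observable universe:", p, prob_we_are_alone(p, N_OBSERVABLE_UNIVERSE))
for p in (1e-8, 1e-10, 1e-12):
    print("Milky Way:          ", p, prob_we_are_alone(p, N_MILKY_WAY))
# Unless p is pushed all the way down to roughly 1/N, the chance that we are
# the only civilization ever to have arisen is tiny.
```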
The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task.
I'm not sure this follows. The primary problems with predicting the rise of Strong AI apply to most other artificial existential risks also.
Research on expert judgement indicates experts are just as bad as nonexperts in some counterintuitive ways, like predicting the outcome of a thing,
Do you have a citation for this? My understanding was that in many fields experts perform better than nonexperts. The main thing that experts share in common with non-experts is overconfidence about their predictions.
I am not making claims about "any sense of order", but going by what I read European police lost control of some chunks of its territory.
In this context that's what's relevant, since VoiceOfRa talked about "European countries that have given up enforcing any sense of order in large parts of their major cities." If you aren't talking about that then how is it a relevant response?
Can you explain why you see the probability of a SETI attack as so high? If you are a civilization doing this, not only does it require extremely hostile motivations, it also involves a) making everyone aware of where you are (making you a potential target), b) being able to hide extremely subtle hostile aspects in an AI that apparently looks non-hostile, and c) sending something which declares your own deep hostility to anyone who notices it.
What probability do you assign to this happening? How many conjunctions are involved in this scenario?
Yes, that would work. I think I was reacting to the phrasing more and imagined something more cartoonish, in particular one where the air conditioner is essentially floating in space.
You seem to be operating under the impression that subjective Bayesians think that Bayesian statistical tools are always the best tools to use in different practical situations? That's likely true of many subjective Bayesians, but I don't think it's true of most "Less Wrong Bayesians."
I suspect that there's a large amount of variation in what "Less Wrong Bayesians" believe. It also seems that at least some treat it more as an article of faith or tribal allegiance than anything else. See for example some of the discussion here.
What do you see as productive in asking this question?
Expanding the orbit of the Earth works under the known laws of physics but wouldn't be practically doable at all. A giant air conditioner wouldn't work for simple physics reasons.
Problems can have a mathematical aspect without being completely solvable by math.
The sourcing there is weak and questionable at best. That people assert that areas are "no-go" is pretty different than there being a genuine lack of any sense of order, and that's even before one looks at the issue of whether this is any different from some areas simply being higher in crime than others.
Still reading, quick note:
tradion
Should be tradition?
That seems to indicate that summarizing what they've said as the average age of death being 72 years is not accurate.
Not far indeed: global life expectancy at birth was 26 years in the Bronze Age, and in 2010 was 67.2. Five years ago our life expectancy at birth was more than double what it had been.
This is a little misleading because low life expectancy at birth was to a large extent a function of very high infant mortality. It is true that even if one takes infant mortality into account (for example by looking at life expectancy at three years of age), life expectancy has gone up. However, this is primarily average life expectancy. Maximum life expectancy has ba...
Good analysis! A few remarks:
In practice even for a planet with as thin an atmosphere as Earth, getting past the atmosphere is more difficult than actually reaching escape velocity. One of the most common times for a rocket to break up is near Max Q which is where maximum aerodynamic stress occurs. This is generally in the range of about 10 km to 20 km up.
In worlds too big to escape by propulsion, people may come up with the idea of the space elevator, but the extra gravity will require taking into account the structure's weight.
Getting enough mass u...
Or there are fewer civilizations than we expect, or something is wiping out civilizations once they go to space, or most species for whatever reason decide not to go to space, or we are living in an ancestor simulation which only does a detailed simulation of our solar system. (I agree that all of these are essentially wanting and your interpretation makes the most sense; these examples are listed more for completeness than anything else.)
Anyone want to take bets on whether or not this will turn out in ten years to be natural?
I don't think this conversation is being very productive so this is likely my final reply.
Just answer me a simple question.
? How do the first 1000 naturals look like, after mixing supertask described above has finished its job,
You may say that this supertask is impossible.
You may say that there is no set of all naturals.
The resulting pointwise limit exists, and it gives each positive integer a probability of zero. This is fine because the pointwise limit of a sequence of distributions on a countable set is not necessarily itself a distribution. Please take a basic real analysis course.
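A simpler sketch of the same phenomenon, using the uniform distribution on {1, ..., n} rather than the supertask itself:

```python
# Each P_n below is a genuine probability distribution (it sums to 1), but the
# pointwise limit as n -> infinity assigns 0 to every fixed integer k, and a
# function that is identically 0 is not a probability distribution.
def P(n, k):
    return 1.0 / n if 1 <= k <= n else 0.0

for n in (10, 1000, 10**6):
    mass_at_7 = P(n, 7)
    total = sum(P(n, k) for k in range(1, n + 1))
    print(n, mass_at_7, total)  # mass at any fixed point shrinks toward 0; total stays 1
```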
I don't give a damn about infinity. If it is doable, why not? But is it? That's the only question.
I'm not sure what you mean by this, especially given your earlier focus on whether infinity exists and whether using it in physics is akin to religion. I'm also not sure what "it" is in your sentence, but it seems to be the supertask in question. I'm not sure in that context what you mean by "doable."
...Then, a supertask mixes the infinite set of naturals and we are witnessing "the irresistible force acting on an unmovable object"
This is not at all an attempt to banish infinity in any general sense.
Of course it is. Nothing infinite has been spotted so far.
I'm not sure how your sentence is a response to my sentence.
This is rhetoric without content.
Is it? Is this same "rhetoric" against aliens also without a content? If I say that people want aliens, because they have lost angels, is this really without a content?
Not only that there is no infinite God, even infinite sets are probably just a miracle.
Generally, yes, the content level is pretty low. It essentiall...
Ah, yes, I think that makes sense. And obviously a proof of say Friendliness in ZFC is a lot better than no proof at all.
I'm not sure what you mean by this, and insofar as I can understand it, it doesn't seem to be true. Physicists use the real numbers, which form an infinite set, all the time.
The problem there is that certain specific models of physics end up giving infinite values for measurable quantities - this is a known problem and has been an area of active research since early work with renormalization in the 1930s. This is not at all an attempt to banish infinity in any general sense.
...Now, when there in no God, the Infinity is
I'm not sure what your point is here. Yes, experts sometimes have a consensus that turns out to be wrong. If one is lucky, one can even turn out to be right when the experts are wrong if one takes sufficiently many contrarian positions (although the claim that the existence of many millions of civilizations in our galaxy was a universal belief among both biologists and astrobiologists is definitely questionable), but in this case the experts have really thought about these ideas a lot, and haven't gotten anywhere.
If you prefer an example other than Wildberger, when Edward Nelson ...
I'm not sure that's strong evidence for the thesis in question. If ZFC had a low-lying inconsistency, ZFC+an inaccessible cardinal would still prove ZFC consistent, but it would be itself an inconsistent system that was effectively lying to you. Same remarks apply to any large cardinal axiom.
What do you mean?
Physics is only good, when you expel all the infinities out of it.
I'm not sure what you mean by this, and insofar as I can understand it, it doesn't seem to be true. Physicists use the real numbers, which form an infinite set, all the time. They use integration and differentiation, which involve limits. So what do you mean?
I'm not sure why you think that. This may depend strongly on what you mean by an infinitary method. Is induction infinitary? Is transfinite induction infinitary?
If the hypothetical external world in question diverges from our own world by a lot then the ancestor simulation argument loses all force.
Wildberger's complaints are well known and frankly not taken very seriously. The most positive thing one can say about them is that some of the ideas in his rational trigonometry do have some interesting math behind them, but that's it. Pretty much no mathematician who has listened to what he has to say has taken any of it seriously.
What you are doing in many ways amounts to the 18th and early 19th century arguments over whether 1-1+1-1+1-1... converged and if so to what. First formalize what you mean, and then get an answer. And a rough intuition of what should formally work that leads to a problem is not at all the same thing as an inconsistency in either PA or ZFC.
Phrasing it as a "super-task" relies on intuitions that are not easily formalized in either PA or ZFC. Think instead in terms of a limit, where you take your nth distribution and let n go to infinity. This avoids the intuitive issues. Then just ask what you mean by the limit. You are taking what amounts to a pointwise limit. At that point, what matters is that a pointwise limit of probability distributions is not necessarily itself a probability distribution.
If you prefer a different example that doesn't obfuscate as much what is going on we can...
The limit of your distributions is not a distribution so there's no problem.
If there's any sort of inconsistency in ZF or PA or any other major system currently in use, it will be much harder to find than this. At a meta level, if there were this basic a problem, don't you think it would have already been noticed?
At least two major classes of existential risk, AI and physics experiments, are areas where a lot of math can come into play. In the case of AI, this is understanding whether hard take-offs are possible or likely and whether an AI can be provably Friendly. In the case of physics experiments, the issue is the analysis showing that the experiments are safe.
In both these cases, little attention is paid to the precise axiomatic system being used for the results. Should this be concerning? If for example some sort of result about Friendliness is proven rigo...
Yes, but there's less reason for that. A big part of the problem with neutrinos is that since only a small fraction are absorbed, it becomes much harder to get good data on what is going on. For example, the typical neutrino pulse from a supernova is estimated to last 5 seconds to 30 seconds, while the Earth is under a tenth of a light-second across. Gamma rays don't have quite as much of this problem and their direction can be estimated somewhat better.
On the other hand, the more recent work with neutrinos has been getting better and better at getting angle data which lets us get the same directional data to some extent.
You do know that both sets of ideas predate HPMOR, right?
Slightly crazy idea I've been bouncing around for a while: put giant IceCube style neutrino detectors on Mars and Europa. Europa would work really well because of all the water ice. This would allow one to get time delay data from neutrino bursts during a supernova to get very fast directional information as well as some related data.
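A rough sketch of the timing idea, with my own assumed numbers for the baseline and timing precision, just to show the scale involved:

```python
import math

C = 3.0e8            # speed of light, m/s
AU = 1.496e11        # astronomical unit, m
baseline = 1.5 * AU  # assumed Earth-Mars separation (it actually varies ~0.5-2.5 AU)

# A burst arriving from angle theta relative to the Earth-Mars baseline reaches
# the two detectors with a time difference dt = (baseline / c) * cos(theta),
# so a measured dt constrains the direction of the source.
def delay_seconds(theta_deg):
    return (baseline / C) * math.cos(math.radians(theta_deg))

print(delay_seconds(0.0))    # ~750 s maximum possible delay
print(delay_seconds(60.0))   # ~374 s

# With burst-timing uncertainty sigma_t at each detector, the angular
# resolution near theta = 90 degrees is roughly sigma_t / (baseline / c) radians.
sigma_t = 1.0  # assume the burst onset can be timed to ~1 second
print(math.degrees(sigma_t / (baseline / C)), "degrees (very rough)")
```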
That's a rule I'd strongly support other than in cases of absolutely unambiguous spamming or clear sockpuppets of banned individuals.
I'm upvoting this because the community could use more content challenging commonly held views, and some people do need to treat Eliezer as more fallible than they do.
That said, I find most of your examples unpersuasive. With the exception of some aspects of p-zombies, where you do show that Eliezer has misinterpreted what people are saying when they make this sort of argument, most of your arguments are not at all compelling arguments that Eliezer is wrong, although they do point to his general overconfidence (which seems to be a serious problem).
For what it is worth...