I'm upvoting this because the community could use more content questioning commonly held views, and some people do need to treat Eliezer as more fallible than they currently do.

That said, I find most of your examples unpersuasive. With the exception of some aspects of p-zombies, where you do show that Eliezer has misinterpreted what people are saying when they make that sort of argument, most of your arguments are not at all compelling evidence that Eliezer is wrong, although they do point to his general overconfidence (which seems to be a serious problem).

For what it is worth, one of my very first comments [was objecting to Eliezer's use of phlogiston as an example of a hypothesis that did not generate predictions](https://www.lesswrong.com/posts/RgkqLqkg8vLhsYpfh/fake-causality?commentId=4Jch5m8wNg8pHrAAF).

What does ELK stand for here?

This is probably the best argument I have seen yet for being concerned about what things like GPT are going to be able to do. Very eye-opening.

> 66.42512077294685%

This should not be reported this way. It should be reported as something like 66%. The other digits are not meaningful.
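To illustrate the point in Python (the counts here are made up; I'm just assuming the figure came from some ratio of responses, which is the usual source of this kind of false precision):

```python
# Hypothetical counts, assumed purely for illustration.
yes, total = 275, 414
p = yes / total  # 0.66425120772946...

# Reporting every digit the division produces implies far more
# precision than a sample of a few hundred can support.
print(f"{p:.14%}")  # prints something like 66.42512077294686%
print(f"{p:.0%}")   # prints 66%, the only digits that are meaningful
```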

I don't know of any broader, larger trends. It is worth noting here that the Rabbis of the Talmud themselves thought that the prior texts (especially the Torah itself) were infallible, so it seems that part of what might be happening is that over time, more and more gets put into the very-holy-text category.

Also, it seems important to distinguish here between being unquestionably correct and being literal. In a variety of religions this becomes an important distinction, and in practice literalism is often sacrificed to preserve the correctness of a claim past a certain point. Note also that in many religious traditions, the most literal-minded branches try to argue that what they are doing is not literalism but something more sophisticated. For example, among conservative Protestants it isn't uncommon to claim that they are not reading texts literally but rather using the "historical-grammatical method."

MWI doesn't say anything about other constants: the other parts of our wavefunction should have the same constants. However, other multiverse hypotheses do suggest that physical constants could be different.

That seems like an accurate analysis.

I'm actually more concerned about an error in logic. If one estimates a probability of, say, k that climate change will cause an extinction event in a given year, then the probability of it occurring at some point over a string of n years is not the obvious 1 - (1 - k)^n, since part of what goes into estimating k is the chance that climate change can cause such an event at all, and that uncertainty is shared across every year rather than resolving independently each year.
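A toy model makes the gap concrete (all the numbers here are assumed for illustration): suppose there is a probability q that climate change is capable of causing such an event at all, and, conditional on that, an annual probability p of the event. The single-year estimate is then k = q * p, but compounding k independently across years overstates the long-run risk, which in this model can never exceed q:

```python
# Toy model; q, p, and n are assumed values for illustration only.
q = 0.1    # probability climate change can cause an extinction event at all
p = 0.01   # annual probability of the event, given that it is possible
k = q * p  # the single-year estimate one would report
n = 200    # number of years considered

naive = 1 - (1 - k) ** n        # treats years as independent; tends to 1
mixed = q * (1 - (1 - p) ** n)  # shared uncertainty; bounded above by q

print(f"naive:   {naive:.3f}")  # ~0.181
print(f"correct: {mixed:.3f}")  # ~0.087
```

The naive compounding keeps climbing toward 1 as n grows, while the mixture can never exceed q, which is exactly the non-obvious behavior I mean.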

Mainstream discussion of existential risk is becoming more of a thing. A recent example is this article in The Atlantic, which mentions a variety of risks but focuses on nuclear war and worst-case global warming.

When people arguing with VoiceOfRa got several downvotes in a row, the conclusion drawn was that sockpuppets were at work.

There was substantially more evidence that VoiceOfRa was downvoting in a retributive fashion, including database evidence.

Slashdot had karma years before Reddit and was not nearly as successful. Granted, it didn't try to host general forum discussions, just threads on news articles, but this suggests that karma is not the whole story.
