Absolute Truth Revisited
Modern rationalists like those here don't seem to like questions such as "Is truth beauty, and is beauty truth?". However, they may have lost inferential distance to the people who posed those questions, and they may start asking questions like that again once superintelligence is created.
Simply put, the superintelligence may discover that there are multiple Universes: simulated, basement-level, or at some intermediate stage (e.g. if our Universe is not being watched over by a pre-existing superintelligence, but grew from an ancient co...
https://boards.4channel.org/x/thread/36449024/ting-ting-ting-ahem-i-have-a-story-to-tell
Thanks for your thoughtful answer.
How much does it concern you that, previously in human history, "every book"/authority appears to have been systematically wrong about certain things for some reason? How many of these authors have directly carried out experiments in physics, compared to how many just copied what someone else / a small number of really clever scientists like Einstein said?
I guess maybe that accounts for the 1% doubt you assigned.
OK. But if you yourself state that you "certainly know" -- certainly -- that p is fixed, then you have already accounted for that particular item of knowledge.
If you do not, in fact, "certainly know" the probability of p -- as could easily be the case if you picked up a coin in a mafia-run casino or whatever -- then your prior should be 0.5 but you should also be prepared to update that value according to Bayes' Theorem.
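For concreteness, here is a minimal Python sketch of what "start at 0.5 and be prepared to update" can look like under one standard modelling choice, a Beta prior over the coin's unknown bias; the prior parameters, function name, and toss sequence below are illustrative assumptions of mine, not anything from the original exchange.

```python
# A minimal sketch (my own illustration, not from the thread): treat the coin's
# unknown bias p as having a Beta(1, 1) prior, so the predictive probability of
# heads starts at 0.5, then update it via Bayes' Theorem (in its conjugate
# Beta-Bernoulli form) as tosses come in.

def update_beta(alpha, beta, tosses):
    """Update Beta pseudo-counts with a sequence of tosses ('H' or 'T')."""
    for t in tosses:
        if t == "H":
            alpha += 1
        else:
            beta += 1
    return alpha, beta

alpha, beta = 1, 1                                    # uniform prior over the bias p
print(alpha / (alpha + beta))                         # predictive P(heads) = 0.5 before any data

alpha, beta = update_beta(alpha, beta, "HHHHHHHT")    # a suspiciously heads-heavy run
print(alpha / (alpha + beta))                         # predictive P(heads) = 0.8 after updating
```

Run as-is, this prints 0.5 and then 0.8: the prior of 0.5 and the willingness to revise it are perfectly compatible once the 0.5 is read as a predictive probability over an uncertain bias.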
I see that you are gesturing towards also assigning a probability that the coin is a fair coin (or generally such a coin that has a p of...
>Suppose that I have a coin with probability of heads p. I certainly know that p is fixed and does not change as I toss the coin. I would like to express my degree of belief in p and then update it as I toss the coin.
It doesn't change, because as you said, you "certainly know" that p is fixed and you know the value of p.
So if you would like to express your degree of belief in p, it's just p.
>But let's say I'm a super-skeptic guy that avoids accepting any statement with certainty, and I am aware of the issue of parametrization...
Why do so many technophiles dislike the idea of world government?
I rarely see the concept of "world government", or world governance, or a world court, or any such thing, spoken of positively by anyone. That includes technophiles and futurists who are fully cognizant of and believe in the concept of a technological singularity that needs to be controlled, "aligned", made safe, etc.
Solutions to AI safety usually focus on how the AI should be coded, and it seems to me that the idea of "cancelling war / merely human economics" -- in a sense, dropping our tools whereve...
Spooky action at a distance, and the Universe as a cellular automaton
Suppose the author of a simulation wrote some code that would run a cellular automaton. Suppose further that, unlike in Conway's Game of Life, cells in this simulation could influence other cells that are not their immediate neighbours. This would be simple enough to code up, and the cellular automaton could still be Turing-complete; indeed, it could perhaps be a highly efficient computational substrate for physics.
(Suppose that this automaton, instead of consisting of squares that would turn ...
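To make the "simple enough to code up" claim concrete, here is a toy Python sketch of a one-dimensional automaton in which each cell is also influenced by a cell a fixed long-range offset away. The particular update rule (XOR of the three inputs), the offset of 17, and the grid size are arbitrary assumptions of mine, not anything specified in the post.

```python
# Toy non-local cellular automaton (illustrative only): each cell's next state
# depends on its two immediate neighbours *and* on one distant cell, giving the
# "action at a distance" flavour while remaining an ordinary program.

LONG_RANGE_OFFSET = 17   # arbitrary choice of how far the non-local influence reaches

def step(cells):
    n = len(cells)
    nxt = []
    for i in range(n):
        left    = cells[(i - 1) % n]                     # immediate neighbours...
        right   = cells[(i + 1) % n]
        distant = cells[(i + LONG_RANGE_OFFSET) % n]     # ...plus the distant cell
        nxt.append(left ^ right ^ distant)               # arbitrary binary update rule
    return nxt

# Small demo: a single live cell on a ring of 64 cells, run for 10 steps.
cells = [0] * 64
cells[32] = 1
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```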
"""The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?
I name emergence or emergent phenomena—usually defined as the study of systems whose high-level behaviors arise or “emerge” from the interaction of many low-level elements. (Wikipedia: “The way complex systems and patterns arise out of a multiplicity of relatively simple interactions.”)
Taken literally, that description fits every phenomenon in our uni...
[C]riticism fails because the being does not have omniscient level ability to make logical inferences and resolve confusions
To develop this point: if logical inference is the "Ethereum" to the "Bitcoin" of mere omniscience about patterns of information (or, to use a more frivolous metaphor, David Bowie's "The Next Day" in comparison to "Heroes"), then I think this was a concept that was missing from the OP's headline argument.
You are trying to apply realistic constraints to a hypothetical situation that is not intended to be realistic
Your thought experiment, as you want it to be interpreted, is too unrealistic to imply a new and surprising critique of Bayesian rationality in our world. However, the title of your post implies (at least to me) that it does form such a critique.
The gamesmaster has no desire to engage with any of your questions or your attempts to avoid directly naming a number. He simply tells you to name a number.
If we interpret the thought exp...
I would like to extract the meaning of your thought experiment, but it's difficult because the concepts therein are problematic, or at least I don't think they have quite the effect you imagine.
We will define the number choosing game as follows. You name any single finite number x. You then gain x utility and the game ends. You can only name a finite number; naming infinity is not allowed.
If I were asked (by whom?) to play this game, in the first place I would only be able to attach some probability less than 1 to the idea that the master of the g...
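One illustrative way to make that point concrete (my own framing and numbers, not anything from the exchange): if naming x only yields x utility with some credence q(x) < 1 that the gamesmaster actually delivers, and that credence falls off with the size of the claim, then the expected value of naming ever-larger numbers can stay bounded. The decay law below, q(x) = K/(K + x), is an arbitrary assumption chosen purely for illustration.

```python
# Illustrative only: expected utility of naming x when you give credence
# q(x) = K / (K + x) to the payout actually materialising, where K is a
# made-up "credibility scale". Under this assumption the expected utility is
# K * x / (K + x), which approaches K but never exceeds it.

K = 1_000_000  # hypothetical credibility scale

def expected_utility(x, k=K):
    q = k / (k + x)        # credence that a payout of x utility is real
    return q * x

for x in (10, 10**6, 10**12, 10**18):
    print(f"name {x:>20}: expected utility = {expected_utility(x):,.2f}")
```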
Absolute nonsense. (I used a different word that's too impolite to post here.)
Other commenters have already explained why; I just wanted to share an authentic reaction.
I hope you are the blackmailer when I get blackmailed in a decision-theoretic situation, and I'll take you to the cleaners!