This isn't obvious. In particular, note that your "obvious example" violates the basic assumption all these attempts at a decision theory are using, that the payoff depends only on your choice and not how you arrived at it.
Omega simulates you in a variety of scenarios. If you consistently make rational decisions he tortures you.
Since that summer in Colorado, Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens have all produced bestselling and highly controversial books—and I have read them all.
The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God. Our best moral guidance comes from what God is revealing today through evidence, not from tradition or authority or old mythic stories.
The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don...
How much more information is in the ontogenic environment, then?
Off the top of my head:
The laws of physics
9 months in the womb
The rest of your organs. (maybe)
Your entire childhood...
These are barriers to developing Kurzweil's simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first, but not by involving actual proteins. That's idiotic.
One of the facts about 'hard' AI, of the kind required for profitable NLP, is that the coders who developed it don't even completely understand how it works. If they did, it would just be a regular program.
TLDR: this definitely is emergent behavior - it is information passing between black-box algorithms with motivations that even the original programmers cannot make definitive statements about.
Yuck.
Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.
Suicide rates start at 0.5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.
What about the agent using Solomonoff's distribution? After seeing BB(1),...,BB(2^n), the algorithmic complexity of BB(1),...,BB(2^n) is sunk, so to speak. It will predict a higher expected payoff for playing 0 in any round i where the conditional complexity K(i | BB(1),...,BB(2^n)) < 100. This includes for example 2BB(2^n), 2BB(2^n)+1, BB(2^n)^2 * 3 + 4, BB(2^n)^^^3, etc. It will bet on 0 in these rounds (erroneously, since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n), and therefore lose relative to a human.
I don't understand how the bolde...
Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?
BB(100) is computable. Am I missing something?
To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.
I don't see how Bayesian utility maximizers lack the "philosophical abilities" to discover t...
But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don't know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults.
Does anyone else feel like this is just a weird remake of cached thoughts?
They remember being themselves, so they'd say "yes."
I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious, I meant "why do you think a vitrified brain is conscious if a book isn't?"
There's a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive with many years' worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction.
That's orders of magnitude less than the information content of your brain. The reconstructed version would be like an identical twin leading his own life who coincidentally reenacts your IRC chats and reads your books.
Depending on how you present it, you can potentially get people to keep these kinds of writings even if they don't believe it will extend their lives in any meaningful way.
Writing isn't feasible, but lifelogging might be (see gwern's thread). The government could hand out wearable cameras that double as driving licenses, credit cards, etc. If anyone objects, all they have to do is rip out the right wires.
In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky.
I'm convinced it would never converge, and even if it did I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.
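To make the "fixed point vs. period-N attractor" distinction concrete, here's a toy sketch (my own framing, not anything from the story): if you model the stack as repeatedly applying one update rule to a world-state, you can watch whether the iteration settles to a fixed point or falls into a longer cycle.

```python
# Toy model (an assumption for illustration): the "stack" is a single update
# rule applied over and over to a hashable world-state.
def find_attractor(update, state, max_steps=1000):
    """Iterate `update` until a previously seen state recurs; return the cycle length."""
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]  # 1 = converged to a fixed point, >1 = periodic attractor
        seen[state] = t
        state = update(state)
    return None  # no attractor found within max_steps

# Converges to a fixed point (period 1):
print(find_attractor(lambda x: max(x - 1, 0), 10))  # -> 1
# Hits a period-2 attractor instead of converging:
print(find_attractor(lambda x: (x + 1) % 2, 0))     # -> 2
```

Whether a real stack of nested simulations behaves anything like a simple iterated map is, of course, exactly what's in dispute here.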
Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.
558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won't mirror the new 559's actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran...
First, the notion that a quantum computer would have infinite processing capability is incorrect... Second, if our understanding of quantum mechanics is correct
It isn't. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...
If you're trying to outpaperclip SEO-paperclippers you'll need something a lot better than that.
I doubt LessWrong has any competitors serious enough for SEO.
Yudkowsky.net comes up as #5 on the "rationality" search, and being surrounded by uglier sites it should stand out to anyone who looks past Wikipedia. But LessWrong is only mentioned twice, and not on the twelve virtues page that new users will see first. I think you could snag a lot of people with a third mention on that page, or maybe even a bright green logo-button.
I am an SEO. (Sometimes even we work for the Light, by the way.) Less Wrong currently isn't even trying to rank for "rationality". It's not even in the frigging home page title!
Who is the best point of contact for doing an SEO audit on Less Wrong? Who, for example, would know if the site was validated on Google Webmaster Console, and have the login details? Who is best positioned to change titles and metadata, implement redirects, and so on? Would EY need to approve changes to promotional language?
First, infer the existence of people, emotions, stock traders, the press, factories, production costs, and companies. When that's done your theory should follow trivially from the source code of your compression algorithm. Just make sure your computer doesn't decay into dust before it gets that far.
Sell patents.
(or more specifically, patent your invention and wait until someone else wants to use it. If this seems unethical, remember you will usually be blocking big evil corporations, not other inventors, and that the big evil corporations would always do the same thing to you if they could.)
If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb's problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.
Please see the edit I just added to the post; it seems like my wording wasn't precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb's problem).
I ...
edit: here's an example:
If you're risk-neutral, you still can't just do whatever has the highest chance of being right; you must also consider the cost of being wrong. You will probably win a bet that says a fair six-sided die will come up on a number greater than 2. But you shouldn't buy this bet for a dollar if the payoff is only $1.10, even though that purchase can be summarized as "you will probably gain ten cents". That bet is better than a similarly-priced, similarly-paid bet on the opposite outcome; but it's not good.
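For concreteness, here's the arithmetic behind that comparison as a quick sketch (the probabilities and dollar figures are the ones from the example above; "payoff" is read as the gross amount returned on a win):

```python
# Quick sanity check of the die-bet example: pay $1, receive $1.10 if a fair
# d6 shows a number greater than 2, receive $0 otherwise.
p_win = 4 / 6          # P(roll > 2)
cost = 1.00
payoff = 1.10          # gross return on a win (a net gain of ten cents)

ev_bet = p_win * payoff - cost             # ~ -0.27: lose ~27 cents per bet on average
ev_opposite = (1 - p_win) * payoff - cost  # ~ -0.63: the opposite bet is even worse

print(f"EV of the 'probably gain ten cents' bet: ${ev_bet:+.3f}")
print(f"EV of the opposite bet:                  ${ev_opposite:+.3f}")
```

Both expected values come out negative: the first bet beats the bet on the opposite outcome, but neither is worth taking, which is the point of the comment above.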
You have a 1/3 ...
CODT (Cop Out Decision Theory) : In which you precommit to every beneficial precommitment.