I thought that debate was about free will.
This isn't obvious. In particular, note that your "obvious example" violates the basic assumption all these attempts at a decision theory are using, that the payoff depends only on your choice and not how you arrived at it.
Omega simulates you in a variety of scenarios. If you consistently make rational decisions he tortures you.
Since that summer in Colorado, Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens have all produced bestselling and highly controversial books—and I have read them all.
The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God. Our best moral guidance comes from what God is revealing today through evidence, not from tradition or authority or old mythic stories.
The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don...
How much more information is in the ontogenic environment, then?
Off the top of my head:
The laws of physics
9 months in the womb
The rest of your organs. (maybe)
Your entire childhood...
These are barriers to developing Kurzweil's simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first, but not by involving actual proteins. That's idiotic.
One of the facts about 'hard' AI, of the sort required for profitable NLP, is that the coders who developed it don't even completely understand how it works. If they did, it would just be a regular program.
TLDR: this definitely is emergent behavior - it is information passing between black-box algorithms with motivations that even the original programmers cannot make definitive statements about.
Yuck.
The first two questions aren't about decisions.
"I live in a perfectly simulated matrix"?
This question is meaningless. It's equivalent to "There is a God, but he's unreachable and he never does anything."
it might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn't sure so I used BB(2^(n+1)) in my example instead.
You can find it by emulating the Busy Beaver.
Oh.
I feel stupid now.
EDIT: Wouldn't it also break even by predicting the next Busy Beaver number? "All 1's except for BB(1...2^n+1)" is also only slightly less likely. EDIT: I feel more stupid.
Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.
Suicide rates start at 0.5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.
What about the agent using Solomonoff's distribution? After seeing BB(1),...,BB(2^n), the algorithmic complexity of BB(1),...,BB(2^n) is sunk, so to speak. It will predict a higher expected payoff for playing 0 in any round i where the conditional complexity K(i | BB(1),...,BB(2^n)) < 100. This includes for example 2BB(2^n), 2BB(2^n)+1, BB(2^n)^2 * 3 + 4, BB(2^n)^^^3, etc. It will bet on 0 in these rounds (erroneously, since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n), and therefore lose relative to a human.
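A loose way to see the force of that threshold (a sketch only, using the standard coding-theorem approximation and ignoring multiplicative constants; D abbreviates the observed values BB(1),...,BB(2^n)):

```latex
% Sketch only: the Solomonoff posterior weight on "the next 0 falls at round i",
% given data D = BB(1), ..., BB(2^n), is roughly 2^{-K(i|D)} by the coding theorem.
\[
  P(\text{0 at round } i \mid D) \;\approx\; 2^{-K(i \mid D)}
\]
% So every "decoy" round with K(i|D) < 100 gets weight above about 2^{-100},
% while the true location of the next 0 gets less, since K(BB(2^{n+1}) | D) > 100:
\[
  K(i \mid D) < 100 \;\Longrightarrow\;
  P(\text{0 at round } i \mid D) \gtrsim 2^{-100} > 2^{-K(BB(2^{n+1}) \mid D)}
\]
```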
I don't understand how the bolde...
Right, and...
A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is computable, even though the infinite sequence Σ is not computable (see computable function examples).
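To make the triviality concrete, here is a minimal sketch in Python (the hardcoded values are the known Σ(n) for 2-symbol, n-state machines with n ≤ 4; the point is only that a finite lookup table is itself a program):

```python
# Minimal sketch: any finite prefix of the Busy Beaver function is computable
# by a trivial lookup table, even though no single program computes Sigma(n) for all n.
# Values below are the known Sigma(n) for 2-symbol machines with n = 1..4 states.
KNOWN_SIGMA = {1: 1, 2: 4, 3: 6, 4: 13}

def sigma_prefix(n: int) -> int:
    """Return Sigma(n) for n in the hardcoded range; this 'program' trivially
    computes the finite sequence Sigma(1), ..., Sigma(4)."""
    return KNOWN_SIGMA[n]

print([sigma_prefix(n) for n in range(1, 5)])  # [1, 4, 6, 13]
```

No such table can be written down for all n at once, which is exactly where the non-computability of the full sequence lives.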
So why can't the universal prior use it?
Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?
BB(100) is computable. Am I missing something?
But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.
I've read the post. That excuse is actually relevant.
To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.
I don't see how bayesian utility maximizers lack the "philosophical abilities" to discover t...
astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process which is about as random and unprovoked as anything gets.
But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don't know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults.
Does anyone else feel like this is just a weird remake of cached thoughts?
Cached thoughts are default answers to questions. Unquestioned defaults are default answers to questions that you don't know exist.
They remember being themselves, so they'd say "yes."
I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious I meant "why do you think a vitrified brain is conscious if a book isn't."
Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.
There's a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive with many years' worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction.
That's orders of magnitude less than the information content of your brain. The reconstructed version would be like an identical twin leading his own life who coincidentally reenacts your IRC chats and reads your books.
Is a vitrified brain conscious?
Depending on how you present it, you can potentially get people to keep these kinds of writings even if they don't believe it will extend their lives in any meaningful way.
Writing isn't feasible, but lifelogging might be (see gwern's thread). The government could hand out wearable cameras that double as driving licenses, credit cards, etc. If anyone objects, all they have to do is rip out the right wires.
In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky.
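For a toy picture of what "an attractor with period N where N>1" looks like in an iterated deterministic system (a sketch only; the logistic map is just a stand-in, not a model of stacked worlds):

```python
# Toy illustration: the logistic map x -> r*x*(1-x) with r = 3.2 settles into a
# period-2 cycle rather than a single fixed point, showing that iterating a
# deterministic rule can "converge" to an attractor with period N > 1.
def logistic(x: float, r: float = 3.2) -> float:
    return r * x * (1 - x)

x = 0.5
for _ in range(1000):   # discard the transient
    x = logistic(x)

cycle = []
for _ in range(4):      # record two full periods
    x = logistic(x)
    cycle.append(round(x, 6))
print(cycle)            # alternates between ~0.513045 and ~0.799455
```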
I'm convinced it would never converge, and even if it did I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.
I'm really confused now. Also I haven't read Permutation City...
Just because one deterministic world will always end up simulating another does not mean there is only one possible world that would end up simulating that world.
Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.
558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won't mirror the new 559's actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran...
Until they turned it on, they thought it was the only layer.
That doesn't say anything about the top layer.
Then it would be someone else's reality, not theirs. They can't be inside two simulations at once.
Then they miss their chance to control reality. They could make a shield out of black cubes.
Why do you think deterministic worlds can only spawn simulations of themselves?
Of course. It's fiction.
With 1), you're the non-cooperator and the punisher is society in general. With 2), you play both roles at different times.
First, the notion that a quantum computer would have infinite processing capability is incorrect... Second, if our understanding of quantum mechanics is correct
It isn't. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...
Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos.
I winced.
How is what is proposed above different from imprisoning these groups?
It's not different. Vladimir is arguing that if you agree with the article, you should also support preemptive imprisonment.
Regret doesn't cure STDs.
How much of a statistical correlation would you require?
Enough to justify imprisoning everyone. It depends on how long they'd stay in jail, the magnitude of the crime, etc.
I really don't care what Ben Franklin thinks.
The search engines have their own incentives to avoid punishing innocent sites.
If you're trying to outpaperclip SEO-paperclippers you'll need a lot better than that.
I doubt LessWrong has any competitors serious enough for SEO.
Yudkowsky.net comes up as #5 on the "rationality" search, and being surrounded by uglier sites it should stand out to anyone who looks past Wikipedia. But LessWrong is only mentioned twice, and not on the twelve virtues page that new users will see first. I think you could snag a lot of people with a third mention on that page, or maybe even a bright green logo-button.
I am an SEO. (Sometimes even we work for the Light, by the way.) Less Wrong currently isn't even trying to rank for "rationality". It's not even in the frigging home page title!
Who is the best point of contact for doing an SEO audit on Less Wrong? Who, for example, would know if the site was validated on Google Webmaster Console, and have the login details? Who is best positioned to change titles, metadata, implement redirects and so on? Would EY need to approve changes to promotional language?
First, infer the existence of people, emotions, stock traders, the press, factories, production costs, and companies. When that's done your theory should follow trivially from the source code of your compression algorithm. Just make sure your computer doesn't decay into dust before it gets that far.
Sell patents.
(or more specifically, patent your invention and wait until someone else wants to use it. If this seems unethical, remember you will usually be blocking big evil corporations, not other inventors, and that the big evil corporations would always do the same thing to you if they could.)
If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb's problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.
Please see the edit I just added to the post; it seems like my wording wasn't precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb's problem).
I ...
I don't think you can work towards not being offended. (According to my very narrow definition, which I now retract.) It's just a gut reaction.
You can choose whether to nurse your offense or not nurse it, and you can choose whether to suggest to others that they should be offended. Reactions that are involuntary in the moment itself are sometimes voluntary in the longer run.
Conversations with foreigners?
Let's all take some deep breaths.
I sense this thread has crossed a threshold, beyond which questions and criticisms will multiply faster than they can be answered.
Which will be soon, right?
edit: here's an example:
If you're risk-neutral, you still can't just do whatever has the highest chance of being right; you must also consider the cost of being wrong. You will probably win a bet that says a fair six-sided die will come up on a number greater than 2. But you shouldn't buy this bet for a dollar if the payoff is only $1.10, even though that purchase can be summarized as "you will probably gain ten cents". That bet is better than a similarly-priced, similarly-paid bet on the opposite outcome; but it's not good.
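Spelling out the arithmetic (a small sketch; the $1 price and $1.10 payoff are the numbers from the example above):

```python
# Expected-value check for the die example: a $1 bet that pays $1.10 if a fair
# six-sided die comes up greater than 2. You win 4 times out of 6, yet the bet
# still loses money on average; the opposite bet is even worse.
price, payoff = 1.00, 1.10

p_win = 4 / 6                               # P(die > 2)
ev_bet = p_win * payoff - price             # 2/3 * 1.10 - 1.00 ≈ -0.267
ev_opposite = (1 - p_win) * payoff - price  # 1/3 * 1.10 - 1.00 ≈ -0.633

print(f"EV of 'greater than 2' bet: {ev_bet:+.3f}")
print(f"EV of the opposite bet:     {ev_opposite:+.3f}")
```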
You have a 1/3 ...
I don't get it. Are you saying a smart, dangerous AI can't be simple and predictable? Differential equations are made of algebra, so did she mean the task is impossible? You were replying to my post, right?
An AI that acts like people? I wouldn't buy that. It sounds creepy. Like Clippy with a soul.
What else is there to see besides humans?
CODT (Cop Out Decision Theory) : In which you precommit to every beneficial precommitment.