All of ocr-fork's Comments + Replies

CODT (Cop Out Decision Theory): In which you precommit to every beneficial precommitment.

I thought that debate was about free will.

This isn't obvious. In particular, note that your "obvious example" violates the basic assumption all these attempts at a decision theory are using, that the payoff depends only on your choice and not how you arrived at it.

Omega simulates you in a variety of scenarios. If you consistently make rational decisions he tortures you.

1Sniffnoy
My reply to this was going to be essentially the same as my comment on bentarm's thread, so I'll just point you there.

Since that summer in Colorado, Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens have all produced bestselling and highly controversial books—and I have read them all.

The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God. Our best moral guidance comes from what God is revealing today through evidence, not from tradition or authority or old mythic stories.

The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don... (read more)

How much more information is in the ontogenic environment, then?

Off the top of my head:

  1. The laws of physics

  2. 9 months in the womb

  3. The rest of your organs. (maybe)

  4. Your entire childhood...

These are barriers to developing Kurzweil's simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first, but not by involving actual proteins. That's idiotic.

One of the facts about 'hard' AI, of the kind required for profitable NLP, is that the coders who developed it don't even completely understand how it works. If they did, it would just be a regular program.

TLDR: this definitely is emergent behavior - it is information passing between black-box algorithms with motivations that even the original programmers cannot make definitive statements about.

Yuck.

The first two questions aren't about decisions.

"I live in a perfectly simulated matrix"?

This question is meaningless. It's equivalent to "There is a God, but he's unreachable and he never does anything."

2Blueberry
No, it's not meaningless, because if it's true, the matrix's implementers could decide to intervene (or for that matter create an afterlife simulation for all of us). If it's true, there's also the possibility of the simulation ending prematurely.

it might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn't sure so I used BB(2^(n+1)) in my example instead.

You can find it by emulating the Busy Beaver.

Oh.

I feel stupid now.

EDIT: Wouldn't it also break even by predicting the next Busy Beaver number? "All 1's except for BB(1...2^n+1)" is also only slightly less likely. EDIT: I feel more stupid.

0Wei Dai
The next number in the sequence is BB(2^(n+1)), not BB(2^n+1). ETA: In case more explanation is needed, it takes O(2^n) more bits to computably describe BB(2^(n+1)), even if you already have BB(2^n). (It might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn't sure so I used BB(2^(n+1)) in my example instead.) Since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n, AIXI actually will not bet on 0 when BB(2^(n+1)) comes around, and all those 0s that it does bet on are simply "wasted".

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

Suicide rates start at .5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.

5gwern
Interesting. From page 30, suicide rates increase monotonically in the 5 age groups up to and including 45-54 (peaking at 17.2 per 100,000), but then drops by 3 to 14.5 (age 55-64) and drops another 2 for the 65-74 age bracket (12.6), and then rises again after 75 (15.9). So, I was right that the rates increase again in old age, but wrong about when the first spike was.

What about the agent using Solomonoff's distribution? After seeing BB(1),...,BB(2^n), the algorithmic complexity of BB(1),...,BB(2^n) is sunk, so to speak. It will predict a higher expected payoff for playing 0 in any round i where the conditional complexity K(i | BB(1),...,BB(2^n)) < 100. This includes for example 2BB(2^n), 2BB(2^n)+1, BB(2^n)^2 * 3 + 4, BB(2^n)^^^3, etc. It will bet on 0 in these rounds (erroneously, since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n), and therefore lose relative to a human.

I don't understand how the bolde... (read more)

1Wei Dai
Yes, that's the most probable explanation according to the Solomonoff prior, but AIXI doesn't just use the most probable explanation to make decisions, it uses all computable explanations that haven't been contradicted by its input yet. For example, "All 1's except for the Busy Beaver numbers up to 2^n and 2BB(2^n)" is only slightly less likely than "All 1's except for the Busy Beaver numbers up to 2^n" and is compatible with its input so far. The conditional probability of that explanation, given what it has seen, is high enough that it would bet on 0 at round 2BB(2^n), whereas the human wouldn't.
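
A toy way to see the "only slightly less likely" point, assuming the usual 2^-length weighting of a universal prior; the bit counts below are made-up placeholders, not anything derived from the actual hypotheses:

```python
# Under a universal prior, a hypothesis encoded by a program of length L bits
# gets weight on the order of 2**(-L). Tacking "...and also 2*BB(2^n)" onto
# the shorter hypothesis costs only a constant number of extra bits, so its
# weight drops by a constant factor -- not by the 2**(-O(2^n)) factor it
# would take to specify BB(2^(n+1)) outright.

def weight(length_bits):
    return 2.0 ** (-length_bits)

L = 500       # placeholder length of "all 1s except BB numbers up to 2^n"
extra = 15    # placeholder cost of appending "and 2*BB(2^n)"

print(weight(L + extra) / weight(L))  # 2**-15 ~ 3e-5: a fixed factor, independent of n
```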

Right, and...

A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is computable, even though the infinite sequence Σ is not computable (see computable function examples).

So why can't the universal prior use it?
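
A minimal sketch of the quoted fact, hard-coding the known values Σ(1)=1, Σ(2)=4, Σ(3)=6, Σ(4)=13: any finite prefix of Σ is computable by a plain lookup table, with no cleverness required.

```python
# Any finite prefix of the Busy Beaver sequence is trivially computable:
# a lookup table is a perfectly good program for it. Uncomputability only
# bites when you ask for the whole infinite sequence (or for Σ(n) uniformly in n).

KNOWN_SIGMA = {1: 1, 2: 4, 3: 6, 4: 13}  # established values of Σ

def sigma_prefix(n):
    """Return [Σ(1), ..., Σ(n)] for any n covered by the table."""
    return [KNOWN_SIGMA[i] for i in range(1, n + 1)]

print(sigma_prefix(4))  # [1, 4, 6, 13]
```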

5Wei Dai
Sorry, I should have used BB(2^100) as the example. The universal prior assigns the number BB(2^100) a very small weight, because the only way to represent it computably is by giving a 2^100 state Turing machine. A human would assign it a much larger weight, referencing it by its short symbolic representation. Until I write up a better argument, you might want to (assuming you haven't already) read this post where I gave a decision problem that a human does (infinitely) better than AIXI.

Umm... what about my argument that a human can represent their predictions symbolically like "P(next bit is 1)=i-th bit of BB(100)" instead of using numerals, and thereby do better than a Solomonoff predictor because the Solomonoff predictor can't incorporate this?

BB(100) is computable. Am I missing something?

1Wei Dai
Maybe... by BB I mean the Busy Beaver function Σ as defined in this Wikipedia entry.
ocr-fork

But PLEASE be very careful using the might-increase-the-chance-of-UFAI excuse, because without fairly bulletproof reasoning it can be used to justify almost anything.

I've read the post. That excuse is actually relevant.

To cite one field that I'm especially familiar with, consider probability and decision theory, where we went from having no concept of probability, to studies involving gambles and expected value, to subjective probability, Bayesian updating, expected utility maximization, and the Turing-machine-based universal prior, to the recent realizations that EU maximization with Bayesian updating and the universal prior are both likely to be wrong or incomplete.

I don't see how bayesian utility maximizers lack the "philosophical abilities" to discover t... (read more)

astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process which resembles the random, unprovoked nature of anything that's random.

But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don't know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults.

Does anyone else feel like this is just a weird remake of cached thoughts?

WrongBot

Cached thoughts are default answers to questions. Unquestioned defaults are default answers to questions that you don't know exist.

They remember being themselves, so they'd say "yes."

I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious I meant "why do you think a vitrified brain is conscious if a book isn't."

1cousin_it
You don't know that until you've actually done the experiment. Some parts of memory may be "passive" - encoded in the configuration of neurons and synapses - while other parts may be "active", dynamically encoded in the electrical stuff and requiring constant maintenance by a living brain. To take an example we understand well, turning a computer off and on again loses all sorts of information, including its "thread of consciousness". EDIT: I just looked it up and it seems this comment has a high chance of being wrong. People have been known to come to life after having a (mostly) flat EEG for hours, e.g. during deep anaesthesia. Sorry.

Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.

5Kaj_Sotala
If I had surviving friends, then optimally the process would also extract their memories for the purpose. If we have the technology to reconstruct people like that, then surely we also have the technology to read memories off someone's brain, though it might require their permission which might not be available. If they gave their permission, though, they wouldn't be able to tell a difference since all their memories of me were used in building that copy.

There's a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive with many years worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction.

That's orders of magnitude less than the information content of your brain. The reconstructed version would be like an identical twin leading his own life who coincidentally reenacts your IRC chats and reads your books.

0Kaj_Sotala
Sure. What about it?
-3cousin_it
No idea. We haven't yet revived any vitrified brains and asked them whether they experience personal continuity with their pre-vitrification selves. The answer could turn out either way.

Depending on how you present it you can potentially get people to keep these kinds of writings even if they don't believe it will extend their lives in any meaningful way,

Writing isn't feasible, but lifelogging might be (see gwern's thread). The government could hand out wearable cameras that double as driving licenses, credit cards, etc. If anyone objects, all they have to do is rip out the right wires.

1DSimon
I object a great deal! Once we're all carrying around wearable cameras, the political possibility of making it illegal to rip out the wires would seem much less extreme than a proposal today to introduce both the cameras and the anti-tampering laws. Introducing these cameras would be greasing a slippery slope. I'd rather keep the future probability for total Orwellian surveillance low, thanks.

In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky.

I'm convinced it would never converge, and even if it did, I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.

I'm really confused now. Also I haven't read Permutation City...

Just because one deterministic world will always end up simulating another does not mean there is only one possible world that would end up simulating that world.

Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.

558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won't mirror the new 559's actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran... (read more)

1cousin_it
Yeah, but would a binary tree of simulated worlds "converge" as we go deeper and deeper? In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky. And now, a binary tree? Who knows what it'll do?

Until they turned it on, they thought it was the only layer.

2Blueberry
Ok, I think I see what you mean now. My understanding of the story is as follows: The story is about one particular stack of worlds which has the property that each world contains an infinitely powerful computer simulating the next world in the stack. All the worlds in the stack are deterministic and all the simulations have the same starting conditions and rules of physics. Therefore, all the worlds in the stack are identical (until someone interferes) and all beings in any of the worlds have exact counterparts in all the other worlds. Now, there may be other worlds "on top" of the stack that are different, and the worlds may contain other simulations as well, but the story is just about this infinite tower.

Call the top world of this infinite tower World 0. Let World i+1 be the world that is simulated by World i in this tower. Suppose that in each world, the simulation is turned on at Jan 1, 2020 in that world's calendar.

I think your point is that in 2019 in World 1 (which is simulated at around Jan 2, 2020 in World 0) no one in World 1 realizes they're in a simulation. While this is true, it doesn't matter. It doesn't matter because the people in World 1 in 2019 (their time) are exactly identical to the people in World 0 in 2019 (World 0 time). Until the window is created (say Jan 3, 2020), they're all the same person. After the window is created, everyone is split into two: the one in World 0, and all the others, who remain exactly identical until further interference occurs.

Interference that distinguishes the worlds needs to propagate from World 0, since it's the only world that's different at the beginning. For instance, suppose that the programmers in World 0 send a note to World 1 reading: "Hi, we're world 0, you're world 1." World 1 will be able to verify this since none of the other worlds will receive this note. World 1 is now different from the others as well and may continue propagating changes in this way. Now suppose that on Jan 3, 2020, the p... (read more)
0khafra
I interpreted the story Blueberry's way: the inverse of Permutation City, where many histories converge into a single future; here, one history diverges into many futures.

That doesn't say anything about the top layer.

1Blueberry
I don't understand what you mean. Until they turn the simulation on, their world is the only layer. Once they turn it on, they make lots of copies of their layer.

Then it would be someone else's reality, not theirs. They can't be inside two simulations at once.

1cousin_it
But what if two groups had built such computers independently? The story is making less and less sense to me.

Then they miss their chance to control reality. They could make a shield out of black cubes.

0Baughn
They could program in an indestructible control console, with appropriate safeguards, then run the program to its conclusion. Much safer. That's probably weeks of work, though, and they've only had one day so far. Hum, I do hope they have a good UPS.
0Houshalter
Why would they make a shield out of black cubes of all things? But ya, I do see your point. Then again, once you have an infinitely powerful computer, you can do anything. Plus, even if they ran the simulation to its end, they could always restart the simulation and advance it to the present time again, hence regaining the ability to control reality.

Why do you think deterministic worlds can only spawn simulations of themselves?

0Blueberry
A deterministic world could certainly simulate a different deterministic world, but only by changing the initial conditions (Big Bang) or transition rules (laws of physics). In the story, they kept things exactly the same.

Of course. It's fiction.

3JoshuaZ
What I mean is that this isn't a type of fiction that could plausibly occur in our universe. In contrast, for example, there's nothing in the central premises of, say, Blindsight that, as far as we know, would prevent the story from taking place. The central premise here is one that doesn't work in our universe.

With 1), you're the non-cooperator and the punisher is society in general. With 2), you play both roles at different times.

First, the notion that a quantum computer would have infinite processing capability is incorrect... Second, if our understanding of quantum mechanics is correct

It isn't. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...

4JoshuaZ
Ok, but in that case, that world in question almost certainly can't be our world. We'd have to have deep misunderstandings about the rules for this universe. Such a universe might be self-consistent but it isn't our universe.

Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos.

I winced.

How is what is proposed above different from imprisoning these groups?

It's not different. Vladimir is arguing that if you agree with the article, you should also support preemptive imprisonment.

7JenniferRM
I don't understand why this is downvoted to (as of my writing) -2. This actually seemed like a pithy response that raised a fascinating twist on the general model. It was a response to Sly saying: The interesting part is that a given state can have different "choice and willpower" requirements for getting in versus getting out. This gets you back into the situation described by Holmes of punishing people in order to discourage other people from following their initial behavior, even (in the case of STDs) in the face of the inability of the punished person to "regret their way to a cure" once they've already made the mistake because they actually are infected with an "external agent" that meets basically all the criteria for disease that Yvain pointed out in the OP.

How much of a statistical correlation would you require?

Enough to justify imprisoning everyone. It depends on how long they'd stay in jail, the magnitude of the crime, etc.
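
A crude way to make "it depends on how long they'd stay in jail, the magnitude of the crime, etc." concrete, purely as a sketch; every number below is a made-up placeholder, not a claim about any real group or crime rate:

```python
# Toy expected-harm comparison for preemptively imprisoning members of a
# statistically risky group. All inputs are hypothetical placeholders.

def net_harm_prevented(p_would_offend, harm_per_crime, harm_per_year_jailed, years_jailed):
    """Expected harm prevented minus harm inflicted, per person imprisoned."""
    prevented = p_would_offend * harm_per_crime
    inflicted = harm_per_year_jailed * years_jailed
    return prevented - inflicted

# High correlation + minor crime + long sentence: net-negative.
print(net_harm_prevented(0.9, harm_per_crime=10, harm_per_year_jailed=5, years_jailed=10))   # -41.0
# Same correlation + severe crime + short detention: net-positive.
print(net_harm_prevented(0.9, harm_per_crime=1000, harm_per_year_jailed=5, years_jailed=1))  # 895.0
# So the correlation threshold alone can't settle the question.
```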

I really don't care what Ben Franklin thinks.

2babblefrog
Sorry, not arguing from authority, the quote is a declaration of my values (or maybe just a heuristic :-), I just wanted to attribute it accurately. My problem may just be lack of imagination. How could this work in reality? If we are talking about groups that are statistically more likely to commit crimes, we already have those. How is what is proposed above different from imprisoning these groups? Is it just a matter of doing a cost-benefit analysis?

The search engines have their own incentives to avoid punishing innocent sites.

If you're trying to outpaperclip SEO-paperclippers, you'll need to do a lot better than that.

I doubt LessWrong has any competitors serious enough for SEO.

Yudkowsky.net comes up as #5 on the "rationality" search, and being surrounded by uglier sites it should stand out to anyone who looks past Wikipedia. But LessWrong is only mentioned twice, and not on the twelve virtues page that new users will see first. I think you could snag a lot of people with a third mention on that page, or maybe even a bright green logo-button.

4mfb
Update 2012: First is Wikipedia, second Harry Potter, third a dictionary entry, and fourth is yudkowsky.net. Fifth is some philosophy site, and sixth is an article "what_do_we_mean_by_rationality" from here. After that, a lot of other stuff comes. Not so bad, but it could be better.
Leonhart

I am an SEO. (Sometimes even we work for the Light, by the way.) Less Wrong currently isn't even trying to rank for "rationality". It's not even in the frigging home page title!

Who is the best point of contact for doing an SEO audit on Less Wrong? Who, for example, would know if the site was validated on Google Webmaster Console, and have the login details? Who is best positioned to change titles, metadata, implement redirects and so on? Would EY need to approve changes to promotional language?

0taw
SEO paperclipping is the result of two forces: websites trying to get better ranks, and search engines building up their defenses. We might have little competition for rationality, but getting through search engine filters is not as easy as it used to be a decade ago.

First, infer the existence of people, emotions, stock traders, the press, factories, production costs, and companies. When that's done your theory should follow trivially from the source code of your compression algorithm. Just make sure your computer doesn't decay into dust before it gets that far.

Sell patents.

(or more specifically, patent your invention and wait until someone else wants to use it. If this seems unethical, remember you will usually be blocking big evil corporations, not other inventors, and that the big evil corporations would always do the same thing to you if they could.)

If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb's problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.

Please see the edit I just added to the post; it seems like my wording wasn't precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb's problem).

I ... (read more)

0babblefrog
How much of a statistical correlation would you require? Anything over 50%? 90%? 99%? I'd still have a problem with this. "It is better [one hundred] guilty Persons should escape than that one innocent Person should suffer." - Ben Franklin
ocr-fork

I don't think you can work towards not being offended. {according to my very narrow definition, which I now retract} It's just a gut reaction.

4ata
Get a group of friends where you constantly make (facetious) offensive remarks at one another's expense, both about individual qualities and group identifications. Eventually, having been called a filthy whatever-ethnicity or a loathsome whatever-sexual-orientation (including loathsome heterosexual, hey why not?) or a Christ-killer or a baby-eating atheist so many times, the emotional impact of such statements will be dulled, which will improve your ability to understand your actual objections and react usefully when you hear people seriously say such things. Worked for me!
3NancyLebovitz
I think it's possible to work towards not being offended by such things as remembering the times when one was accidentally offensive, by checking on whether one's standards are reasonable, and by evaluating actual risks of what the offensive thing might indicate. That doesn't mean one can or should run one's offendedness down to zero, but (depending on where you're starting), it's possible to tone it down.
6mattnewport
Taking offense is a tactic in politics and social interaction however as in 'the politics of offense'. People will tend to use the tactic more when it appears to be successful.

You can choose whether to nurse your offense or not nurse it, and you can choose whether to suggest to others that they should be offended. Reactions that are involuntary in the moment itself are sometimes voluntary in the longer run.

Conversations with foreigners?

Let's all take some deep breaths.

I sense this thread has crossed a threshold, beyond which questions and criticisms will multiply faster than they can be answered.

edit: here's an example:

If you're risk-neutral, you still can't just do whatever has the highest chance of being right; you must also consider the cost of being wrong. You will probably win a bet that says a fair six-sided die will come up on a number greater than 2. But you shouldn't buy this bet for a dollar if the payoff is only $1.10, even though that purchase can be summarized as "you will probably gain ten cents". That bet is better than a similarly-priced, similarly-paid bet on the opposite outcome; but it's not good.

You have a 1/3 ... (read more)
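
A minimal sketch of the arithmetic behind the die example above (the $1 stake and $1.10 payout are the figures from the comment):

```python
# The bet: a fair six-sided die, you win if it shows a number greater than 2.
# The bet costs $1 and pays back $1.10 on a win.

p_win = 4 / 6                  # die shows 3, 4, 5, or 6
stake, payout = 1.00, 1.10

ev = p_win * (payout - stake) + (1 - p_win) * (-stake)
ev_opposite = (1 - p_win) * (payout - stake) + p_win * (-stake)

print(f"P(win) = {p_win:.2f}, EV = {ev:+.3f} dollars per bet")             # 0.67, -0.267
print(f"Opposite bet: P(win) = {1 - p_win:.2f}, EV = {ev_opposite:+.3f}")  # 0.33, -0.633
# You "probably gain ten cents", but on average you lose about 27 cents;
# the opposite bet is even worse, at about -63 cents.
```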

I don't get it. Are you saying a smart, dangerous AI can't be simple and predictable? Differential equations are made of algebra, so did she mean the task is impossible? You were replying to my post, right?

2John_Maxwell
Probably not simple. The point is that for it to be predictable, you'd need a very high level of knowledge about it. More than the amount necessary to build it.

An AI that acts like people? I wouldn't buy that. It sounds creepy. Like Clippy with a soul.

0whpearson
I didn't say acts like people. I said had one aspect of humans (and dogs or other trainable animals for that matter). We don't need to add all the other aspects to make it act like a human.

What else is there to see besides humans?

6Clippy
Paperclips. Also, paperclip makers. And paperclip maker makers. And paperclip maker maker makers. And stuff for maintaining paperclip maker maker makers.