Comment author: Perplexed 17 August 2010 06:42:22AM 2 points [-]

Any useful treatment of Newcomblike problems will specify explicitly or implicitly how Omega will handle (quantum) randomness.

At the risk of appearing stupid, I have to ask: exactly what is a "useful treatment of Newcomb-like problems" used for?

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

Seriously, Omega is not just counterfactual, he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast? Jaynes says not to include impossible propositions among the conditions in a conditional probability. Bad things happen if you do. Impossible things need to have zero-probability priors. Omega just has no business hanging around with honest Bayesians.

When I read that you all are searching for improved decision theories that "solve" the one-shot prisoner's dilemma and the one-shot Parfit hitchhiker, I just cringe. Surely you shouldn't change the standard, well-established, and correct decision theories. If you don't like the standard solutions, you should instead revise the problems from unrealistic one-shots to more realistic repeated games or perhaps even more realistic games with observers - observers who may play games with you in the future.

In every case I have seen so far where Eliezer has denigrated the standard game solution because it fails to win, he has been analyzing a game involving a physically and philosophically impossible fictional situation.

Let me ask the question this way: What evidence do you have that the standard solution to the one-shot PD can be improved upon without creating losses elsewhere? My impression is that you are being driven by wishful thinking and misguided intuition.

Comment author: ocr-fork 18 August 2010 06:30:55AM 1 point [-]

CODT (Cop Out Decision Theory) : In which you precommit to every beneficial precommitment.

Comment author: [deleted] 18 August 2010 06:19:32AM 0 points [-]

We're going in circles a little aren't we (my fault, I'll grant). Okay, so there are two questions:

1. Is it a rational choice to one-box? Answer: No.

2. Is it rational to have a disposition to one-box? Answer: Yes.

As mentioned earlier, I think I'm more interested in creating a decision theory that wins than one that's rational. But let's say you are interested in a decision theory that captures rationality: it still seems arbitrary to say that the rationality of the choice is more important than the rationality of the disposition. Yes, you could argue that choice is the domain of study for decision theory, but the number of decision theorists who would one-box (outside of LW) suggests that other people have a different idea of what decision theory should be.

I guess my question is this: Is the whole debate over one- or two-boxing on Newcomb's just a disagreement over which question decision theory should be studying, or are there people who use "choice" to mean the same thing you do and still think one-boxing is the rational choice?

Comment author: ocr-fork 18 August 2010 06:27:16AM 0 points [-]

I thought that debate was about free will.

Comment author: Sniffnoy 18 August 2010 12:59:44AM 2 points [-]

For any decision theory, isn't there some hypothetical where Omega can say, "I've analyzed your decision theory, and I'm giving you proposition X, such that if you act the way your decision theory believes is optimal, you will lose?" The "Omega scans your brain and tortures you if you're too rational" scenario would be an obvious example of this.

This isn't obvious. In particular, note that your "obvious example" violates the basic assumption all these attempts at a decision theory are using, that the payoff depends only on your choice and not how you arrived at it. Of course this is not necessarily a realistic assumption, but that is, IINM, the problem they're trying to solve.

Comment author: ocr-fork 18 August 2010 06:22:28AM 0 points [-]

This isn't obvious. In particular, note that your "obvious example" violates the basic assumption all these attempts at a decision theory are using, that the payoff depends only on your choice and not how you arrived at it.

Omega simulates you in a variety of scenarios. If you consistently make rational decisions he tortures you.

Comment author: nhamann 17 August 2010 09:38:52PM *  2 points [-]

I just came across this article called "Thank God for the New Atheists," written by Michael Dowd, and I can't tell if his views are just twisted or if he is very subtly trying to convert religious folks into epistemic rationalists. Sample quotes include:

Religion Is About Right Relationship with Reality, Not the Supernatural

...

Because the New Atheists put their faith, their confidence, in an evidentially formed and continuously tested view of the world, these critics of religion are well positioned to see what’s real and what’s important today. It is thus time for religious people to listen to the New Atheists—and to listen as if they were speaking with God's voice, because in my view they are!

...

...we cannot understand religion and religious differences if we don’t understand how the human mind instinctually relationalizes—that is, personifies—reality.

...

God is still speaking, and facts are God’s native tongue—not Hebrew or Greek or King James English.

Ah, yes. The only way to true religious understanding is through science and realizing our anthropomorphic biases...uh, wait. What? This guy seems to be calling for a religion grounded in science and rationality, but then he says things like:

The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God

So I'm confused. It makes me think that he's a crypto-rationalist trying to convert religious believers into rationalists. If that's true, it does seem like a really effective strategy.

Comment author: ocr-fork 18 August 2010 06:07:14AM 3 points [-]

Since that summer in Colorado, Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens have all produced bestselling and highly controversial books—and I have read them all.

The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God. Our best moral guidance comes from what God is revealing today through evidence, not from tradition or authority or old mythic stories.

The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don't even need it...

He's also written a book called "Thank God for Evolution," in which he sprays God all over science to make it more palatable to Christians.

I dedicate this book to the glory of God. Not any "God" we may think about, speak about, believe in, or deny, but the one true God we all know and experience.

If he really is trying to deconvert people, I suspect it won't work. They won't take the final step from his pleasant, featureless god to no god, because the featureless one gives them a warm glow without any intellectual conflict.

Comment author: Jonathan_Graehl 17 August 2010 06:51:40PM *  4 points [-]

An HN post mocks Kurzweil for claiming the length of the brain's "program" is mostly due to the part of the genome that affects it. This was discussed here lately. How much more information is in the ontogenic environment, then?

The top rated comment makes extravagant unsupported claims about the brain being a quantum computer. This drives home what I already knew: many highly rated HN comments are of negligible quality.

PZ Myers:

We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. We haven't even solved the sequence-to-protein-folding problem, which is an essential first step to executing Kurzweil's clueless algorithm. And we have absolutely no way to calculate in principle all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell!

(PZ Myers wrongly accuses Kurzweil of claiming that he or others will simulate a human brain, aided in large part by the sequenced genome, by 2020.)

Kurzweil's denial - thanks Furcas - answers my question this way: only a small portion of the information in the brain's initial layout is due to the epigenetic pre-birth environment (although the evidence behind this belief wasn't detailed).

Comment author: ocr-fork 18 August 2010 05:38:32AM *  3 points [-]

How much more information is in the ontogenic environment, then?

Off the top of my head:

  1. The laws of physics

  2. 9 months in the womb

  3. The rest of your organs. (maybe)

  4. Your entire childhood...

These are barriers to developing Kurzweil's simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first, but not by involving actual proteins. That's idiotic.

Comment author: NancyLebovitz 06 August 2010 03:06:07PM 2 points [-]

AI development in the real world?

As a result, a lot of programmers at HFT firms spend most of their time trying to keep the software from running away. They create elaborate safeguard systems to form a walled garden around the traders but, exactly like a human trader, the programs know that they make money by being novel, doing things that other traders haven't thought of. These gatekeeper programs are therefore under constant, hectic development as new algorithms are rolled out. The development pace necessitates that they implement only the most important safeguards, which means that certain types of algorithmic behavior can easily pass through. As has been pointed out by others, these were "quotes" not "trades", and they were far away from the inside price - therefore not something the risk management software would necessarily be looking for. -- comment from gameDevNYC

I can't evaluate whether what he's saying is plausible enough for science fiction-- it's certainly that-- or likely to be true.

Comment author: ocr-fork 14 August 2010 08:08:59PM 0 points [-]

One of the facts about 'hard' AI, as is required for profitable NLP, is that the coders who developed it don't even understand completely how it works. If they did, it would just be a regular program.

TLDR: this definitely is emergent behavior - it is information passing between black-box algorithms with motivations that even the original programmers cannot make definitive statements about.

Yuck.

Comment author: WrongBot 31 July 2010 12:50:26AM 0 points [-]

A general question about decision theory:

Is it possible to assign a non-zero prior probability to statements like "my memory has been altered", "I am suffering from delusions", and "I live in a perfectly simulated matrix"?

Apologies if this has been answered elsewhere.

Comment author: ocr-fork 31 July 2010 12:57:05AM 2 points [-]

The first two questions aren't about decisions.

"I live in a perfectly simulated matrix"?

This question is meaningless. It's equivalent to "There is a God, but he's unreachable and he never does anything."

Comment author: Wei_Dai 30 July 2010 12:40:04AM *  0 points [-]

The next number in the sequence is BB(2^(n+1)), not BB(2^n+1).

ETA: In case more explanation is needed, it takes O(2^n) more bits to computably describe BB(2^(n+1)), even if you already have BB(2^n). (It might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn't sure so I used BB(2^(n+1)) in my example instead.)

Since K(BB(2^(n+1)) | BB(2^n)) > 100 for large n, AIXI actually will not bet on 0 when BB(2^(n+1)) comes around, and all those 0s that it does bet on are simply "wasted".
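The premise can be written out explicitly as a conditional Kolmogorov-complexity statement (this is just a restatement of the claim above; 100 is an illustrative bit budget, not a derived constant):

```latex
\[
K\bigl(\mathrm{BB}(2^{n+1}) \,\big|\, \mathrm{BB}(2^{n})\bigr) > 100
\quad \text{for all sufficiently large } n .
\]
```

That is, no program of 100 or fewer bits maps BB(2^n) to BB(2^(n+1)); a hypothesis that places the next 0 at the correct round therefore pays at least 100 extra bits of description length, and the corresponding 2^-100 prior penalty is what stops AIXI from betting on 0 at that round.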

Comment author: ocr-fork 30 July 2010 01:28:34AM 0 points [-]

it might take O(2^n) more bits to describe BB(2^n+1) as well, but I wasn't sure so I used BB(2^(n+1)) in my example instead.

You can find it by emulating the Busy Beaver.

Comment author: Wei_Dai 29 July 2010 11:35:59PM *  1 point [-]

Yes, that's the most probable explanation according to the Solomonoff prior, but AIXI doesn't just use the most probable explanation to make decisions, it uses all computable explanations that haven't been contradicted by its input yet. For example, "All 1's except for the Busy Beaver numbers up to 2^n and 2BB(2^n)" is only slightly less likely than "All 1's except for the Busy Beaver numbers up to 2^n" and is compatible with its input so far. The conditional probability of that explanation, given what it has seen, is high enough that it would bet on 0 at round 2BB(2^n), whereas the human wouldn't.
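The mixture argument here can be sketched with a toy Bayesian predictor. Everything below is a made-up stand-in (two hand-picked hypotheses, arbitrary prior weights, small positions instead of actual Busy Beaver values); it only illustrates how a slightly-less-probable hypothesis that survives the observed prefix still pulls substantial probability onto the next bit:

```python
# Toy Bayesian mixture over binary-sequence hypotheses.
# A hypothesis is a full candidate sequence; prediction conditions on
# every hypothesis consistent with the observed prefix.

def predict(hypotheses, priors, observed):
    # keep only hypotheses that reproduce the observed prefix
    live = [(h, p) for h, p in zip(hypotheses, priors)
            if h[:len(observed)] == observed]
    total = sum(p for _, p in live)
    nxt = len(observed)
    # posterior probability that the next bit is 0
    return sum(p for h, p in live if h[nxt] == 0) / total

N = 12
zeros_at = {3, 7}       # stand-in for "Busy Beaver numbers up to 2^n"
extra_zero_at = 10      # stand-in for the round 2*BB(2^n)

h1 = [0 if i in zeros_at else 1 for i in range(N)]
h2 = [0 if i in zeros_at or i == extra_zero_at else 1 for i in range(N)]

# h2 has a slightly longer description, so give it a slightly lower prior
p0 = predict([h1, h2], [0.6, 0.4], observed=h1[:extra_zero_at])
print(p0)  # -> 0.4
```

Both hypotheses agree on the first 10 bits, so both survive conditioning, and the mixture assigns probability 0.4 to a 0 at the extra round. This is the sense in which AIXI "bets on 0" at 2BB(2^n) where the human predictor, committed to the single most probable explanation, wouldn't.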

Comment author: ocr-fork 30 July 2010 12:23:53AM *  0 points [-]

Oh.

I feel stupid now.

EDIT: Wouldn't it also break even by predicting the next Busy Beaver number? "All 1's except for BB(1...2^n+1)" is also only slightly less likely. EDIT: I feel more stupid.

Comment author: gwern 06 July 2010 07:13:30AM 3 points [-]

adults have simply forgotten for the sake of their sanity?

not completely silly.

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

(Also, I've seen mentions on LW of studies that people raising kids are unhappier than if they were childless, but once the kids are older, they retrospectively think they were much happier than they actually were.)

Comment author: ocr-fork 29 July 2010 11:35:30PM *  6 points [-]

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

Suicide rates start at 0.5 per 100,000 for ages 5-14 and rise to about 15 per 100,000 for seniors.
