Comment author: DragonGod 18 December 2017 05:46:24AM 0 points [-]

I think this is the reason why a distinction between subjective and objective probability is needed.

In response to What is Rational?
Comment author: DragonGod 16 December 2017 10:49:57AM 0 points [-]

It's been almost four months since I wrote this thread. I've started to see the outline of an answer to my question. Over the course of the next year, I will begin documenting it.

Comment author: thetasafe 17 May 2017 02:36:39AM 0 points [-]

Sir, please tell me whether the PDF you described taking out every year and asking how much safety it would buy, concerning Sir Nick Bostrom's "Oracle AI", is the same as "Thinking inside the box: using and controlling an Oracle AI". If so, has your perspective changed over the years since your comment dated August 2008? If you were referring to a different PDF, please provide that PDF along with your perspectives. Thank you!

Comment author: DragonGod 08 November 2017 06:47:22PM 0 points [-]

I think he was talking to pdf23ds.

Comment author: Lightwave2 30 August 2008 11:24:56AM 0 points [-]

Just like you wouldn't want an AI to optimize for only some of the humans, you wouldn't want an AI to optimize for only some of the values. And, as I keep emphasizing for exactly this reason, we've got a lot of values.

What if the AI emulates some/many/all human brains in order to get a complete list of our values? It could design its own value system better than any human.

Comment author: DragonGod 08 November 2017 04:23:46PM 0 points [-]

There is no machine in the ghost.

Comment author: Ian_C. 30 August 2008 06:10:20AM -2 points [-]

Is intelligence general or not? If it is, then an entity that can do molecular engineering but is completely naive about what humans want is *impossible.*

Comment author: DragonGod 08 November 2017 04:21:20PM 0 points [-]

Completely misses the point.

Comment author: username2 27 October 2017 09:25:30AM 1 point [-]

You could also simply continue working on the review: you are clearly motivated to explore these issues deeper so why not start fleshing out the paper?

Note that I said "continue" rather than "start". The barrier is often not the ideas themselves but getting them written up in something approaching a complete paper. This is still the issue for me, and I have 50+ peer-reviewed papers in the past 20 years (although not in this field).

Comment author: DragonGod 29 October 2017 03:52:29PM 0 points [-]

I will then.

I Want to Review FDT; Are my Criticisms Legitimate?

0 DragonGod 25 October 2017 05:28AM

I'm going to write a review of functional decision theory; I'll use the two papers.
It's going to be around as long as the papers themselves, and coupled with school work, I'm not sure when I'll finish writing.
Before I start it, I want to be sure my criticisms are legitimate; is anyone willing to go over my criticisms with me?
My main points of criticism are:
Functional decision theory is actually algorithmic decision theory. It has an algorithmic view of decision theories. It relies on algorithmic equivalence and not functional equivalence.
Quicksort, merge sort, heapsort, insertion sort, selection sort, bubble sort, etc. are mutually algorithmically dissimilar, but all functionally equivalent.
If two decision algorithms are functionally equivalent, but algorithmically dissimilar, you'd want a decision theory that recognises this.
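To make that distinction concrete, here is a minimal sketch (my own illustration, not from the FDT papers): two algorithmically dissimilar sorts checked for functional equivalence by comparing their outputs on sampled inputs.

    import random

    def insertion_sort(xs):
        """O(n^2): insert each element into an already-sorted prefix."""
        out = []
        for x in xs:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out

    def merge_sort(xs):
        """O(n log n): divide and conquer, then merge."""
        if len(xs) <= 1:
            return list(xs)
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    # Algorithmically dissimilar, functionally equivalent: the same output on
    # every sampled input (and, provably, on all inputs).
    for _ in range(1000):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        assert insertion_sort(xs) == merge_sort(xs)
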
Causal dependence is a subset of algorithmic dependence which is a subset of functional dependence.
So, I specify what an actual functional decision theory would look like.
I then go on to show that even functional dependence is "impoverished".
Imagine a greedy algorithm that gets 95% of problems correct.
Let's call this greedy algorithm f'.
Let's call a correct algorithm f.
f and f' are functionally correlated, but not functionally equivalent.
FDT does not recognise this.
If f is your decision algorithm, and f' is your predictor's decision algorithm, then FDT doesn't recommend one-boxing on Newcomb's problem.
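To see why this matters, here is a back-of-the-envelope expected-value calculation with the standard Newcomb payoffs, assuming (an illustrative number, echoing the 95% figure above) that the predictor's algorithm f' agrees with the agent's f on this problem with probability 0.95:

    # Standard Newcomb payoffs; the 0.95 agreement rate is illustrative.
    AGREEMENT = 0.95        # P(f'(Newcomb) = f(Newcomb))
    BOX_B = 1_000_000       # opaque box, filled iff one-boxing was predicted
    BOX_A = 1_000           # transparent box, always contains $1,000

    # One-boxer: gets BOX_B whenever the predictor agrees (predicted one-boxing).
    ev_one_box = AGREEMENT * BOX_B + (1 - AGREEMENT) * 0
    # Two-boxer: gets only BOX_A when the predictor agrees (predicted two-boxing),
    # and both boxes in the rare case where the prediction disagrees.
    ev_two_box = AGREEMENT * BOX_A + (1 - AGREEMENT) * (BOX_B + BOX_A)

    print(f"EV(one-box) = {ev_one_box:,.0f}")   # 950,000
    print(f"EV(two-box) = {ev_two_box:,.0f}")   # 51,000

A theory that treats f and f' as unrelated sees two-boxing as dominant and forgoes the larger expected payoff; one that exploits the correlation one-boxes.
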
EDT can deal with functional correlations.
EDT doesn't distinguish functional correlations from spurious correlations, while FDT doesn't recognise functional correlations.
I use this to specify EFDT (evidential functional decision theory), which considers P(f(π) = f'(π)) instead of P(f = f').
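A hedged sketch of what P(f(π) = f'(π)) measures, using a made-up stand-in problem (minimum-coin change with denominations {1, 3, 4}, where a greedy heuristic is usually but not always optimal) rather than any decision problem from the papers:

    # f: exact algorithm; f_prime: greedy heuristic. They are different
    # algorithms, yet agree on most instances pi, so P(f(pi) = f'(pi)) is
    # high even though P(f = f') = 0.
    COINS = (4, 3, 1)

    def f(amount):
        """Exact minimum number of coins, by dynamic programming."""
        best = [0] + [None] * amount
        for a in range(1, amount + 1):
            best[a] = 1 + min(best[a - c] for c in COINS if c <= a)
        return best[amount]

    def f_prime(amount):
        """Greedy: always take the largest coin that still fits."""
        n = 0
        for c in COINS:
            n += amount // c
            amount %= c
        return n

    instances = range(1, 201)
    agreement = sum(f(a) == f_prime(a) for a in instances) / len(instances)
    print(f"estimated P(f(pi) = f'(pi)) = {agreement:.2f}")  # strictly below 1
    print(f(6), f_prime(6))  # 2 vs 3: functionally correlated, not equivalent
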
I specify the requirements for a full implementation of FDT and EFDT.
I'll publish the first draft of the paper here after I'm done.
The paper would be long, because I specify a framework for evaluating decision theories in the paper.
Using this framework I show that EFDT > FDT > ADT > CDT.
I also show that EFDT > EDT.
This framework is basically a hierarchy of decision theories.
A > B means that the set of problems that B correctly decides is a subset of the set of problems that A correctly decides.
The dependence hierarchy is why CDT < ADT < FDT.
EFDT > FDT because EFDT can recognise functional correlations.
EFDT > EDT because EFDT can distinguish functional correlations from spurious correlations.
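The ordering itself can be read purely set-theoretically; here is a toy sketch with placeholder problem labels (not claims about which theory actually solves which named decision problem):

    # A > B iff the set of problems B decides correctly is a proper subset of
    # the set A decides correctly. Labels p1..p5 are placeholders only.
    def better(a, b):
        return b < a   # proper-subset comparison on Python sets

    CDT  = {"p1", "p2"}
    ADT  = CDT | {"p3"}
    FDT  = ADT | {"p4"}
    EFDT = FDT | {"p5"}

    assert better(EFDT, FDT) and better(FDT, ADT) and better(ADT, CDT)
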
I plan to write the paper as best as I can, and if I think it's good enough, I'll try submitting it.

Comment author: Dagon 23 October 2017 03:28:54PM 0 points [-]

Your first thought:

  • Pick outcome with highest Kelly bet and bet on it consistently (I am not sure if this is the best strategy as opposed to some mixed strategy involving outcomes with different Kelly bets).

seems correct, no mixed strategy needed for games without opposing strategy.
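For what it's worth, a minimal sketch of that first strategy; the outcome names and numbers are illustrative, and "highest Kelly bet" is read here as "highest expected log growth at the Kelly stake":

    import math

    def kelly_fraction(p, b):
        """Optimal bankroll fraction for a bet won with probability p that
        pays b units (net) per unit staked: f* = p - (1 - p) / b."""
        return max(0.0, p - (1 - p) / b)

    def log_growth(p, b, f):
        """Expected log growth rate of the bankroll when staking fraction f."""
        if f <= 0:
            return 0.0
        return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

    # Candidate outcomes: (probability of winning, net payout odds).
    outcomes = {"A": (0.60, 1.0), "B": (0.25, 4.0), "C": (0.10, 8.0)}

    best = max(outcomes, key=lambda k: log_growth(*outcomes[k],
                                                  kelly_fraction(*outcomes[k])))
    for name, (p, b) in outcomes.items():
        stake = kelly_fraction(p, b)
        print(name, f"f*={stake:.3f}", f"growth={log_growth(p, b, stake):.4f}")
    print("bet consistently on:", best)
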

  • Assign p < 1 as the probability that you would continue the game for the next round. If p = 1, you would be trapped in the Casino for eternity. If p < 1, you would almost surely leave the Casino at some point. This satisfies the requirements of eventually leaving the Casino.

This confuses me - you claim that the player is immortal and fatigue-free, and that he values money linearly with no upper bound. What's this requirement to leave? If money is NOT valuable in itself, but only in the outside world, you have to add that conversion to your Kelly calculations, including declining marginal utility, which probably means you leave when no bet has a positive Kelly bet size.
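On the second bullet, a small simulation sketch (parameters illustrative) of why any fixed continue-probability p < 1 guarantees leaving: the length of the stay is geometrically distributed, with expected value 1/(1 - p) rounds.

    import random

    def rounds_played(p, rng):
        """Play one round, then keep playing as long as a biased coin says 'continue'."""
        n = 1
        while rng.random() < p:
            n += 1
        return n

    rng = random.Random(0)
    p = 0.99
    stays = [rounds_played(p, rng) for _ in range(10_000)]
    print("mean stay:", sum(stays) / len(stays))   # close to 1 / (1 - p) = 100
    # P(still playing after n rounds) = p**n, which tends to 0, so the player
    # leaves the Casino with probability 1 even though p is close to 1.
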

Comment author: DragonGod 23 October 2017 04:45:13PM 0 points [-]

Money is only valuable in the outside world. So you'll need to eventually leave the Casino.

You have no memory of previous rounds, so how would you evaluate the declining marginal utility of money?

[Link] Absent Minded Gambler

0 DragonGod 23 October 2017 02:42PM
Comment author: DragonGod 14 October 2017 06:19:28AM 0 points [-]

Please help me with the maths; I'm trying to do it myself (without calculus or measure theory, as I haven't yet learned them), but I'm not optimistic.
