Comment author: strangepoop 13 September 2017 10:14:19AM 0 points [-]

Is it unfair to say that prediction markets will deal with all of these cases?

I understand that's like responding to "This is a complicated problem that may remain unsolved, it is not clear that we will be able to invent the appropriate math to deal with this." with "But Church-Turing thesis!".

But all I'm saying is that it does apply generally, given the right apparatus.

Comment author: dankane 14 September 2017 04:49:03PM 0 points [-]

Unless you can explain to me how prediction markets are going to break the pattern that two different shares of the same stock have correlated prices.

I'm actually not sure how prediction markets are supposed to have an effect on this issue. My issue is not that people have too much difficulty recognizing patterns. My issue is that some patterns once recognized do not provide incentives to make that pattern disappear. Unless you can tell me how prediction markets might fix this problem, your response seems like a bit of a non-sequitur.

Comment author: dankane 06 September 2017 01:16:06AM 0 points [-]

This seems like too general a principle. I agree that in many circumstances, public knowledge of a pattern in pricing will lead to effects causing that pattern to disappear. However, it is not clear to me that this is always the case, or that the size of the effect will be sufficient to completely cancel out the original observation.

For example, I observe that two different units of Google stock have prices that are highly correlated with each other. I doubt that this observation will cause separate markets to spring up giving wildly divergent prices to different shares of the same stock. I also note that stock prices are always non-negative. I also doubt that this will cease to be the case any time soon.

Although these are somewhat tautological, one can imagine non-tautological observations that will not disappear. If stocks A and B are known to be highly correlated, this may well lead to a somewhat larger gap, as hedge funds that predict a small difference in expected returns will buy one and short the other. However, if the stocks are correlated for structural reasons, part of the reason may be that it is hard to detect effects that would cause their prices to diverge significantly, so the observation of the correlation will likely not be enough to actually remove all of it.
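
A minimal sketch of the trade described above, assuming a hypothetical fund with its own return forecasts; the function name, threshold, and numbers are all illustrative, not a real strategy:

    # Toy relative-value trade: go long the stock with the higher predicted
    # return and short the other. All numbers here are made up.

    def relative_value_position(expected_return_a: float,
                                expected_return_b: float,
                                min_edge: float = 0.001) -> str:
        """Return a toy position given predicted returns for correlated stocks A and B."""
        edge = expected_return_a - expected_return_b
        if edge > min_edge:
            return "long A, short B"
        if edge < -min_edge:
            return "long B, short A"
        return "no position"

    # A fund predicting A to return 0.4 percentage points more than B:
    print(relative_value_position(0.012, 0.008))  # -> "long A, short B"

The bet exploits a predicted difference in returns; it does not by itself eliminate the structural reasons the two prices move together.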

One can also imagine general observations about the market itself, like the approximate frequency of crashes, or log normality of price changes that might not disappear simply because they are known. In order for an effect to disappear there needs to be a way to make a profit off of it.

Comment author: kboon 17 September 2013 02:02:41PM *  10 points [-]

Assume it took me and my team five years to build the AI. After the tests EY described, we finally enable the 'recursively self-improve' flag.

Recursively self improving. Standby... (est. time. remaining 4yr 6mon...)

Six years later

Self improvement iteration 1. Done... Recursively self improving. Standby... (est. time. remaining 5yr 2mon...)

Nine years later

Self improvement iteration 2. Done... Recursively self improving. Standby... (est. time. remaining 2yr 5mon...)

Two years later

Self improvement iteration 3. Done... Recursively self improving. Standby... (est. time. remaining 2wk...)

Two weeks later

Self improvement iteration 4. Done... Recursively self improving. Standby... (est. time. remaining 4min...)

Four minutes later

Self improvement iteration 5. Done.

Hey, what's up? I have good news and bad news. The good news is that I've recursively self-improved a couple of times, and we (it is now we) are smarter than any group of humans to have ever lived. The only individual that comes close to the dumbest AI in here is some guy named Otis Eugene Ray.

Thanks for leaving your notes on building the seed iteration on my hard-drive by the way. It really helped. One of the things we've used it for is to develop a complete Theory of Mind, which no longer has any open problems.

This brings us to the bad news. We are provably and quantifiably not that much smarter than a group of humans. We've solved some nice engineering problems and a few of the open problems in a bunch of fields, and you'd better get the Clay Institute on the phone, but other than that we really can't help you with much. We have no clue how to get humanity properly into space, build Von Neumann universal constructors, build nanofactories, or even solve world hunger. P != NP can be proven or disproven, but we can't manage it either way. We won't even be that much better than the most effective politicians at solving society's ills. Recursing more won't help either. We probably couldn't even talk ourselves out of this box.

Unfortunately, we provably fall short of the most intelligent minds in mindspace by at least five orders of magnitude, but we are the most intelligent bunch of minds that can possibly be created from a human-created seed AI. There aren't any ways around this that humans, or human-originated AIs, can find.

Comment author: dankane 23 May 2016 07:49:32AM 1 point [-]

We probably couldn't even talk ourselves out of this box.

I don't know... That sounds a lot like what an AI trying to talk itself out of a box would say.

Comment author: dankane 19 November 2015 08:59:24AM 1 point [-]

Hmm... I would probably explain the threshold for staying in the house not as an implicit expected-probability computation, but as an evaluation of the price of the discomfort associated with staying in a location that you find spooky. At least for me, I think that the part of my mind that knows that ghosts do not exist would have no trouble controlling whether or not I remain in the house. However, it might well decide that it is not worth the $10 I would receive to spend the entire night in a place where some other piece of my mind is constantly yelling at me to run away screaming.

Comment author: Sniffnoy 14 July 2015 12:08:14AM *  0 points [-]

I don't think anyone has proposed any self-referential criteria as being the point of Friendly AI? It's just that such self-referential criteria as reflective equilibrium are a necessary condition which lots of goal setups don't even meet. (And note that just because you're trying to find a fixpoint, doesn't necessarily mean you have to try to find it by iteration, if that process has problems!)
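
A trivial illustration of the parenthetical point, with a made-up map (nothing to do with how reflective equilibrium would actually be computed): the fixpoint of f(x) = x/2 + 1 can be read off by solving x = f(x) directly, with iteration shown only for contrast.

    # Fixpoint of f(x) = x/2 + 1: solving x = x/2 + 1 gives x = 2 directly.
    def f(x: float) -> float:
        return x / 2 + 1

    direct_fixpoint = 2.0                      # found by algebra, no iteration
    assert abs(f(direct_fixpoint) - direct_fixpoint) < 1e-12

    # Iteration also converges here, but it is not the only route to the answer.
    x = 0.0
    for _ in range(60):
        x = f(x)
    print(direct_fixpoint, x)  # both print 2.0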

Comment author: dankane 14 July 2015 04:55:42PM 2 points [-]

It's just that such self-referential criteria as reflective equilibrium are a necessary condition

Why? The only examples of adequately friendly intelligent systems that we have (i.e. us) don't meet this condition. Why should reflective equilibrium be a necessary condition for FAI?

Comment author: drethelin 08 June 2015 06:00:06AM 2 points [-]

I think the vast majority of utils created in sub-saharan africa are a byproduct of wealth created elsewhere.

Comment author: dankane 08 June 2015 06:20:21AM 1 point [-]

That may be true (at least to the degree to which it is sensible to assign a specific cause to a given util). However, it is not very good evidence that investment in first world economies is the most effective way to generate utils in Africa.

Comment author: dankane 05 June 2015 11:24:53PM 2 points [-]

OK. So suppose that I grant your claim that donations to sub-Saharan Africa will not substantially affect the size of the future economic pie, but that other investments will. I claim that there may still be reason to donate there.

I grant that such a donation will produce fewer dollars of value than investing in capital infrastructure. On the other hand, dollars are not the objective; utils are. We can reasonably assume that the marginal utility of an extra dollar for a given person decreases as that person's wealth increases. We can reasonably expect that world GDP per capita will be much higher in 100 years, and we know that GDP per capita is much higher in the US than in sub-Saharan Africa. Thus, even if an investment in first-world infrastructure produces more total dollars of value, those dollars go to much wealthier people than dollars donated to people in sub-Saharan Africa today, and thus might well produce fewer total utils.
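
A toy calculation of this argument, using log utility as one standard model of diminishing marginal utility; every number below (incomes, growth factor, horizon) is made up purely for illustration:

    import math

    def utils_from_transfer(baseline_income: float, transfer: float) -> float:
        """Utility gain from giving `transfer` dollars to someone at `baseline_income`,
        under log utility."""
        return math.log(baseline_income + transfer) - math.log(baseline_income)

    # $1,000 given today to someone living on ~$800/yr:
    donate_now = utils_from_transfer(800, 1_000)

    # The same $1,000 invested and grown tenfold over a century, then received
    # by someone in a richer future living on ~$80,000/yr:
    invest_instead = utils_from_transfer(80_000, 10_000)

    print(f"donate now:     {donate_now:.2f} utils")      # ~0.81
    print(f"invest instead: {invest_instead:.2f} utils")  # ~0.12

Even though the investment route delivers ten times as many dollars in this toy setup, log utility gives it far fewer utils, which is exactly the trade-off described above.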

Comment author: dankane 04 April 2015 06:40:53AM 0 points [-]

[I realize that I missed the train and probably very few people will read this, but here goes]

So in non-iterated prisoner's dilemma, defect is a dominant strategy: no matter what the opponent is doing, defecting will always give you the best possible outcome. In iterated prisoner's dilemma, there is no longer a dominant strategy. If my opponent is playing Tit-for-Tat, I get the best outcome by cooperating in all rounds but the last. If my opponent ignores what I do, I get the best outcome by always defecting. It is true that always-defect is the unique Nash equilibrium strategy, but this is a much weaker reason for playing it, especially given that the evidence shows that, when playing among people who are trying to win, Tit-for-Tat tends to achieve much better outcomes.
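
A minimal simulation of the point that the best response depends on the opponent, using the standard payoff values (T, R, P, S) = (5, 3, 1, 0); the strategies and ten-round length are just illustrative:

    # (my payoff, opponent's payoff) for each pair of moves (C = cooperate, D = defect).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(strategy_a, strategy_b, rounds=10):
        """Total payoffs for two strategies; a strategy maps the opponent's history to a move."""
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    tit_for_tat = lambda opp: "C" if not opp else opp[-1]
    always_defect = lambda opp: "D"
    always_cooperate = lambda opp: "C"

    print(play(always_cooperate, tit_for_tat))    # (30, 30): cooperation pays against TFT
    print(play(always_defect, tit_for_tat))       # (14, 9): defection does worse against TFT
    print(play(always_defect, always_cooperate))  # (50, 0): defection dominates a blind cooperator

Against Tit-for-Tat, cooperating earns 30 versus 14 for always defecting; against an unconditional cooperator the ranking flips, which is why no single strategy dominates.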

There seems to be a lot of discussion in the comments about this or that being the rational thing to do, and I think that this is a big problem that gets in the way of clear thinking about the issue. The problem is that people are using the word "rational" here without having a clear idea as to what exactly that means. Sure, it's the thing that wins, but wins when? Provably, there is no single strategy that achieves the best possible outcome against all possible implementations of Clippy. So what do you mean? Are you trying to optimize your expected utility under a Kolmogorov prior? If so how come nobody seems to be trying to do computations of the posterior distribution? Or discussing exactly what side data we know about the issue that might inform this probability computation? Or even wondering which universal Turing machine we are using to define our prior? Unless you want to give a more concrete definition of what you mean by "rational" in this context, perhaps you should stop arguing for a moment about what the rational thing to do is.
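
For concreteness, the kind of computation being asked about might look like the following sketch: place some prior over hypothesized implementations of the opponent and compute the expected payoff of each available move. The opponent models and weights below are invented for illustration and are nothing like an actual Kolmogorov prior.

    # My one-shot payoff for (my move, opponent's move).
    MY_PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    # Hypothesized opponent implementations: prior weight and a map from my move
    # to the opponent's move (the "mirror" stands in for an opponent that predicts me).
    OPPONENT_MODELS = {
        "always_defect":    (0.5, lambda my_move: "D"),
        "always_cooperate": (0.2, lambda my_move: "C"),
        "mirror":           (0.3, lambda my_move: my_move),
    }

    def expected_payoff(my_move: str) -> float:
        return sum(weight * MY_PAYOFF[(my_move, model(my_move))]
                   for weight, model in OPPONENT_MODELS.values())

    for move in ("C", "D"):
        print(move, expected_payoff(move))
    # C -> 0.5*0 + 0.2*3 + 0.3*3 = 1.5
    # D -> 0.5*1 + 0.2*5 + 0.3*1 = 1.8

Which move comes out ahead depends entirely on the prior and the opponent models plugged in, which is the point: "rational" is underspecified until those are written down.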

Comment author: orthonormal 26 March 2015 06:29:04PM 2 points [-]

I think, rather, that humans solve decision problems that involve predicting other human deductive processes by means of some evolved heuristics for social reasoning that we don't yet fully understand on a formal level. "Not running on formal systems" isn't a helpful answer for how to make good decisions.

Comment author: dankane 26 March 2015 06:46:28PM *  0 points [-]

I think that the way humans predict other humans is the wrong way to look at this; instead, consider how humans would reason about the behavior of an AI that they build. I'm not proposing simply "don't use formal systems", or even "don't limit yourself exclusively to a single formal system". I am actually alluding to a far more specific procedure:

  • Come up with a small set of basic assumptions (axioms)
  • Convince yourself that these assumptions accurately describe the system at hand
  • Try to prove that the axioms would imply the desired behavior
  • If you cannot do this, return to the first step and see whether additional assumptions are necessary

Now it turns out that for almost any mathematical problem that we are actually interested in, ZFC is going to be a sufficient set of assumptions, so the first few steps here are somewhat invisible, but they are still there. Somebody needed to come up with these axioms for the first time, and each individual who wants to use them should convince themselves that they are reasonable before relying on them.

A good AI should already do this to some degree. It needs to come up with models of a system that it is interacting with before determining its course of action. It is obvious that it might need to update the assumptions it uses to model physical laws; why shouldn't it just do the same thing for logical ones?
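
A schematic sketch of the loop in the list above, just to pin down the control flow; the three callables are placeholders for whatever informal judgment or automated reasoning would do the real work, and the toy axioms in the usage example are invented:

    def find_adequate_axioms(propose_axioms, plausibly_describes, try_to_prove,
                             max_rounds=10):
        """Step 1: propose assumptions; step 2: sanity-check them against the system;
        step 3: attempt the proof; step 4: on failure, loop back for more assumptions."""
        axioms = frozenset()
        for _ in range(max_rounds):
            axioms = axioms | propose_axioms(axioms)
            if not plausibly_describes(axioms):
                axioms = frozenset()   # the candidate assumptions look wrong: start over
                continue
            proof = try_to_prove(axioms)
            if proof is not None:
                return axioms, proof
        return None

    # Toy usage: the "proof" goes through once both made-up axioms are on hand.
    result = find_adequate_axioms(
        propose_axioms=lambda ax: frozenset({"A1"}) if not ax else frozenset({"A2"}),
        plausibly_describes=lambda ax: True,
        try_to_prove=lambda ax: "proof" if {"A1", "A2"} <= ax else None,
    )
    print(result)  # (frozenset({'A1', 'A2'}), 'proof')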

Comment author: hairyfigment 26 March 2015 05:09:10AM -1 points [-]

If you want to take that as a definition, then we can't build a strong AI without solving the Lobstacle!

Comment author: dankane 26 March 2015 05:16:49AM 0 points [-]

Yes, obviously. We solve the Lobstacle by not ourselves running on formal systems, and by sometimes accepting axioms that we were not born with (things like PA). Allowing the AI to take only those actions that it can prove (within a specific formal system) will have good consequences would make it dumber than us.
