So what if p(H) = 1, p(H|A) = .4, p(H|B) = .3, and p(H|C) = .3? The evidence would suggest all are wrong. But I have also determined that A, B, and C are the only possible explanations for H. Clearly there is something wrong with my measurement, but I have no method of correcting for this problem.
Wait, how would you get P(H) = 1?
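The inconsistency can be made explicit: if A, B, and C are mutually exclusive and exhaustive, the law of total probability forces P(H) to be a weighted average of the three conditionals, so it can never exceed 0.4, let alone reach 1. A minimal sketch (the priors on A, B, C are made-up placeholders, since the comment doesn't give any):

```python
# Law of total probability: with A, B, C mutually exclusive and exhaustive,
# P(H) = P(H|A)P(A) + P(H|B)P(B) + P(H|C)P(C).
# The priors below are arbitrary placeholders summing to 1.

p_h_given = {"A": 0.4, "B": 0.3, "C": 0.3}
priors = {"A": 0.5, "B": 0.3, "C": 0.2}

p_h = sum(p_h_given[x] * priors[x] for x in p_h_given)
print(p_h)  # 0.35 for these priors; a weighted average of 0.4, 0.3, 0.3
            # can never exceed 0.4, so P(H) = 1 contradicts the premises
```

Whatever priors you pick, the conclusion is the same: either P(H) = 1 is a mismeasurement, or A, B, C are not actually exhaustive.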
Perhaps this is the wrong venue, but I'm curious how this work generalizes and either applies or doesn't apply to other lines of research.
Schmidhuber's group has several papers on "Goedel machines" and it seems like they involve the use of proofs to find self-rewrites.
We present the first class of mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers. Inspired by Kurt Gödel's celebrated self-referential formulas (1931), a Gödel machine (or 'Goedel machine' but not 'Godel machine') rewrites any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function and the hardware and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code.
Their 2005 paper "Completely Self-Referential Optimal Reinforcement Learners" explained the design, and their 2012 paper "Towards an Actual Gödel Machine Implementation" seems to be working towards making something vaguely practical. This is the same group whose PhD students helped found DeepMind (Shane Legg as a co-founder, and several others as very early employees). DeepMind was then acquired by Google in 2014.
Since that architecture uses a theorem proving system, and creates new versions of itself, and can even replace its own theorem proving system, it naively seems like the Löbstacle might come up. Are you familiar with Schmidhuber's group's work? Does it seem like their work will run into the Löbstacle and they're just not talking about it? Or does it seem like their architecture makes worries about the Löbstacle irrelevant via some clever architecting?
Basically, my question is "The Löbstacle and Gödel Machines... what's up with them?" :-)
This is several months too late, but yes! Gödel machines run into the Löbstacle, as seen in this MIRI paper. From the paper:
it is clear that the obstacles we have encountered apply to Gödel machines as well. Consider a Gödel machine G1 whose fallback policy would "rewrite" it into another Gödel machine G2 with the same suggester (proof searcher, in Schmidhuber's terminology). G1's suggester now wants to prove that it is acceptable to instead rewrite itself into G′2, a Gödel machine with a very slightly modified proof searcher. It must prove that G′2 will obtain at least as much utility as G2. In order to do so, naively we would expect that G′2 will again only execute rewrites if its proof searcher has shown them to be useful; but clearly, this runs into the Löbian obstacle, unless G1 can show that theorems proven by G′2 are in fact true.
There are a couple of problems here. First is the usual thing forgotten on LW -- costs. "More information" is worthwhile iff its benefits outweigh the costs of acquiring it. Second, your argument implies that, say, attempting to read all of Wikipedia (or the Encyclopedia Britannica, if you are worried about stability) from start to finish would be a rational thing to do. Would it?
No, it isn't. Being curious is a good heuristic for most people, because most people are in the region where gathering information is cheaper than its expected value. I don't think we disagree on anything concrete: I don't claim that curiosity is rational in itself, a priori; only that it's a fairly good heuristic.
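The cost-benefit point can be made concrete with a toy value-of-information calculation: learning is worth it only when its expected benefit exceeds its cost. All the numbers below (a rain probability, umbrella payoffs, a forecast treated as perfect) are invented for illustration:

```python
# Toy value-of-information sketch. An agent chooses an action under
# uncertainty about the weather; all numbers are made up.

p_rain = 0.3
u = {("umbrella", "rain"): 5, ("umbrella", "dry"): 3,
     ("no_umbrella", "rain"): 0, ("no_umbrella", "dry"): 10}

def eu(action):
    """Expected utility of an action under the current uncertainty."""
    return p_rain * u[(action, "rain")] + (1 - p_rain) * u[(action, "dry")]

# Act now, without gathering information: take the best available action.
best_now = max(eu("umbrella"), eu("no_umbrella"))

# Check a (perfect) forecast first, then act optimally in each state.
informed = p_rain * u[("umbrella", "rain")] + (1 - p_rain) * u[("no_umbrella", "dry")]

voi = informed - best_now  # expected value of the information
print(best_now, informed, voi)  # 7.0 8.5 1.5
```

Checking the forecast is rational only if its cost is below `voi`; curiosity-as-virtue amounts to betting that, for humans, this inequality usually holds.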
I agree denotationally, but object connotatively to 'rationality is systematized winning', so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with 'winning'. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe that by winning Eliezer meant winning at Newcomb's problem, but the idea of winning is normally extended to everything.
I think that Eliezer has disavowed using this statement precisely because of the connotations that people associate with it.
It is because of this that rationality is often considered to be split into two parts: normative and descriptive rationality.
What happened to prescriptive rationality?
Rationality maximizes expected performance
Hm. Since this is a core definition, I have an urge to examine it very carefully. First, "performance" is a bit fuzzy; would you mind if I replaced it with "utility"? We would get "rationality maximizes expected utility". I think I have a few questions about that.
Rationality maximizes. That implies that every rational action must maximize utility. Anything that does not maximize utility is not (fully) rational. In particular, satisficing is not rational.
Rationality maximizes expected utility. A great deal of heavy lifting is done by this word, and there are some traps here. For example, if you define utility as "that which you want" and add a little bit about revealed preferences, we get caught in a loop: you maximize what you want, and how do we know what you want? Why, it's whatever you maximize. In general, almost any action maximizes some utility function, and moreover there is no requirement for the utility function to be stable across time, so this gets complicated quite fast.
Rationality maximizes expected utility. At issue here are risk considerations. You can wave them away by saying that one should maximize risk-adjusted utility, but in practice this is a pretty big blind spot. Faced with estimated distributions of future utility, most people would pick one with the highest mean (they pick the maximum expected value), but that ignores the width of the distributions which is rarely a good idea.
Take curiosity. It's an accepted rationalist virtue. And yet I don't see how it maximizes expected utility.
I'm not sure if this is correct, but my best guess is:
It maximizes utility, in so far as most goals are better achieved with more information, and people tend to systematically underestimate the value of collecting more information or suffer from biases that prevent them from acquiring this information. Or, in other words, curiosity is virtuous because humans are bounded and flawed agents, and it helps rectify the biases that we fall prey to. Just like being quick to update on evidence is a virtue, and scholarship is a virtue.
EY, I'm not sure I'm with you about needing to get smarter to integrate all new experiences. If we want to stay and slay every monster, couldn't we instead allow ourselves to forget some experiences, and to not learn at maximum capacity?
It does seem wrong to willfully not learn, but maybe as a compromise, I could learn all that my ordinary brain allows, then allow that to act as a cap and not augment my intelligence until that level of challenges fully bored me. I could maybe even learn new things while forgetting others to make space.
Or am I merely misunderstanding something about how brains work?
My motivation for taking this tack is that I find the fun of making art and of telling stories more compelling than the fun of learning; therefore, I'm not inclined to learn as fast as possible, if it means skipping over other fun; I'm also disinclined to become so competent that I'm alienated from the hardships/imperfections that give my life a story / allow me to enjoy stories.
Yes, I think he recognizes this in this post. He also writes about this (from a slightly different perspective) in high challenge.
Results from the Good Judgment Project suggest that putting people into teams lets them significantly outperform (have lower Brier scores than) both (unweighted) averaging of probabilities across all predictors and the (admittedly also unweighted) averaging of probability estimates from the better portion of predictors. This seems to offer weak evidence that what goes on in a group is not simple averaging.
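For readers unfamiliar with the metric: the Brier score for binary forecasts is the mean squared error between stated probabilities and realized outcomes (lower is better), and unweighted averaging is the baseline the teams beat. A sketch with made-up forecasts:

```python
# Brier score for binary forecasts: mean squared error between forecast
# probabilities and outcomes (1 = event happened, 0 = it didn't).
# Lower is better. All forecasts and outcomes below are made up.

def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

forecaster_a = [0.9, 0.2, 0.7]
forecaster_b = [0.6, 0.4, 0.9]
outcomes     = [1,   0,   1]

# The unweighted-averaging baseline: average the probabilities, then score.
avg = [(a + b) / 2 for a, b in zip(forecaster_a, forecaster_b)]
print(brier(forecaster_a, outcomes),
      brier(forecaster_b, outcomes),
      brier(avg, outcomes))
```

The claim in the comment is that team deliberation produces forecasts scoring better than `brier(avg, outcomes)`-style pooling, not merely equal to it.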
Lots I agree with here. I was surprised to see basic income in your clustering above. As much as I think Cubans are the ones doing socialism wrong, and everyone doing socialism less, like Venezuela, isn't socialist enough, I'm right-wing and mindkilled enough to have rejected basic income using general right-wing arguments and assumptions until I read the consistent run of positive examples on the Wikipedia page. The straw that broke the camel's back was that there is right-wing support for basic income. That being said, I'm confident that I would pass ideological Turing tests.
That being said, I'm confident that I would pass ideological Turing tests.
Cool! You can try taking them here: http://blacker.caltech.edu/itt/
Wow, that was a long survey. Done! I'm not sure how good my answers were; as others mentioned, a lot of the questions felt underspecified.
Can you give a link to posts showing elitism in EA that weren't written in response to this one?