Comment author: curi 01 November 2017 07:29:26PM 1 point [-]

Does Deutsch write anywhere about what a precise definition of "explanation" would be?

Yes, in BoI. http://beginningofinfinity.com/books

In short, explanations typically talk about why/how/because.

The words "explanatory theory" seem to me to have a lot of fuzziness hiding behind them. But to the extent that "the sun is powered by nuclear fusion" is an explanatory theory I would say that the proposition ~T is just the union of many explanatory theories: "the sun is powered by oxidisation", "the sun is powered by gravitational collapse", and so on for all explanatory theories except "nuclear fusion".

Unless you're claiming non-explanatory theories don't exist at all, ~T includes both explanations and non-explanations. It doesn't consist of a union of many explanations.

A Bayesian might instead define theories T₁' = "quantum theory leads to approximately correct results in the following circumstances ..."

You've changed it to an instrumentalist theory which focuses on prediction instead of explanation. Deutsch refutes instrumentalism in his first book, FoR, also at the link above.

Comment author: cousin_it 06 November 2017 10:30:52AM 0 points [-]

You've changed it to an instrumentalist theory which focuses on prediction instead of explanation.

How so? I think it's still an explanatory theory, it just explains 99% of something instead of 100%.

Announcing the AI Alignment Prize

6 cousin_it 03 November 2017 03:45PM

Stronger-than-human artificial intelligence would be dangerous to humanity. It is vital that any such intelligence's goals be aligned with humanity's goals. Maximizing the chance that this happens is a difficult, important, and under-studied problem.

To encourage more and better work on this important problem, we (Zvi Mowshowitz and Vladimir Slepnev) are announcing a $5000 prize for publicly posted work advancing understanding of AI alignment, funded by Paul Christiano.

This prize will be awarded based on entries gathered over the next two months. If the prize is successful, we will award further prizes in the future.

This prize is not backed by or affiliated with any organization.

Rules

Your entry must be published online for the first time between November 3 and December 31, 2017, and contain novel ideas about AI alignment. Entries have no minimum or maximum size. Important ideas can be short!

Your entry must be written by you, and submitted before 9pm Pacific Time on December 31, 2017. Submit your entries either as URLs in the comments below, or by email to apply@ai-alignment.com. We may provide feedback on early entries to allow improvement.

We will award $5000 to between one and five winners. The first place winner will get at least $2500. The second place winner will get at least $1000. Other winners will get at least $500.

Entries will be judged subjectively. Final judgment will be by Paul Christiano. Prizes will be awarded on or before January 15, 2018.

What kind of work are we looking for?

AI Alignment focuses on ways to ensure that future smarter-than-human intelligence will have goals aligned with the goals of humanity. Many approaches to AI Alignment deserve attention. This includes technical and philosophical topics, as well as strategic research about related social, economic or political issues. A non-exhaustive list of technical and other topics can be found here.

We are not interested in research dealing with the dangers of existing machine learning systems, commonly called AI, that do not have smarter-than-human intelligence. These concerns are also understudied, but they are not the subject of this prize except in the context of future smarter-than-human intelligence. We are also not interested in general AI research. We care about AI Alignment, which may or may not also advance the cause of general AI research.

Comment author: Habryka 02 October 2017 08:15:12PM 4 points [-]

Strongly agree with 1. I have a plan for a separate thing at the top of the frontpage for logged-in users that takes up much less space and is actually useful for multiple visits. Here is a screenshot of my current UI mockup for the frontpage:

https://imgur.com/a/GXjTY

The emphasis continues to be on historical rather than recent content, with the frontpage emphasizing reading for logged-in users. If you don't have anything in your reading queue, the top part disappears completely and you just see the recent discussion (though by default HPMOR, The Sequences and The Codex are in your reading queue).

In response to comment by Habryka on Feedback on LW 2.0
Comment author: cousin_it 05 October 2017 04:06:31PM *  3 points [-]

I think it'd be nice to have one main view that everyone visits, organized as a list of posts sorted chronologically or by magic. Writing my mathy stuff on a website and showing it to friends would be easier if the website didn't have a big banner saying go read this Harry Potter fanfiction or that social issues blogger (much as I love both HPMOR and Scott). Maybe you could put these links in a sidebar instead?

Also as a longtime user I don't really care if people have read the Sequences. I don't see much correlation between "this person has read the Sequences" and "this person is interesting" that isn't screened off by "this person was interested in stuff like the Sequences to begin with".

Comment author: cousin_it 03 October 2017 11:11:50AM *  3 points [-]

Yeah, classical computers might need a lot of resources to simulate quantum mechanics. Quantum computers have no such limitation though, so it's probably not relevant to the simulation argument. Note that the paper doesn't mention the simulation argument, it was added by journalists working under evil incentives.

Comment author: cousin_it 30 September 2017 11:21:43PM *  2 points [-]

Nice! Right now I'm faced with an exercise in catching loopholes of exactly that kind, while trying to write a newbie-friendly text on UDT. Basically I'm going through a bunch of puzzles involving perfect predictors, trying to reformulate them as crisply as possible and remove all avenues of cheating. It's crazy.

For your particular puzzle, I think you can rescue it by making the gods go into an infinite loop when faced with a paradox. And when faced with a regular non-paradoxical question, they can wait for an unknown but finite amount of time before answering. That way you can't reliably distinguish an infinite loop from an answer that's just taking a while, so your only hope of solving the problem in guaranteed finite time is to ask non-paradoxical questions. That also stops you from manipulating gods into doing stuff, I think.
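A toy model may make the rescue vivid. In this sketch the interface and time scales are invented purely for illustration; the point is that silence at any finite time is consistent with both a loop and a slow answer.

```python
import random

def god_answer_time(paradoxical):
    """Toy model of the rescued puzzle: a paradoxical question sends the
    god into an infinite loop (no answer, ever), while a normal question
    is answered after a finite but a-priori unbounded delay."""
    return float('inf') if paradoxical else random.randint(1, 10**9)

def wait_for(answer_time, budget):
    """All the asker observes is whether an answer arrived within some
    waiting budget. No finite budget reliably detects a paradox."""
    return 'answered' if answer_time <= budget else 'silence so far'

print(wait_for(god_answer_time(True), 10**6))   # silence: it's a loop
print(wait_for(god_answer_time(False), 10**6))  # usually silence too
```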

Comment author: cousin_it 24 September 2017 12:07:57AM *  1 point [-]

Imagine you have two unknown bits, generated uniformly at random. What's the fastest way to learn them by asking yes or no questions?

1) Ask whether the first bit is 0, then do the same for the second bit. This way you always spend two questions.

2) First ask whether the bits are 00. If yes, you win. Otherwise ask the remaining questions in any way you like. This works out to (1 + 2 + 3 + 3)/4 = 9/4 questions on average, which is worse than the first method: one question identifies 00, but each of the other three outcomes costs two or three questions.

Moral of the story: if you want to learn information as fast as possible, you must split the search space into equal parts. That's the same as saying that the uniform distribution has maximum entropy: a question that splits the space equally extracts a full bit per answer.
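To double-check the 9/4 figure, here is a small sketch that computes the expected question count of both strategies exactly; the encoding of the strategies is mine, not part of the comment.

```python
from fractions import Fraction

# The four equally likely bit pairs.
outcomes = ['00', '01', '10', '11']

# Strategy 1: "is the first bit 0?", then "is the second bit 0?".
# Every outcome costs exactly 2 questions.
avg1 = Fraction(sum(2 for _ in outcomes), len(outcomes))

# Strategy 2: "is it 00?" first, then e.g. "is it 01?", then "is it 10?".
def cost2(outcome):
    if outcome == '00':
        return 1  # the first question already identifies it
    if outcome == '01':
        return 2  # "00?" no, "01?" yes
    return 3      # "00?" no, "01?" no, "10?" separates 10 from 11

avg2 = Fraction(sum(cost2(o) for o in outcomes), len(outcomes))

print(avg1, avg2)  # 2 and 9/4: the unequal 1:3 split loses on average
```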

Comment author: Stuart_Armstrong 22 September 2017 02:50:38PM 0 points [-]

I think the second thing to do is to list all the problems, and what the correct answer is/what answer the algorithms give.

Comment author: cousin_it 22 September 2017 03:41:26PM *  5 points [-]

My current outline of UDT is organized by levels:

1) Indexical uncertainty, which is solved by converting to single-player games with imperfect information. This level is basically playing with graphs. Absent-Minded Driver (see the sketch after this list), Wei's coordination problem, Psy-Kosh's problem. Interpreting anthropic problems as choosing the right game, like in your work.

2) Cartesian uncertainty, where your copies aren't delineated in the world and you need to find them first, then reduce the problem to level 1. This level is where self-referential sentences come in. Symmetric PD, Newcomb's Problem, Counterfactual Mugging. Models based on halting oracles, Peano arithmetic, modal logic.

3) Logical uncertainty, where you can't do level 2 crisply because your power is limited. This level is about approximations and bounds. Proof searchers, spurious counterfactuals, logical inductors, logical updatelessness.

4) Full on game theory, where even level 3 isn't enough because there are other powerful agents around. This level is pretty much warfare and chaos. Bargaining, blackmail, modal combat, agent simulates predictor.
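As a concrete taste of level 1, here is a toy sketch of the Absent-Minded Driver treated as a single-player game with imperfect information: the driver can't tell the two intersections apart, so a policy is a single continue-probability p, and we optimize expected payoff over policies. The payoffs (0 for exiting at the first intersection, 4 for exiting at the second, 1 for continuing past both) are the standard textbook ones, used purely for illustration.

```python
# Absent-Minded Driver as a one-player game with imperfect information.
# The driver cannot distinguish the two intersections, so a policy is a
# single probability p of continuing at whichever intersection he's at.
# Payoffs: exit at X -> 0, exit at Y -> 4, continue past Y -> 1.

def expected_payoff(p):
    exit_at_x = (1 - p) * 0        # exits immediately at X
    exit_at_y = p * (1 - p) * 4    # continues at X, exits at Y
    go_past_y = p * p * 1          # continues at both intersections
    return exit_at_x + exit_at_y + go_past_y

# Choosing a policy = maximizing over p; brute force on a grid.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # ~0.667, ~1.333: p = 2/3, value 4/3
```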

At this point I feel that we have definitively solved levels 1 and 2, are making progress on level 3, and have a few glimpses of level 4. But even on the first two levels, writing good exposition is a challenge. I'll send you drafts as I go.

Comment author: Stuart_Armstrong 22 September 2017 01:54:04PM 0 points [-]

So do you think it's worth writing up, or getting someone to do so?

Comment author: cousin_it 22 September 2017 02:01:53PM 0 points [-]

Yeah. I've been feeling a bit guilty, so I started another attempt at a writeup; in a week or two we'll see if it goes anywhere.

Comment author: Stuart_Armstrong 20 September 2017 06:49:29PM 1 point [-]

What would be required for UDT to be written up fully? And what is missing between FDT (in the Death in Damascus problem) and UDT?

Comment author: cousin_it 21 September 2017 08:39:28AM *  0 points [-]

I'm puzzled by the FDT paper: it claims to be a generalization of UDT, but it seems strictly less general, the difference being this.

As to your first question, we already have several writeups that fit in the context of the decision theory literature (TDT, FDT, ADT), but they omit many ideas that would fit better in a different context, the intersection of game theory and computation (like the paper on program equilibrium by Tennenholtz). Thinking back, I played a large part in developing these ideas, and writing them up was probably my responsibility, which I flunked :-( Wei's reluctance to publish also played a role, though; see the thread "Writing up the UDT paper" on the workshop list in Sep 2011.

Comment author: cousin_it 20 September 2017 05:25:36PM *  3 points [-]

Congratulations!

Just a quick note on another possible way to present this idea. A few years ago I realized that the simple subset of UDT can be formulated as a certain kind of single-player game. It seems like the most natural way to connect UDT to standard terminology, and it's very crisp mathematically. Then one can graduate to the modal version, which goes a little deeper, is just as crisp, and is decidable to boot. That's the path my dream paper would take, if I didn't have a job and a million other responsibilities :-/
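For a flavor of what such a single-player formulation might look like, here is a minimal sketch using Newcomb's Problem, where choosing a policy once fixes both the agent's action and the predictor's forecast. The two-policy encoding and the payoffs are illustrative assumptions of this sketch, not the construction alluded to above.

```python
# Newcomb's Problem as a one-shot, single-player policy-selection game.
# The predictor is modeled as another instance of the chosen policy, so
# picking a policy determines every place the policy appears in the world.

POLICIES = ['one-box', 'two-box']

def payoff(policy):
    prediction = policy  # a perfect predictor runs the same policy
    big_box = 1_000_000 if prediction == 'one-box' else 0
    small_box = 1_000
    return big_box if policy == 'one-box' else big_box + small_box

best = max(POLICIES, key=payoff)
print(best, payoff(best))  # one-box 1000000
```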
