All of warbo's Comments + Replies

warbo00

it can't, by definition, invent a new tool entirely.

Can humans "invent a new tool entirely", when all we have to work with are a handful of pre-defined quarks, leptons and bosons? AIXI is hard-coded to just use one tool, a Turing Machine; yet the open-endedness of that tool make it infinitely inventive.

We can easily put a machine shop, or any other manufacturing capabilities, into the abstract room. We could ignore the tedious business of manufacturing and just include a Star-Trek-style replicator, which allows the AI to use anything for which... (read more)

0OrphanWilde
Your response ignores the constraints this line of conversation has already engendered. I'm happy to reply to you, but your response doesn't have anything to do with the conversation that has already taken place. Let's suppose the constraint on the AI being unable to update its world model applies. How can it use a tool it has just invented? It can't update its world model to include that tool. Supposing it -can- update its world model, but only in reference to new tools it has developed: How do you prevent it from inventing a tool like the psychological manipulation of the experimenters running the simulation?
warbo00

This mostly reminds me of the "tragedy of the commons", where everyone benefits when an action is taken (like irrigating land, picking up litter, etc.), but it costs some small amount to the one who takes the action, such that everyone agrees the action should be taken, but nobody wants to do it themselves.

There is also the related concept of "not in my back yard" (NIMBY), where everyone agrees that some 'necessary evil' should be done, like creating a new landfill site or nuclear power plant, but nobody wants to make the sacrifice themselves (ie... (read more)

0philh
This doesn't feel like either of those, to me. Tragedy of the commons and NIMBY seem like they have symmetric Nash equilibria where the total payoff is terrible. In the farmer's dilemma, the Nash equilibria are asymmetric, at D/C and C/D, and these are optimal in terms of total utility, but they're unfair.
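
For concreteness, here is a toy 2x2 game with invented payoff numbers (not taken from the farmer's dilemma post) that has exactly the structure philh describes: the only pure-strategy Nash equilibria are the asymmetric D/C and C/D, and those also maximise total payoff.

```python
# Illustrative (made-up) payoffs: the only pure-strategy Nash equilibria
# are the asymmetric outcomes, and they maximise total payoff.
ACTIONS = ["C", "D"]
PAYOFFS = {  # (row action, column action) -> (row payoff, column payoff)
    ("C", "C"): (2, 2),
    ("C", "D"): (1, 4),
    ("D", "C"): (4, 1),
    ("D", "D"): (0, 0),
}

def is_nash(row, col):
    """A profile is a pure Nash equilibrium if neither player gains by
    unilaterally switching their own action."""
    r, c = PAYOFFS[(row, col)]
    row_ok = all(PAYOFFS[(r2, col)][0] <= r for r2 in ACTIONS)
    col_ok = all(PAYOFFS[(row, c2)][1] <= c for c2 in ACTIONS)
    return row_ok and col_ok

for profile, (r, c) in PAYOFFS.items():
    print(profile, "total =", r + c, "Nash" if is_nash(*profile) else "")
# Only ('C', 'D') and ('D', 'C') are equilibria, each with the maximal
# total of 5: efficient, but unfair to whoever ends up cooperating.
```
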
warbo10

By an off switch I mean a backup goal. Goals are standardly regarded as immune to self-modification, so an off switch, in my sense, would be too.

This is quite a subtle issue.

If the "backup goal" is always in effect, eg. it is just another clause of the main goal. For example, "maximise paperclips" with a backup goal of "do what you are told" is the same as having the main goal "maximise paperclips while doing what you are told".

If the "backup goal" is a separate mode which we can switch an AI into, eg. &quo... (read more)

1TheAncientGeek
You haven't dealt with the case where the safety goals are the primary ones. Goals of that kind were proposed by Isaac Asimov, for example.
warbo20

Buying one $1 lottery ticket earns you a tiny chance - 1 in 175,000,000 for the Powerball - of becoming absurdly wealthy. NOT buying that one ticket gives you a chance of zero.

There are ways to win a lottery without buying a ticket. For example, someone may buy you a ticket as a present, without your knowledge, which then wins.

So buying one ticket is "infinitely" better than buying no tickets.

No, it is much more likely that you'll win the lottery by buying tickets than by not buying tickets (assuming you're unlikely to be gifted a ticket), b... (read more)
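
A quick back-of-the-envelope comparison, using an invented probability of being gifted a ticket, makes the point that the ratio is large but finite:

```python
# Back-of-the-envelope numbers; the gift probability is a made-up
# illustration, not a real estimate.
p_jackpot = 1 / 175_000_000   # chance a given ticket wins the Powerball
p_gifted  = 1 / 1_000         # assumed chance someone gifts you a ticket

# Ignoring the negligible overlap between "your ticket wins" and "a gifted ticket wins":
p_win_if_buy = p_jackpot + p_gifted * p_jackpot   # your ticket, or a gifted one
p_win_if_not = p_gifted * p_jackpot               # only a gifted ticket can win

print(p_win_if_buy / p_win_if_not)  # ~1001: much better, but not "infinitely" better
```
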

warbo40

My intuition is that the described AIXItl implementation fails because its implementation is too low-level. A higher-level AIXItl can succeed though, so it's not a limitation of AIXItl. Consider the following program:

P1) Send the current machine state* as input to a 'virtual' AIXItl.

P2) Read the output of this AIXItl step, which will be a new program.

P3) Write a backup of the current machine state*. This could be in a non-executing register, for example.

P4) Replace the machine's state (but not the backup!) to match the program provided by AIXItl.

Now, as ... (read more)
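
A rough sketch of the wrapper loop P1-P4 (purely schematic: the 'virtual' AIXItl step and the machine it runs on are stubbed out with trivial placeholders of my own invention):

```python
# Purely schematic; none of these names come from the original comment.
machine_state = "initial program"
backup_register = None

def aixitl_step(state):
    # Stand-in for one step of the 'virtual' AIXItl; returns a new program.
    return f"program chosen given ({state})"

def wrapper_step():
    global machine_state, backup_register
    state = machine_state                  # P1: current state goes to AIXItl as input
    new_program = aixitl_step(state)       # P2: read out the proposed new program
    backup_register = state                # P3: back up the old state (non-executing register)
    machine_state = new_program            # P4: overwrite the running state, backup untouched

wrapper_step()
print(backup_register, "->", machine_state)
```
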

warbo10

One of my mistakes was believing in Bayesian decision theory, and in constructive logic at the same time. This is because traditional probability theory is inherently classical, because of the axiom that P(A + not-A) = 1.

Could you be so kind as to expand on that?

Classical logics make the assumption that all statements are either exactly true or exactly false, with no other possibility allowed. Hence classical logic will take shortcuts like admitting not(not(X)) as a proof of X, under the assumptions of consistency (we've proved not(not(X)) so there i... (read more)
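
As an aside (not from the original comment), the gap can be made concrete in a proof assistant: constructively one can prove the double negation of excluded middle, but removing that outer double negation, the kind of shortcut described above, needs a classical axiom. A sketch in Lean:

```lean
-- Provable with no classical axioms: the *double negation* of excluded middle.
theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))

-- Stripping a double negation (¬¬X → X) is exactly the classical step;
-- here it needs Classical.byContradiction.
theorem dne (P : Prop) (h : ¬¬P) : P :=
  Classical.byContradiction (fun hnp => h hnp)
```
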

warbo10

We can generalise votes to carry different weights. Starting today, everyone who currently has one vote continues to have one vote. When someone makes a copy (electronic or flesh), their voting power is divided between themselves and the copy. The total amount of voting power is conserved and, assuming that copies default to the political opinion of their prototypes, the political landscape only moves when someone changes their mind.
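
A minimal sketch of the bookkeeping this needs (class and method names are mine, purely illustrative):

```python
# Illustrative only: copying a voter splits their weight, so total voting
# power is conserved and copies inherit the prototype's opinion.
class Voter:
    def __init__(self, opinion, weight=1.0):
        self.opinion = opinion
        self.weight = weight

    def copy(self):
        """Split voting power evenly between prototype and copy."""
        self.weight /= 2
        return Voter(self.opinion, self.weight)

electorate = [Voter("A"), Voter("B")]
electorate.append(electorate[0].copy())  # "A" makes a copy of itself
assert abs(sum(v.weight for v in electorate) - 2.0) < 1e-9  # total power unchanged
```
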

warbo70

We only need to perform proof search when we're given some unknown blob of code. There's no need to do a search when we're the ones writing the code; we know it's correct, otherwise we wouldn't have written it that way.

Admittedly many languages allow us to be very sloppy; we may not have to justify our code to the compiler, and the language may not be powerful enough to express the properties we want. However, there are some languages which do let us do this (Idris, Agda, Epigram, ATS, etc.). In such languages we don't actually write a program at all; we write down... (read more)
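
To give a flavour of this (my own toy example, written in Lean rather than the languages listed, though Idris or Agda code would look much the same): the desired property, that appending vectors adds their lengths, is stated in the type, and the compiler only accepts a definition that satisfies it; no after-the-fact proof search is involved.

```lean
-- Toy example: the length of the result is part of append's type, so any
-- definition that returns the wrong length simply fails to type-check.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- (Writing the result length as m + n lets the recursion check definitionally.)
def Vec.append : Vec α n → Vec α m → Vec α (m + n)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (xs.append ys)
```
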

warbo00

Scheme requires tail-call optimisation, so if you use tail-recursion then you'll never overflow.

0Mestroyer
Does that include tail-calls in mutual recursion? Even if you were going to return whatever your opponent's code did, you probably couldn't count on them doing the same.
warbo20

Page 4 footnote 8 in the version you saw looks like footnote 9 in mine.

I don't see how 'proof-of-bottom -> bottom' makes a system inconsistent. This kind of formula appears all the time in Type Theory, and is interpreted as "not(proof-of-bottom)".

The 'principle of explosion' says 'forall A, bottom -> A'. We can instantiate A to not(proof-of-bottom), giving 'bottom -> not(proof-of-bottom)', then compose this with "proof-of-bottom -> bottom" to get "proof-of-bottom -> not(proof-of-bottom)". This is an inconsistency iff we can show proof-o... (read more)
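
Spelling that composition out (this formalisation is mine; 'proof-of-bottom' is abstracted as a bare proposition standing in for the system's provability predicate applied to bottom):

```lean
-- "Proof-of-bottom" abstracted as an arbitrary proposition.
variable (ProvableBottom : Prop)

-- The formula in question: proof-of-bottom -> bottom.
variable (h : ProvableBottom → False)

-- Explosion (False.elim), instantiated at not(proof-of-bottom) and composed
-- with h, gives: proof-of-bottom -> not(proof-of-bottom).
example : ProvableBottom → ¬ProvableBottom :=
  fun p => False.elim (h p)

-- That only becomes an outright contradiction when proof-of-bottom itself
-- can also be exhibited.
example (selfDefeat : ProvableBottom → ¬ProvableBottom)
        (p : ProvableBottom) : False :=
  selfDefeat p p
```
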