This mostly reminds me of the "tragedy of the commons", where everyone benefits when an action is taken (like irrigating land, picking up litter, etc.) but it costs a small amount to the one who takes the action, so everyone agrees that the action should be taken, yet nobody wants to do it themselves.
There is also the related concept of "not in my back yard" (NIMBY), where everyone agrees that some 'necessary evil' should be done, like creating a new landfill site or nuclear power plant, but nobody wants to make the sacrifice themselves (ie...
By an off switch I mean a backup goal. Goals are standardly regarded as immune to self-modification, so an off switch, in my sense, would be too.
This is quite a subtle issue.
If the "backup goal" is always in effect, eg. it is just another clause of the main goal. For example, "maximise paperclips" with a backup goal of "do what you are told" is the same as having the main goal "maximise paperclips while doing what you are told".
If the "backup goal" is a separate mode which we can switch an AI into, eg. &quo...
Buying one $1 lottery ticket earns you a tiny chance - 1 in 175,000,000 for the Powerball - of becoming absurdly wealthy. NOT buying that one ticket gives you a chance of zero.
There are ways to win a lottery without buying a ticket. For example, someone may buy you a ticket as a present, without your knowledge, which then wins.
So buying one ticket is "infinitely" better than buying no tickets.
No, it is much more likely that you'll win the lottery by buying tickets than by not buying tickets (assuming it's unlikely that you'll be gifted a ticket), b...
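To put rough numbers on "much more likely but not infinitely more likely" (the gift probability below is an assumption for illustration, not a figure from the thread):

```python
# Compare the chance of winning by buying one ticket vs. by being gifted one.
p_win_per_ticket = 1 / 175_000_000   # Powerball odds quoted above
p_gifted_ticket = 1 / 1_000          # assumed chance of being gifted a ticket

p_win_if_buy = p_win_per_ticket
p_win_if_not_buy = p_gifted_ticket * p_win_per_ticket

# Buying is ~1,000x more likely to win under these assumptions - large, but finite.
print(p_win_if_buy / p_win_if_not_buy)
```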
My intuition is that the described AIXItl implementation fails because its implementation is too low-level. A higher-level AIXItl can succeed, though, so it's not a limitation of AIXItl itself. Consider the following program (a code sketch follows after the steps):
P1) Send the current machine state* as input to a 'virtual' AIXItl.
P2) Read the output of this AIXItl step, which will be a new program.
P3) Write a backup of the current machine state*. This could be in a non-executing register, for example.
P4) Replace the machine's state (but not the backup!) with the program provided by AIXItl.
Now, as ...
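A minimal sketch of that wrapper loop, with `Machine` and `virtual_aixitl` as illustrative stand-ins (this is not a real AIXItl implementation):

```python
class Machine:
    """Toy model of the machine: an executing state plus a non-executing backup register."""
    def __init__(self, state: bytes):
        self.state = state
        self.backup_register = None

def virtual_aixitl(state: bytes) -> bytes:
    """Stand-in for one step of the 'virtual' AIXItl, which outputs a new program."""
    return state  # placeholder

def wrapper_step(machine: Machine) -> None:
    state = machine.state                # P1) send the current machine state as input
    new_program = virtual_aixitl(state)  # P2) read the output: a new program
    machine.backup_register = state      # P3) back up the current state (non-executing register)
    machine.state = new_program          # P4) replace the state (but not the backup!)
```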
One of my mistakes was believing in Bayesian decision theory and in constructive logic at the same time. This is a problem because traditional probability theory is inherently classical, owing to the axiom that P(A or not-A) = 1.
Could you be so kind as to expand on that?
Classical logics assume that all statements are either exactly true or exactly false, with no other possibility allowed. Hence classical logic will take shortcuts like admitting not(not(X)) as a proof of X, under the assumption of consistency (we've proved not(not(X)), so there i...
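As an illustration of the asymmetry (a Lean sketch, not from the original comment): constructively we can always pass from X to not(not(X)), and we can even prove the double negation of excluded middle, but the reverse step from not(not(X)) to X is exactly the classical shortcut described above.

```lean
-- Constructively provable: X implies ¬¬X.
theorem to_double_neg (X : Prop) (x : X) : ¬¬X :=
  fun nx => nx x

-- Also constructively provable: the double negation of excluded middle.
theorem not_not_em (X : Prop) : ¬¬(X ∨ ¬X) :=
  fun h => h (Or.inr (fun x => h (Or.inl x)))

-- The converse, ¬¬X → X, is not derivable without a classical axiom.
```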
We can generalise votes to carry different weights. Starting today, everyone who currently has one vote continues to have one vote. When someone makes a copy (electronic or flesh), their voting power is divided between themselves and the copy. The total amount of voting power is conserved and, assuming that copies default to the political opinion of their prototypes, the political landscape only moves when someone changes their mind.
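A minimal sketch of the bookkeeping (illustrative Python; the names and the 50/50 split are assumptions, not part of the proposal above):

```python
# Voting power is conserved: copying splits the parent's weight rather than minting new votes.
weights = {"alice": 1.0, "bob": 1.0}

def make_copy(weights: dict, parent: str, copy_name: str, share: float = 0.5) -> None:
    """Divide `parent`'s voting power between the parent and its new copy."""
    w = weights[parent]
    weights[parent] = w * (1 - share)
    weights[copy_name] = w * share

make_copy(weights, "alice", "alice_copy")
assert abs(sum(weights.values()) - 2.0) < 1e-9  # total voting power unchanged
```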
We only need to perform proof search when we're given some unknown blob of code. There's no need to do a search when we're the ones writing the code; we know it's correct, otherwise we wouldn't have written it that way.
Admittedly many languages allow us to be very sloppy; we may not have to justify our code to the compiler, and the language may not be powerful enough to express the properties we want. However, there are some languages which do allow this (Idris, Agda, Epigram, ATS, etc.). In such languages we don't actually write a program at all; we write down...
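For a flavour of what such languages let you express (a sketch in Lean-style dependent types, standing in for Idris/Agda; the example is mine, not from the comment): the type itself carries the property we care about, so an implementation that violates it simply will not typecheck.

```lean
-- Length-indexed vectors: the length is part of the type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- The type of `append` already states that the lengths add up; a wrong
-- implementation would be rejected by the typechecker.
def append : Vec α n → Vec α m → Vec α (m + n)
  | Vec.nil,       ys => ys
  | Vec.cons x xs, ys => Vec.cons x (append xs ys)
```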
Scheme requires tail-call optimisation, so if you use tail recursion then you'll never overflow the stack.
Page 4 footnote 8 in the version you saw looks like footnote 9 in mine.
I don't see how 'proof-of-bottom -> bottom' makes a system inconsistent. This kind of formula appears all the time in Type Theory, and is interpreted as "not(proof-of-bottom)".
The 'principle of explosion' says 'forall A, bottom -> A'. We can instantiate A to get 'bottom -> not(proof-of-bottom)', then compose this with "proof-of-bottom -> bottom" to get "proof-of-bottom -> not(proof-of-bottom)". This is an inconsistency iff we can show proof-o...
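Spelling out that composition (a Lean sketch with ProofOfBottom as an abstract proposition; my illustration, not from the thread):

```lean
-- "proof-of-bottom → bottom" is, by definition, just ¬ProofOfBottom.
example (ProofOfBottom : Prop) (h : ProofOfBottom → False) : ¬ProofOfBottom :=
  h

-- Composing with explosion (instantiated at ¬ProofOfBottom) gives
-- ProofOfBottom → ¬ProofOfBottom, which is not by itself a contradiction.
example (ProofOfBottom : Prop) (h : ProofOfBottom → False) :
    ProofOfBottom → ¬ProofOfBottom :=
  fun p => False.elim (h p)
```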
Can humans "invent a new tool entirely", when all we have to work with are a handful of pre-defined quarks, leptons and bosons? AIXI is hard-coded to just use one tool, a Turing Machine; yet the open-endedness of that tool makes it infinitely inventive.
We can easily put a machine shop, or any other manufacturing capabilities, into the abstract room. We could ignore the tedious business of manufacturing and just include a Star-Trek-style replicator, which allows the AI to use anything for which...