(Sorry for the late response, I hadn't checked my LW inbox much since my previous comments.)
If it were the case that such a function exists but cannot possibly be implemented (any implementation would be an implementation as a state), and no other function satisfying the same constraints could possibly be implemented, that seems like it would be a case where it is impossible to have an aligned ASI. (Again, not that I think this is the case, just considering the validity of the argument.)
The function that is being demonstrated to exist is the lookup table that produces the appropriate actions, yes? The one that is supposed to be implementable by a finite depth circuit?
It seems to make sense that, if hiring an additional employee provides marginal shareholder value, the company will hire additional employees. So, when the company stops hiring employees, it seems reasonable that this is because the marginal benefit of hiring an additional employee is not positive. However, I don't see why this should suggest that the company is likely to hire an employee who provides a marginal value of 0 or negative.
"Number of employees" is not a continuous variable. When hiring an additional employee, how this changes what the marginal benefit of an additional employee can be large enough to change it from positive to negative.
Of course, when making a hiring decision, the actual marginal benefit isn't known; rather, one has beliefs about how likely the hire is to provide each different amount of value. I suppose then one can just read "marginal expected benefit" or whatever wherever I said "marginal benefit". Though I guess there's also something to be said there about appetite-for-risk or whatever.
I guess there's the possibility that:
1) the marginal expected benefit of hiring a certain potential new employee is strictly positive
2) it turns out that the actual marginal benefit of employing that person is negative
3) it turns out to be difficult for the company to determine/notice that they would be better off without that employee
and that this could result in the company accumulating employees/positions it would be better off not having?
Not if the point of the argument is to establish that a superintelligence is compatible with achieving the best possible outcome.
Here is a parody of the issue, which is somewhat unfair and leaves out almost all of your argument, but which I hope makes clear the issue I have in mind:
"Proof that a superintelligence can lead to the best possible outcome: Suppose by some method we achieved the best possible outcome. Then, there's no properties we would want a superintelligence to have beyond that, so let's call however we achieved the best possible outcome, 'a superintelligence'. Then, it is possible to have a superintelligence produce the best possible outcome, QED."
In order for an argument to be compelling for the conclusion "It is possible for a superintelligence to lead to good outcomes.", you need to use a meaning of "a superintelligence" in the argument such that the statement "It is possible for a superintelligence to lead to good outcomes", when interpreted with that meaning of "a superintelligence", produces the meaning you want that sentence to have. If I argue "it is possible for a superintelligence, by which I mean a computer with a clock speed faster than N, to lead to good outcomes", then, even if I convincingly argue that a computer with a clock speed faster than N can lead to good outcomes, that shouldn't convince people that a superintelligence, in the sense that they have in mind (presumably not defined as "a computer with a clock speed faster than N"), is compatible with good outcomes.
Now, in your argument you say that a superintelligence would presumably be some computational process. True enough! If you then showed that some predicate is true of every computational process, you would then be justified in concluding that that predicate is (presumably) true of every possible superintelligence. But instead, you seem to have argued that a predicate is true of some computational process, and then concluded that it is therefore true of some possible superintelligence. This does not follow.
Yes, I knew the cardinalities in question were finite. The point applies regardless though. For any set X, there is no injection from 2^X to X. In the finite case, this is 2^n > n for all natural numbers n.
If there are N possible states, then the number of functions from possible states to {0,1} is 2^N, which is more than N, so there is some function from the set of possible states to {0,1} which is not implemented by any state.
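(As a toy illustration of that counting point, here's a sketch with a made-up assignment of a function to each state, not anything from your argument: whatever function each state implements, the "diagonal" function that flips each state's value on itself is implemented by no state.)

```python
# Toy sketch of the counting/diagonalization point (my own made-up example):
# suppose there are N possible states, and each state s implements some
# function f_s from states to {0, 1}. The diagonal function
# g(t) = 1 - f_t(t) then differs from every f_s at input s, so g is a
# function from states to {0, 1} that no state implements.

N = 4  # any finite number of states

# Hypothetical assignment of an implemented function to each state.
implemented = {s: (lambda t, s=s: (s * t) % 2) for s in range(N)}

def g(t):
    """Diagonal function: disagrees with the function implemented by state t, at input t."""
    return 1 - implemented[t](t)

for s in range(N):
    # g differs from the function implemented by state s (namely at t = s).
    assert any(g(t) != implemented[s](t) for t in range(N))

print("g is a function from states to {0,1} not implemented by any state")
```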
If your argument is, "if it is possible for humans to produce some (verbal or mechanical) output, then it is possible for a program/machine to produce that output", then, that's true I suppose?
I don't see why you specified "finite depth boolean circuit".
While it does seem like the number of states for a given region of space is bounded, I'm not sure how relevant this is. Not all possible functions from states to {0,1} (or to some larger discrete set) are implementable as some possible state, for cardinality reasons.
I guess maybe that's why you mentioned the thing along the lines of "assume that some amount of wiggle room is tolerated"?
One thing you say is that the set of superintelligences is a subset of the set of finite-depth boolean circuits. Later, you say that a lookup table is implementable as a finite-depth boolean circuit, and say that some such lookup table is the aligned superintelligence. But, just because it can be expressed as a finite-depth boolean circuit, it does not follow that it is in the set of possible superintelligences. How are you concluding that such a lookup table constitutes a superintelligence?
Now, I don't think that "aligned superintelligence" is logically impossible, or anything like that, and so I expect that there mathematically-exists a possible aligned-superintelligence (if it isn't logically impossible, then by model existence theorem, there exists a model in which one exists... I guess that doesn't establish that we live in such a model, but whatever).
But I don't find this argument a compelling proof(-sketch).
Yes. I believe that is consistent with what I said.
"not((necessarily, for each thing) : has [x] -> those [x] are such that P_1([x]))"
is equivalent to, " (it is possible that something) has [x], but those [x] are not such that P_1([x])"
"not((necessarily, for each thing) : has [x] such that P_2([x]) -> those [x] are such that P_1([x]))"
is equivalent to "(it is possible that something) has [x], such that P_2([x]), but those [x] are not such that P_1([x])".
The latter implies the former, as (A and B and C) implies (A and C), and so the latter is stronger, not weaker, than the former.
Right?
Doesn't "(has preferences, and those preferences are transitive) does not imply (completeness)" imply (has preferences) does not imply (completeness)" ? Surely if "having preferences" implied completeness, then "having transitive preferences" would also imply completeness?
"Political category" seems, a bit strong? Like, sure, the literal meaning of "processed" is not what people are trying to get at. But, clearly, "those processing steps that are done today in the food production process which were not done N years ago" is a thing we can talk about. (by "processing step" I do not include things like "cleaning the equipment", just steps which are intended to modify the ingredients in some particular way. So, things like, hydrogenation. This also shall not be construed as indicating that I think all steps that were done N years ago were better than steps done today.)
For example, it is not clear to me if once I consider a program that outputs 0101 I will simply ignore other programs that output that same thing plus one bit (e.g. 01010).
No, the thing about prefixes is about what strings encode a program, not about their outputs.
The purpose of this is mostly just to define a prior over possible programs, in a way that conveniently ensures that the total probability assigned over all programs is at most 1. Seeing as it still works for different choices of language, it probably doesn't need to use exactly this way of defining the probabilities, and I think any reasonable distribution over programs will do (at least, after enough observations).
But, while I think another distribution over programs should work, this thing with the prefix-free language is the standard way of doing it, and there are reasons it is nice.
The analogy for a normal programming language would be if no python script was a prefix of any other python script (which isn't true of python scripts, but could be if they were required to end with some "end of program" string)
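(To illustrate the "total probability at most 1" point: for any prefix-free set of program strings, the weights 2^(-length) sum to at most 1; this is Kraft's inequality. A toy check, with a made-up prefix-free set:)

```python
# Toy check of the point above (made-up prefix-free set of "programs"):
# no string in the set is a prefix of any other, and the prior weights
# 2^(-length) sum to at most 1 (Kraft's inequality).

programs = ["00", "010", "011", "10", "110", "1110"]

def is_prefix_free(codes):
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

assert is_prefix_free(programs)

total = sum(2 ** -len(p) for p in programs)
print(total)  # 0.9375, which is <= 1
```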
There will be many different programs which produce the exact same output when run, and will all be considered when doing Solomonoff induction.
The programs in A have 5 bits of Kolmogorov complexity each. The programs in B have 6 bits. The program C has 4
This may be pedantic of me, but I wouldn't call the lengths of the programs, the Kolmogorov complexity of the program. The lengths of the programs are (upper bounds on) the Kolmogorov complexity of the outputs of the programs. The Kolmogorov complexity of a program g, would be the length of the shortest program which outputs the program g, not the length of g.
When you say that program C has 4 bits, is that just a value you picked, or are you obtaining that from somewhere?
Also, for a prefix-free programming language, you can't have 2^5 valid programs of length 5, and 2^6 programs of length 6, because if all possible binary strings of length 5 were valid programs, then no string of length 6 would be a valid program.
This is probably getting away from the core points though
(You could have the programming language be such that, e.g., 00XXXXX outputs the bits XXXXX, and 01XXXXXX outputs the bits XXXXXX, and all other programs start with a 1, with any other program one might want to encode being somehow encoded using some scheme.)
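(Here's a sketch of that hypothetical toy language, just to make the encoding concrete; the specific program strings are the ones I use in the example further down, taking H = 1 and T = 0.)

```python
# Sketch of the hypothetical toy language described above: "00" followed by
# 5 bits is a program that outputs those 5 bits, "01" followed by 6 bits
# outputs those 6 bits, and every other program would have to start with "1"
# (so that the set of programs stays prefix-free).

def run(program: str) -> str:
    if program.startswith("00") and len(program) == 7:
        return program[2:]  # the 5 hardcoded bits
    if program.startswith("01") and len(program) == 8:
        return program[2:]  # the 6 hardcoded bits
    if program.startswith("1"):
        raise NotImplementedError("all remaining programs get some other encoding scheme")
    raise ValueError("not a valid program in this toy language")

# With H = 1 and T = 0:
print(run("0010110"))   # "10110"  i.e. HTHHT
print(run("01101101"))  # "101101" i.e. HTHHTH
```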
the priors for each models will be 2^-5 for model A, 2^-6 for model B and 2^-4 for model C, according to their Kolmogorov complexity?
yeah, the (non-normalized) prior for each will be 2^(-(length of a program which directly encodes a 5 bit string to output)) for the programs which directly encode some 5 bit string and output it, 2^(-(length of a program which directly encodes a 6 bit string to output)) for the programs which directly encode some 6 bit string and output it, and (say) 2^(-4) for program C.
And those likelihoods you gave are all correct for those models.
So, then, the posteriors (prior to normalization)
would be 2^(-(length of a program which directly encodes a 5 bit string to output)) (let's say this is 2^(-7)) for the program that essentially is print("HTHHT"),
2^(-(length of a program which directly encodes a 6 bit string to output)) (let's say this is 2^(-8)) for the programs that essentially are print("HTHHTH") and print("HTHHTT") respectively,
2^(-4) * 2^(-5) = 2^(-9) for model C.
If we want to restrict to these 4 programs, then, adding these up, we get 2^(-7) + 2^(-8) + 2^(-8) + 2^(-9) = 2^(-6) + 2^(-9) = 9 * 2^(-9), and dividing by that, we get
(4/9) chance for the program that hardcodes HTHHT (say, 0010110)
(2/9) chance for the program that hardcodes HTHHTH (say, 01101101)
(2/9) chance for the program that hardcodes HTHHTT (say, 01101100)
(1/9) chance for the program that produces a random 5 bit string. (say, 1000)
So, in this situation, where we've restricted to these programs, the posterior probability distribution for "what is the next bit" would be
(4/9)+(1/9)=(5/9) chance that "there is no next bit" (this case might usually be disregarded/discarded, idk.)
(2/9) chance that the next bit is H
(2/9) chance that the next bit is T
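(For concreteness, here is the same restricted-to-these-four-programs calculation done explicitly; nothing new, just the arithmetic from above.)

```python
# The same calculation as above, restricted to the four programs, done with
# exact fractions so the 4/9, 2/9, 2/9, 1/9 come out exactly.
from fractions import Fraction

# program: (prior 2^-length, likelihood of producing the observed "HTHHT")
models = {
    "hardcodes HTHHT  (0010110)":  (Fraction(1, 2**7), Fraction(1)),
    "hardcodes HTHHTH (01101101)": (Fraction(1, 2**8), Fraction(1)),
    "hardcodes HTHHTT (01101100)": (Fraction(1, 2**8), Fraction(1)),
    "random 5-bit string (1000)":  (Fraction(1, 2**4), Fraction(1, 2**5)),
}

unnormalized = {name: prior * lik for name, (prior, lik) in models.items()}
total = sum(unnormalized.values())          # 9 * 2^(-9)
posterior = {name: w / total for name, w in unnormalized.items()}

for name, p in posterior.items():
    print(name, p)                          # 4/9, 2/9, 2/9, 1/9

# Posterior over "what is the next bit":
p_none = posterior["hardcodes HTHHT  (0010110)"] + posterior["random 5-bit string (1000)"]
p_H = posterior["hardcodes HTHHTH (01101101)"]
p_T = posterior["hardcodes HTHHTT (01101100)"]
print(p_none, p_H, p_T)                     # 5/9, 2/9, 2/9
```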
If you are interested in convincing people who so far think "It is impossible for the existence of an artificial superintelligence to produce desirable outcomes" otherwise, you should have a meaning of "an artificial superintelligence" in mind that is like what they mean by it.
If one suspects that it is impossible for an artificial superintelligence to produce desirable outcomes, then when one considers "among possible futures, the one(s) that have as good or better outcomes than any other possible future", one would suppose that these perhaps are not ones that contain superintelligences. And, so, one would suppose that the computational process that achieves the best outcome, would perhaps not be a superintelligence.
To convince such a person otherwise, you would have to establish that some property that they consider characteristic of something being a superintelligence (which would probably be something like "is more intelligent and competent than any human", for some specified sense of "intelligent") is compatible with achieving good (or maximally good) outcomes.
If someone suspects that [insert name of some not-particularly-well-defined political ideology here] can't ever lead to good outcomes, it would not convince them otherwise to go through the same argument, except with "government procedure" or whatever in place of the actuators and such of the computer program: they would not find this compelling in the slightest! They would object that [insert aforementioned name of a not very well defined ideology] generally has properties P and Q, and that you haven't established that P or Q are compatible with achieving the best that a government can achieve.
This would still be the case if P and Q are somewhat fuzzy concepts without a clear consensus on how to make them precise.
And, they would be right to object to this. As, indeed, the argument does not demonstrate, for even one single particular way of making P or Q precise, that such a precisification is compatible with the government reaching the best results that a government can obtain.
______
To answer your question: for something to count as ASI in a reasonable sense of ASI, then it must be, for some reasonable sense of "more intelligent", more intelligent than any human.
If someone picked a sense of "more intelligent" that I considered reasonable, and demonstrated that having a computer program which is, in that sense, more intelligent than all humans isn't incompatible with achieving the best possible outcomes, then I would say that, for a reasonable sense of "ASI", they have demonstrated that there being an ASI is compatible with achieving the best possible outcome. (I might even say that they have demonstrated that, for that sense of ASI, it is possible for an ASI to be aligned, though for that I think I might require that it be possible for the ASI (in that sense of ASI) to produce the outcome, not just be around at the same time.)