harfe

Comments
harfe · 72

insofar as the simplest & best internal logical-induction market traders have strong beliefs on the subject, they may very well be picking up on something metaphysically fundamental. It's simply the simplest explanation consistent with the facts.

Theorem 4.6.2 in logical induction says that the "probability" of independent statements does not converge to 0 or 1, but to something in-between. So even if a mathematician says that some independent statement feels true (e.g. that some objects are "really out there"), logical induction will tell him to feel uncertain about that.
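As a rough formal gloss of the property being appealed to here (my paraphrase, not a verbatim statement of the theorem): if $\phi$ is a sentence such that neither $\phi$ nor $\neg\phi$ is ever proven by the deductive process, then the logical inductor's prices stay bounded away from both extremes,

$$0 < \liminf_{n\to\infty} \mathbb{P}_n(\phi) \le \limsup_{n\to\infty} \mathbb{P}_n(\phi) < 1.$$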

harfe · 60

A related comment from lukeprog (who works at OP) was posted on the EA Forum. It includes:

However, at present, it remains the case that most of the individuals in the current field of AI governance and policy (whether we fund them or not) are personally left-of-center and have more left-of-center policy networks. Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.

harfe · 40

it's for the sake of maximizing long-term expected value.

Kelly betting does not maximize long-term expected value in all situations. For example, if some bets are offered only once (or only a finite number of times), then you can get better long-term expected utility by sometimes accepting bets with a potential zero-wealth outcome.
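A minimal numerical sketch of this point (the bet parameters are my own illustrative choices, not from the discussion above): a single even-odds bet with win probability $p = 0.6$. Kelly stakes the fraction $2p - 1 = 0.2$ of the bankroll, while the expected-value maximizer stakes everything and accepts a 40% chance of ending with nothing.

```python
# One-shot even-odds bet: compare Kelly staking with all-in staking.
# Illustrative parameters (my assumption): win probability 0.6, bankroll 100.
p, W = 0.6, 100.0

kelly_fraction = 2 * p - 1  # = 0.2 for an even-odds bet
ev_kelly = p * W * (1 + kelly_fraction) + (1 - p) * W * (1 - kelly_fraction)
ev_all_in = p * (2 * W) + (1 - p) * 0.0  # stake the whole bankroll

print(f"Kelly EV:  {ev_kelly:.1f}")   # 104.0
print(f"All-in EV: {ev_all_in:.1f}")  # 120.0 -- higher, despite the risk of ruin
```

With only one bet on offer, the all-in strategy has the higher expected value; Kelly's advantage comes from compounding over an unbounded sequence of bets.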

harfe · 40

This is maybe not the central point, but I note that your definition of "alignment" doesn't precisely capture what I understand "alignment" or a good outcome from AI to be:

‘AGI’ continuing to exist

AGI could be very catastrophic even if it stops existing a year later.

eventually

If AGI makes Earth uninhabitable in a trillion years, that could be a good outcome nonetheless.

ranges that existing humans could survive under

I don't know whether that covers "humans can survive on Mars with a space-suit", but even then, if humans evolve/change to handle situations that they currently do not survive under, that could be part of an acceptable outcome.

harfe · 30

it is the case that most algorithms (as a subset in the hyperspace of all possible algorithms) are already in their maximally most simplified form. Even tiny changes to an algorithm could convert it from 'simplifiable' to 'non-simplifiable'.

This seems wrong to me: For any given algorithm you can find many equivalent but non-simplified algorithms with the same behavior, by adding a statement to the algorithm that does not affect the rest of the algorithm (e.g. adding a line such as foobar1234 = 123 in the middle of a Python program). In fact, I would claim that the majority of Python programs on GitHub are not in their "maximally most simplified form". Maybe you can cite the supposed theorem that claims that most (with a clearly defined "most") algorithms are maximally simplified?
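To make this concrete, here is a toy example of my own along those lines: the two functions below have identical behavior, but the second is not maximally simplified, because the inserted assignment is dead code that never affects the result.

```python
def double(x):
    return 2 * x

def double_with_dead_code(x):
    foobar1234 = 123  # dead store: assigned but never read
    return 2 * x

# Same behavior, longer (non-minimal) program text.
assert double(7) == double_with_dead_code(7)
```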

harfe · 12

This is not a formal definition.

Your English sentence has no apparent connection to mathematical objects, which would be necessary for a rigorous and formal definition.

harfe · 30

I think you are broadly right.

So we're automatically giving ca. higher probability – even before applying the length penalty .

But note that under the Solomonoff prior, you will get another penalty for these programs with DEADCODE. So with this consideration, the weight changes from (for normal ) to (normal plus DEADCODE versions of ), which is not a huge change.
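As a rough sketch of why the change is only modest, under assumptions I am adding here (each program $p$ gets prior weight $2^{-\ell(p)}$, a DEADCODE insertion costs $c > 0$ extra bits, and there is one padded variant per number of insertions): the combined weight of $p$ and all of its DEADCODE-padded versions is a geometric series,

$$2^{-\ell(p)} \sum_{k=0}^{\infty} 2^{-kc} = \frac{2^{-\ell(p)}}{1 - 2^{-c}},$$

i.e. only a constant factor larger than the weight of $p$ alone.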

For your case of "uniform probability until " I think you are right about exponential decay.

harfe · 12

I have doubts that the claim about "theoretically optimal" applies to this case.

Now, you have not provided a precise notion of optimality, so the example below might not apply if you come up with another notion of optimality, assume that voters collude with each other, use a certain decision theory, or make other assumptions... Also, there are some complications because the optimal strategy for each player depends on the strategies of the other players; a typical choice in these cases is to look at Nash equilibria.

Consider three charities A,B,C and two players X,Y who can donate $100 each. Player X has utilities , , for the charities A,B,C. Player Y has utilities , , for the charities A,B,C.

The optimal (as in most overall utility) outcome would be to give everything to charity B. This would require that both players donate everything to charity B. However, this is not a Nash equilibrium, as player X has an incentive to defect by giving to A instead of B and getting more utility.
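A concrete numerical sketch of that defection argument (the utility numbers are hypothetical values I chose to fit this structure, not the originals):

```python
# Hypothetical per-dollar utilities, chosen so that charity B maximizes total
# utility while player X strictly prefers A. Illustrative values only.
utilities = {
    "X": {"A": 1.0, "B": 0.9, "C": 0.0},
    "Y": {"A": 0.0, "B": 0.9, "C": 1.0},
}
budget = 100  # each player donates $100

def payoff(player, alloc_x, alloc_y):
    """Utility `player` derives from the combined donations of both players."""
    total = {c: alloc_x.get(c, 0) + alloc_y.get(c, 0) for c in "ABC"}
    return sum(utilities[player][c] * total[c] for c in "ABC")

cooperate = ({"B": budget}, {"B": budget})  # both give everything to B
x_defects = ({"A": budget}, {"B": budget})  # X switches to A, Y stays on B

print(payoff("X", *cooperate), payoff("X", *x_defects))  # 180.0 190.0 -> X gains by defecting
print(payoff("X", *cooperate) + payoff("Y", *cooperate))  # 360.0 total utility
print(payoff("X", *x_defects) + payoff("Y", *x_defects))  # 280.0 total utility
```

So the everything-to-B allocation maximizes total utility but is not a Nash equilibrium, since X's unilateral deviation raises X's own payoff while lowering the total.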

This specific issue is like the prisoner's dilemma and could be solved with other assumptions/decision theories.

The difference between this scenario and the claims in the literature might be that public goods are not the same as charity, or that the players cannot decide to keep the funds for themselves. But I am not sure about the precise reasons.

Now, I do not have an alternative distribution mechanism ready, so please do not interpret this argument as serious criticism of the overall initiative.

harfe · 40

There is also Project Quine, which is a newer attempt to build a self-replicating 3D printer.
