Of course the evidence will never be very communicable to a wide audience
Why not? The first obvious way that comes to mind: take someone whom the audience trusts to be honest and to judge people correctly, have them go around talking to people who've had the experiences, and have them report back their findings.
From this list
It follows from the assumption that you're not Bill Gates, don't have enough money to actually shift the marginal expected utilities of the charitable investment, and that charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.
the assumption whose violation your argument relies on is that you don't have enough money to shift the marginal expected utilities, when "you" are considered to be controlling the choices...
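To make that concrete with numbers I just made up (two charities with log-shaped utilons-per-funding curves, arbitrary funding levels and budgets), here's a quick sketch of why the assumption matters: a donor too small to move the marginal expected utilities should give everything to whichever charity is ahead, while a Gates-sized donor's optimum splits once the marginals equalize.

```python
import math

# Toy diminishing-returns curves: utilons as a function of each charity's
# total funding.  All constants here are invented purely for illustration.
def utilons_a(funding):
    return 1000 * math.log1p(funding)

def utilons_b(funding):
    return 800 * math.log1p(funding)

CURRENT_A = 10_000   # assumed existing funding of charity A
CURRENT_B = 10_000   # assumed existing funding of charity B

def best_split(budget, steps=100_000):
    """Brute-force the donation split (amount to A, rest to B) that adds
    the most utilons on top of the charities' current funding."""
    best = max(
        (utilons_a(CURRENT_A + a) + utilons_b(CURRENT_B + budget - a), a)
        for a in (budget * i / steps for i in range(steps + 1))
    )
    return best[1], budget - best[1]

# A small donor can't move the marginal utilities, so the optimum is a corner:
print(best_split(1_000))        # -> (1000.0, 0.0): everything to the better charity
# A Gates-sized donor pushes the marginals into balance, so the optimum splits:
print(best_split(10_000_000))   # -> roughly (5.6e6, 4.4e6)
```

The corner solution at the small budget is exactly the "don't split your donations" conclusion; it disappears as soon as the budget is large enough to push the marginal utilities into balance.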
An example of a computation that runs most algorithms is a mathematical formalism called Solomonoff induction.
Solomonoff induction is uncomputable, so it's not a computation. It would be correct if you had written:
An example of a computation that runs most algorithms could be some program that approximates a mathematical formalism called Solomonoff induction.
Also, strictly speaking no real-world computation could run "most" algorithms, since there are infinitely many and it could only run a finite number. It would make more sense to use an expression like "computations that search through the space of all possible algorithms".
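For what it's worth, here's a toy sketch of the shape such a "program that approximates Solomonoff induction" could take. The program space below is deliberately crippled and non-universal (a "program" is just a bit string that outputs itself repeated forever), so it's nothing like the real thing; it only illustrates the enumerate-by-length, weight-by-2^-length, keep-the-consistent-ones recipe:

```python
from itertools import product

def toy_solomonoff_predict(observed, max_len=12):
    """Predict the next bit of `observed` (a '0'/'1' string) by a
    Solomonoff-style weighted vote over a toy program space.

    Toy assumption: a "program" is any bit string p with len(p) <= max_len,
    and "running" it means emitting p repeated forever.  Each program gets
    prior weight 2**-len(p); programs inconsistent with the observations
    are discarded.  Real Solomonoff induction ranges over all programs for
    a universal machine and is uncomputable; this keeps only the shape.
    """
    weights = {'0': 0.0, '1': 0.0}
    n = len(observed)
    for length in range(1, max_len + 1):
        for bits in product('01', repeat=length):
            p = ''.join(bits)
            run = p * (n // length + 2)            # enough output to cover n+1 bits
            if run[:n] == observed:                # consistent with what we've seen?
                weights[run[n]] += 2.0 ** -length  # it votes for its next bit
    total = weights['0'] + weights['1']
    return {bit: w / total for bit, w in weights.items()} if total else None

print(toy_solomonoff_predict('010101010'))  # heavily favours '1'
```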
A function that could evaluate an algorithm and return 0 only if it is not a person is called a nonperson predicate. Some algorithms are obviously not people. For example, any algorithm whose output repeats with a period of less than a gigabyte...
Is this supposed to be about avoiding the algorithms simulating suffering people, or avoiding them doing something dangerous to the outside world? Obviously an algorithm could simulate a person while still having a short output, so I'm thinking it has to be about the s...
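Either way, the specific example in the quote is easy to make concrete. Here's a minimal sketch of a conservative predicate of that shape, using the quote's convention that 0 means "definitely not a person"; the tiny threshold in the demo and the restriction to a finite output are my own simplifications:

```python
def shortest_period(output: bytes) -> int:
    """Length of the shortest repeating unit that generates `output`."""
    n = len(output)
    for p in range(1, n + 1):
        if all(output[i] == output[i % p] for i in range(n)):
            return p
    return n

def nonperson_predicate(output: bytes, max_boring_period: int) -> int:
    """Return 0 only if this output could not have come from a person,
    1 ('don't know') otherwise.  The single rule implemented is the one
    from the quote: output repeating with a short period is no one.
    Erring in the safe direction (saying 'don't know' about a non-person)
    is fine; answering 0 about an actual person is never allowed."""
    return 0 if shortest_period(output) <= max_boring_period else 1

# Tiny threshold purely for demonstration; the quote's threshold is on the
# order of a gigabyte, applied to the algorithm's entire output.
print(nonperson_predicate(b'ab' * 1000, max_boring_period=16))        # 0
print(nonperson_predicate(b'no short repeating structure here', 16))  # 1
```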
So you're searching for "the most important thing", and reason that this is the same as searching for some utility function. Then you note that one reason this question seems worth thinking about is that it's interesting, refer to Schmidhuber's definition of interestingness (which would yield a utility function), and note that that definition is itself interesting; so maybe importance is the same as interestingness, because importance has to be itself important and (Schmidhuberian) interestingness satisfies this requirement by being itself interesting...
But their proteins aren't necessarily making use of the extra computational power. And even if the physics of our universe allows for super-powerful computers, we can still obviously make efficient inferences about our universe.
You're not saying that if I perform QS I should literally anticipate that my next experience will be from the past, are you? (AFAICT, if QS is not allowed, I should just anticipate whatever I would anticipate if I was going to die everywhere, that is, going to lose all of my measure.)
I thought the upshot of Eliezer's metaethics sequence was just that "right" is a fixed abstract computation, not that it's (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops it is.).
(Indeed just saying that it's a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it's some particular computation would be more like just plain ethics.) The upshot does feel kind of underwhelming and obvious. This might be because I just don't remember how confusing the issue looked before I read those posts.
BTW, I've had numerous "wow" moments with philosophical insights, some of which made me spend years considering their implications. For example:
I expect that a correct solution to metaethics would pr...
The hypothesis is that the stars aren't real; they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the Great Filter puzzle (Fermi's paradox) and would imply that if we build an AI, that doesn't necessarily mean it'll get to eat all the stars.
If the SI wanted to fool us, why would it make us see something (a lifeless universe) that would make us infer that we're being fooled by an SI?
used later on identity
Yes.
and decision theory
No, as far as I can tell.
I don't think the ugly duckling theorem (i.e. the observation that any pair of elements from a finite set share exactly half of the powerset elements that they belong to) goes far towards proving that "our values determine our beliefs". Some offhand reasons why I think that: