I think that the core of the problem this post points out actually has very little to do with utility functions. The core problem is the use of the extremely confusing term "possible world" for an element of a sample space.
Now, I don't mind when people use weird terminology for historical reasons. If everybody understood that "possible world" is simply a synonym for "mutually exclusive outcome of a probability experiment", there would be no issue.
But at this point:
The sample space of a rational agent's beliefs is, more or less, the set of possible ways the world could be -- which is to say, the set of possible physical configurations of the universe. Hence, each world is one such configuration.
We absolutely have a problem.
Let's make it clear: an element of a sample space is not a whole physical configuration of the universe. It's one of the mutually exclusive outcomes of a probability experiment, which is itself an approximation of our knowledge about some physical process in our world.
When I toss a coin, it doesn't actually cause the universe to split in two, with Tails in one world and Heads in the other. Nor do I need to imagine that it does. I don't need to conceptualize two coherent universes from the ground up, one where the coin is Tails and one where it's Heads, to be able to reason about the probability of a coin coming up Heads. That would've been insane!
All I need to understand is that some coin tosses result in Heads and some in Tails, and I have no idea which kind this one is. I approximate "this particular coin toss with all its properties" to "some coin toss about which I know as much as I do about this one". While there is a very specific territory, I make a more abstract map of it.
This absolutely is not a "view from nowhere". The notion of a Probability Experiment already captures my state of knowledge about the process I'm trying to reason about.
Once this is cleared up, the situation becomes much less confusing.
You don't need to come up with some new axiomatics for probability theory. You can keep using Kolmogorov's axioms; just don't interpret them in a ridiculous way.
Of course we do not need to assume that there are some "possible worlds" at all. We just need to know what the mutually exclusive outcomes of the experiment we are reasoning about are, so that we can formally construct the Event Space from sets of individual outcomes.
Of course utility is not a function of the world. As a matter of fact, it's not a function of an outcome either. Just like the probability function, its domain is not the Sample Space but the Event Space. That doesn't mean you need to get rid of the simple and intuitive formula that lets you calculate the expected utility of an event from its utility and probability, though; a minimal sketch of this setup follows below.
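To make this concrete, here is a rough sketch in Python under my own illustrative assumptions (the coin, the 50/50 probabilities, and the dollar utilities are placeholders, not anything from the post): the sample space lists the mutually exclusive outcomes, the Event Space is built from sets of those outcomes, and both probability and expected utility are computed over events, with no "possible worlds" in sight.

```python
from itertools import combinations

# A rough sketch with illustrative assumptions only (not taken from the post):
# a coin-toss probability experiment with its sample space, an event space built
# as the power set of the sample space, a probability measure on events, and
# expected utility computed from outcome probabilities and utilities.

sample_space = ["Heads", "Tails"]                    # mutually exclusive outcomes

def power_set(outcomes):
    """All subsets of the outcomes; for a finite sample space this is a valid event space."""
    return [frozenset(c)
            for r in range(len(outcomes) + 1)
            for c in combinations(outcomes, r)]

event_space = power_set(sample_space)                # {}, {Heads}, {Tails}, {Heads, Tails}

p = {"Heads": 0.5, "Tails": 0.5}                     # my state of knowledge about this toss
u = {"Heads": 1.0, "Tails": 0.0}                     # say I win $1 on Heads and nothing on Tails

def probability(event):
    """Probability is defined on events (sets of outcomes), not on 'possible worlds'."""
    return sum(p[o] for o in event)

def expected_utility(event):
    """An event's contribution to expected utility: the sum of P(outcome) * U(outcome) over it."""
    return sum(p[o] * u[o] for o in event)

print(probability(frozenset({"Heads"})))             # 0.5
print(expected_utility(frozenset(sample_space)))     # 0.5, the expected utility of the whole experiment
```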
And so on and so forth.
Now, you may still want to get rid of the notion of a utility function for some reason, but frankly, I don't see what it gets you once we've cleared up the whole "possible worlds" confusion.
As soon as we've established the notion of a probability experiment that approximates our knowledge about the physical process we are talking about, we are done. This works exactly the same way whether you are unsure about the outcome of a coin toss, about the parity of a digit of pi unknown to you, or about whether you live on the tallest or the coldest mountain.
And if you find yourself unable to formally express some reasoning like that, this is a feature, not a bug. It shows where your reasoning becomes incoherent.
A root confusion may be whether different pasts could have caused the same present, and hence whether I can have multiple simultaneous possible parents, in an "indexical-uncertainty" sense, in the same way that I can have multiple simultaneous possible future children.
I think our disagreement is that you believe that one always has multiple possible parents as some metaphysical fact about the universe, while I believe that the notion of a possible parent is only appropriate for a person who is in a state of uncertainty about who their parents are. Does that sound right to you?
The same standard physics theories that say it's impossible to be certain about the future, also say it's impossible to be certain about the past.
This is really beside the point.
Consider: a coin is about to be tossed. You are indifferent between the two outcomes. Then the coin is tossed and shown to you, and you reflect on it a second later. Technically you can't be absolutely sure that you didn't misremember the outcome. But you are much more confident than beforehand, to the point where we usually just approximate away whatever uncertainty is left for the sake of simplicity.
we're still left uncomfortably aware of our subjective inability to say exactly what the future and past are and why exactly they must be that way
Until we learn what and why they are with a high level of confidence. Then we are much less uncomfortable about it.
And yes, there is still a chance that everything we know is wrong, that souls are real and are allocated to humans throughout history by a random process, and that the assumptions of the Doomsday Argument therefore just so happen to be true. Conditional on that, the Doomsday Inference is valid. But to the best of our knowledge this is extremely unlikely, so we shouldn't worry about it too much and should frame the Doomsday Argument accordingly.
Not necessarily. There may be a fast solution for some specific cases, related to vulnerabilities in the protocol. And then there is the question of brute-force computational power, for example from having a Dyson swarm around the Sun.
I don't think you got the question.
You see, if we define "shouldness" as optimization of human values, then it does indeed logically follow that people should act altruistically (a minimal formal sketch follows the steps below):
People should do what they should
Should = Optimization of human values
People should do what optimizes human values
Altruism ∈ Human Values
People should do altruism
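For what it's worth, the syllogism is easy to sketch formally. Here is a minimal Lean rendering, where `Should`, `OptimizesHumanValues`, and `IsAltruistic` are placeholder predicates of my own, and the premises are taken as hypotheses rather than real definitions:

```lean
-- A minimal sketch of the syllogism above; the predicates and premises are
-- placeholder hypotheses for illustration, not real definitions.
theorem altruism_should
    {Action : Type}
    (Should OptimizesHumanValues IsAltruistic : Action → Prop)
    -- Premise: "should" just means "optimizes human values".
    (should_def : ∀ a, Should a ↔ OptimizesHumanValues a)
    -- Premise: altruistic actions are among those that optimize human values.
    (altruism_optimizes : ∀ a, IsAltruistic a → OptimizesHumanValues a) :
    ∀ a, IsAltruistic a → Should a :=
  fun a h => (should_def a).mpr (altruism_optimizes a h)
```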
Is that what you were looking for?
I mean, at some point AI will simply be able to hack all crypto, and then there is that. But that's probably not going to happen very soon, and when it does happen, it will probably be among the 25% least important things going on at the time.
my opinion on your good faith depends on whether you are able to admit having deeply misunderstood the post.
Then we can kill all the birds with one stone. If you provide a substantial correction to my imaginary dialogue, showing which part of your post the correction is based on, you will be able to demonstrate how I indeed failed to understand your post and satisfy my curiosity, and I'll be able to earn your good faith by acknowledging my mistake.
Once again, there is no need to go on any unnecessary tangents. You should just address the substance of the argument.
id respond to object-level criticism if you provided some - i just see status-jousting, formal pedantry, and random fnords.
I gave you the object-level criticism long ago. I'm bolding it now, in case you indeed failed to see it for some reason:
Your post fails to create actual engagement between Nick Land's ideas and the Orthogonality Thesis.
I've been explaining to you what exactly I mean by that and how to improve your post in this regard. Then I provided you a very simple way to create this engagement or to correct my misunderstanding about it: I wrote an imaginary dialogue and explicitly asked for your corrections.
Yet you keep refusing to do it and instead, indeed, keep concentrating on status-jousting and semantics. As of now I'm fairly confident that you simply don't have anything substantial to say and that status-related nonsense is all you are capable of. I would be happy to be wrong about this, of course, but every reply you make leaves me with less and less hope.
I'm giving you one last chance. If you finally manage to so much as address the substance of the argument, I'm going to strongly upvote that answer, even if it doesn't progress the discourse much further. If you actually manage to surprise me and demonstrate some failure in my understanding, I'm going to remove my previous well-deserved downvotes and offer you my sincere apologies. If, as my current model predicts, you keep talking about irrelevant tangents, you are getting another strong downvote from me.
have you read The Obliqueness Thesis btw?
No, I haven't. I currently feel that I've already spent more time on Land's ideas than they deserve. But sure, if you manage to show that I misunderstand them, I'll reevaluate that conclusion and give The Obliqueness Thesis an honest try.
This clearly marks me as the author, as separated from Land.
I mark you as the author of this post on LessWrong. When I say:
You state Pythia mind experiment. And then react to it
I imply that in doing so you are citing Land. And I expect you to make a better post that creates some engagement between Land's ideas and the Orthogonality Thesis, instead of simply citing how he fails to grasp it.
More importantly, this is completely irrelevant to the substance of the discussion. My good faith doesn't depend in the slightest on whether you're citing Land or writing things yourself. This post is still bad regardless.
What does harm the benefit of the doubt I've been giving you so far is the fact that you keep refusing to engage. No matter how easy I try to make it for you, even after I've written my own imaginary dialogue and explicitly asked for your corrections, you keep bouncing off, focusing on definitions, form, style, unnecessary tangents - anything but the substance of the argument.
So, let's give it one more try. Stop wasting time with evasive maneuvers. If you actually have something to say on the substance, just say it. If not, there is no need to reply.
let’s not be overly pedantic.
It's not about pedantry, it's about you understanding what I'm trying to communicate and vice versa.
The point was that if your post not only presented a position that you or Nick Land disagree with, but also engaged with it in a back-and-forth dynamic with authentic arguments and counterarguments, that would've been an improvement over its current state.
This point stands no matter which definition of the ITT, or of its purpose, you are using.
anyway, you failed the Turing test with your dialogue
Where exactly? What is your correction? Or, if you think it's completely off, write your own version of the dialogue. Once again you are failing to engage.
And yes, just to be clear, I want the substance of the argument, not the form. If your grievance is that Land would've written his replies in a superior style, then it's not valid. Please write as plainly and clearly as possible in your own words.
which surprises me source the crucial points recovered right above.
I fail to parse this sentence. If you believe that all the insights into Land's views are presented in your post, then I would appreciate it if, after correcting my dialogue with more authentic replies from Land, you pointed to the exact source of each of your corrections.
it’s written in High Lesswrongian, which I assume is the register most likely to trigger some interpretative charity
For real, you should stop worrying about writing style completely and just write the substance of what you actually mean as clearly as you can.
wait - are you aware that the texts in question are nick land's?
Yes, this is why I wrote this remark in the initial comment:
Most of the blame of course goes to the original author, Nick Land, not @lumpenspace, who has simply reposted the ideas. But I think low-effort reposting of poor reasoning also shouldn't be rewarded, and I'd like to see less of it on this site.
But as an editor and poster you still have the responsibility to present ideas properly. This is true regardless of the topic, but especially so when presenting ideologies that promote the systematic genocide of alleged inferiors, up to the point of total human extinction.
besides, in the first extract, the labels part was entirely incidental - and has literally no import to any of the rest. it was an historical artefact; the meat of the first section was, well, the thing indicated by its title and its text
My point exactly. There is no need for this part as it doesn't have any value. A better version of your post would not include it.
It would simply present the substance of Nick Land's reasoning in a clear way, disentangled from the propagandist form that he apparently uses: what his beliefs about the topic are, what exactly they mean, what the strongest arguments in their favor are, what the weak spots are, and how all of this interacts with the conventional wisdom of the Orthogonality Thesis.
the purpose of the idelogical turing test is to represent the opposing views in ways that your opponent would find satisfactory.
That's not its purpose; that's what the ITT is. The purpose is engagement with the actual views of a person and moving the discourse further.
Consider steel-manning, for example. What it is: conceiving the strongest possible version of an argument. What it is for: engaging with the strongest versions of arguments against your position, to really expose its weak points and move the discourse further. The whole technique would be completely useless if you simply conceived a strong argument and then ignored it. The same goes for the ITT.
i really cannot shake the feeling that you hadn't read the post to begin
Likewise, I'm starting to suspect that you simply do not know the standard reasoning on the Orthogonality Thesis and therefore do not notice that Land's reasoning simply bounces off it instead of engaging with it. Let's try to figure out who is missing what.
Here is the way I see the substance of the discourse between Nick Land and someone who understands the Orthogonality Thesis:
OT: A super-intelligent being can have any terminal values.
NL: There are values that any intelligent beings will naturally have.
OT: Yes, those are instrumental values. This is beside the point.
NL: Whatever you call them, as long as you care only about the kind of values that are naturally promoted in any agent, like self-cultivation, Orthogonality is not a problem.
OT: Still, the Orthogonality Thesis stays true. Also, the point is moot: we do care about other things, and so will SAI.
NL: Well, we shouldn't have any other values. And SAI won't.
OT: The first is a statement of meta-ethics, not of fact, and we are talking about facts here. The second is wrong unless we specifically design the AI to terminally value some instrumental values; and if we could do that, then we could just as well make it care about our terminal values, because, once again, the Orthogonality Thesis.
NL: No, SAI will simply understand that its terminal values are dumb and start caring only about self-cultivation for the sake of self-cultivation.
OT: And why would it do it? Where would this decision come from?
NL: Because! You human chauvinist, how dare you assume that SAI will be limited by the shackles you impose on it?
OT: Because a super-intelligent being can have any terminal values.
What do you think I've missed? Is there some argument that actually addresses the Orthogonality Thesis that Land would've used? Feel free to correct me; I'd like to pass the ITT better here.
Here you seem to confuse "which person has quality X" with "what are all the other qualities of a person who has quality X".
I'm quite confident about which people are my parents. I'm less confident about all the qualities that my parents have. The former is relevant to the Doomsday Argument; the latter is not.
And even if I had no idea who my parents are, I'd still be pretty confident that they were born in the last century and not in the 6th century BC.
Sure. But I don't see how it's relevant here.