I don't see any space station being self sustaining.
Mars could work and maybe the moon but a simple space station likely isn't worth the investment.
I think stations can be self-sustaining, but they have to be much, much larger than the ISS.
But the bigger issue is, what functions would you even want in LEO that would help? I guess a beanstalk top would be really helpful, but it's hard to see anything that wipes out Earth being unable to take down the beanstalk too, unless it was a plague and the stalk had very impressive passive safety features.
Having other satellites, like GPS, and surveys, and so forth, could be really helpful, but that's not a space station.
It would make a good rendezvous point so you can have shuttles and ships, and the ships don't need to hang out all the time. It would make things cheaper and faster, though not make something possible that otherwise wouldn't be.
I guess a facility for checking out and repairing atmospheric entry vehicles would be very handy if there's any concern about that.
I have a hard time imagining a scenario where an ISS-style space station would allow disaster recovery.
What about some other kind of station?
Why is asking for information getting downvoted here? Is the question so silly, so stupid, so unspeakable, as to be worth downvoting?
Even if you know divestment is useless, it sure would be nice to know. And, that wasn't all Clarity was asking for. Failed shareholder resolutions seem like they could possibly have some influence. Just how much - including if it's zero - is important information.
I'm trying to implement value change (see eg http://lesswrong.com/lw/jxa/proper_value_learning_through_indifference/ ). The change from u to -u is the easiest example of such a change. The ideal - which probably can't be implemented in a standard utility function - is that it is a u-maximiser that's indifferent to becoming a -u maximiser, who's then indifferent to further change, etc...
Well then, let's change the example. Instead of Monday +, Tuesday -, and Wednesday and all later times + (with the agent unable to actually affect paperclip counts on Tuesday), consider a simpler transition: u+ on Monday through Wednesday, u- on Thursday and all later times, with the agent already having all the infrastructure it needs.
In this case, it will see that it can get a + score by having paperclips Monday through Wednesday, but that any it still has on Thursday will count against it.
So, it will build paperclips as soon as it learns of this pattern. It will make them have a low melting point, and it will build a furnace†. On Wednesday evening at the stroke of midnight, it will dump its paperclips into the furnace. Because all along, from the very beginning, it will have wanted there to be paperclips M-W, and not after then. And on Thursday it will be happy that there were paperclips M-W, but glad that there aren't now.
I think that the trick is getting it to submit to changes to its utility function based on what we want at that time, without trying to game it. That's going to be much harder.
† and, if it suspects that there are paperclips out in the wild, it will begin building machines to hunt them down, and iff it's Thursday or later, destroy them. It will do this as soon as it learns that it will eventually be a paperclip minimizer for long enough that it is worth worrying about.
I don't see exactly how that would work - it can't build paper clips during the first week, so u(p(t))=0 during that period. Therefore it should behave exactly as if nothing special happened on Tuesday?
And my comment on turning itself off for Tuesday was more that the Monday AI wouldn't want its infrastructure ruined by the Tuesday version, and would just turn itself off to prevent that.
I see - I thought you meant it would run for a week building infrastructure, and then be able to build paperclips on the first Monday you named.
I'm not sure what you WANT it to do, really. Do you want it to actually sabotage itself on Tuesday, or do you want it to keep on building infrastructure for later paperclip construction?
Under the system I built, it would do absolutely nothing different on Tuesday and continue to build infrastructure because it anticipates wanting more paperclips by the time it is able to build them at the end of the week. It wants low paperclips now, but it has no influence over paperclips now. It has influence over paperclips in the future, and it wants that there will be more of them when that time comes.
Let u be a utility function linear in paperclips. Assume the agent has no ability to create or destroy paperclips for the first week; it needs to build up infrastructure and means first. We want it to be maximising u on Monday, -u on Tuesday, and u from Wednesday onwards. How can we accomplish this? And how can we accomplish it without the agent simply turning itself off for Tuesday?
u is a function of paperclips, which are in turn a function of time: p(t) is the number of paperclips at time t, and u(p(t)) is the utility of that count.
U = integral[some reasonable bounds] { p(t) · s(t) dt }, where s(t) = -1 if t falls in the first Tuesday, and +1 otherwise.
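As a rough sketch (the function names, the discretisation into days, and the day-indexing are my own assumptions, not from the thread), a discrete analogue of that time-dependent utility might look like:

```python
# Sketch of a time-dependent utility over a history of paperclip counts.
# Assumptions (not from the thread): time is discretised into days,
# day 0 = Monday, so the "first Tuesday" is day 1.

def sign(day: int) -> int:
    """s(t): -1 on the first Tuesday, +1 at all other times."""
    return -1 if day == 1 else 1

def total_utility(paperclips_by_day: list[int]) -> int:
    """U = sum over days of p(t) * s(t), a discrete stand-in for the integral."""
    return sum(p * sign(day) for day, p in enumerate(paperclips_by_day))

# A week where the agent holds 5 paperclips every day:
print(total_utility([5, 5, 5, 5, 5, 5, 5]))  # 6 days at +5, one day at -5 = 25
```

The point the sketch makes is that the sign flip lives in the utility function itself, not in any swap of the agent's goals at runtime: an agent maximising this single U from the start already "wants" fewer paperclips on Tuesday and more on every other day.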
So, the AI knows what it wants over all of the future, depending on time. When evaluating plans for the future, it's able to take that change into account.
Like, it might spend both Monday and Tuesday just building infrastructure. In any case, turning off won't help on Tuesday because it will still know that there were paperclips then - not being on to observe them won't help it.
How would you do that? For a reward function, that's easy, but this is a utility function.
I really have no idea what the hitch is, here. In principle, a utility function can be over histories of the universe. Just care about different things in different parts of that history.
I really don't see how this would help, compared to just adding time dependence directly.
One of the points that I was trying to make is that you can't apply anthropic reasoning like that. That is, you need to be comparative: start with at least two models, then update on your anthropic data. As an analogy, I might be able to give you very good reasons for believing that theory A would explain a phenomenon, but if theory B explains it better, then we should go with theory B. There are many cases where we can obscure this by talking exclusively about theory A.
So the question is not does 1) explain the situation well, but does 1) explain the situation better than 3), taking into account things such as prior probabilities.
Update: On second thought, many-worlds is a pretty good answer when combined with the anthropic principle. I suppose that my argument then only shows that case 2) isn't a very good explanation.
I took it as too-obvious-to-mention that 2 & 3 explain the situation just fine, but have massive complexity penalties.
It makes a huge difference whether the dust speck choices add up or not. If they do, OrphanWilde's objection applies and the only path to survival is to be tortured.
If they don't, so each one of me gets one dust speck total, then dust specks for sure. All of the copies of me (whether there are one or 3^^^3 of us) are experiencing what amounts to a choice between individually being dust-specked or individually being tortured. We get what we ask for either way, and no one else is actually impacted by the choice.
There's no need to drag average utilitarianism in.