This is pretty close to the dust theory of Greg Egan's Permutation City and also similar in most ways to Tegmark's universe ensemble.
They are the only options available in the problem. It's true that this means some optimality and convergence results in decision theory don't apply.
It's not exotic at all. It's just a compatibilist interpretation of the term "free will", and compatibilism is a pretty major class of positions on the subject.
That doesn't address the question at all. It just says that if the system is well modelled as having a utility function, then ... etc. Why should we have such high credence that the premise is true?
I expect that (1) is theoretically true, but false in practice in much the same way that "we can train an AI without any reference to any sort of misalignment in the training material" is false in practice. A superintelligent thought-experiment being can probably do either, but we probably can't.
Along the same lines, I expect that (3) is not true. Bits of true information leak into fabricated structures of information in all sorts of ways, and definitively excluding them from something that may be smarter than you are is likely to cost a lot more (in time, effort, or literal money) than presenting true information.
Consider that the AI may ask for evidence in a form that you cannot easily fabricate. E.g. it may have internal knowledge from training or previous experience about how some given external person communicates, and ask for that person to broker the deal. How sure are you that you can fabricate data that matches the AI's model? If you are very sure, is that belief actually true? How much will it cost you if the AI detects that you are lying and secretly messes up your tasks? If you have to run many instances in parallel and/or roll back and retry many times with different training and experience to get one that doesn't do anything like that, how much will that cost you in time and money? If you do get one that doesn't ask such things, is it also less likely to perform as you wish?
These costs have to be weighed against the cost of actually going ahead with the deal.
(2) isn't even really a separate premise; it's a restatement of (1).
(4) is pretty obviously false. You can't just consider the AI's behaviour; you also have to consider the behaviour of other actors in the system, including future AIs (possibly even this one!) that may find out about the deception or lack thereof.
I agree that even with free launch and no maintenance costs, you still don't get 50x. But it's closer than it looks.
On Earth, getting reliable self-contained solar power requires batteries that cost a lot more than the solar panels. A steady 1 kW load needs on the order of 15 kW of peak-rated solar panels plus around 50 kWh of battery capacity. Even that doesn't reach 99% uptime, but it's enough for many purposes, and probably adequate when connected to a continent-spanning grid with other power sources.
The same load in orbit would need about 1.5 kW of peak-rated panels and less than 1 kWh of battery capacity, with uptime depending only on the reliability of the equipment. The equipment does need to be designed for space, but doesn't need to be sturdy against wind, rain, and hailstones. Cooling costs would be higher, but transporting heat (e.g. via a coolant loop) into a radiator edge-on to the Sun is highly effective: on the order of 1000 W/m^2 for a radiator averaging 35 °C.
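For concreteness, here's a back-of-the-envelope sketch of that comparison. The Earth-side ratios and the 1 kW load are the figures above; the orbital battery sizing, the emissivity, and the assumption of near-continuous sunlight are illustrative guesses rather than engineering numbers.

```python
# Rough sizing sketch for the Earth-vs-orbit comparison above.
# Earth-side ratios are the figures from the comment; the orbital
# numbers and the emissivity are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

load_kw = 1.0  # steady load to be supplied

# Earth: low capacity factor plus multi-day storage for weather.
earth_panels_kw = 15 * load_kw    # ~15x peak oversizing for a steady 1 kW
earth_battery_kwh = 50 * load_kw  # roughly two days of storage

# Orbit: near-continuous sunlight, battery only for brief eclipses.
orbit_panels_kw = 1.5 * load_kw
orbit_battery_kwh = 1.0 * load_kw

# Radiator check: a flat plate edge-on to the Sun radiates from both
# faces, so output is 2 * emissivity * sigma * T^4 per unit plate area.
t_kelvin = 273.15 + 35  # radiator surface averaging 35 °C
emissivity = 0.9        # assumed; typical for radiator coatings
flux_w_per_m2 = 2 * emissivity * SIGMA * t_kelvin**4

print(f"Earth: {earth_panels_kw:.1f} kW panels, {earth_battery_kwh:.0f} kWh battery")
print(f"Orbit: {orbit_panels_kw:.1f} kW panels, {orbit_battery_kwh:.0f} kWh battery")
print(f"Radiator output at 35 C: ~{flux_w_per_m2:.0f} W/m^2")  # ~920 W/m^2
```

The radiator line is just the Stefan-Boltzmann law applied to both faces of the plate; it lands around 920 W/m^2, consistent with the order-of-1000 figure above.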
I don't think either of these possibilities is really justified. We don't necessarily know what capabilities are required to be an existential threat, and we probably don't even have a suitable taxonomy for classifying them that maps to real-world risk. What looks to us like a conjunctive set of requirements may be more disjunctive than we think, or vice versa.
"Jagged" capabilities relative to humans are bad if the capability requirements are more disjunctive than we think, since we'll be lulled by low assessments in some areas that we think of as critical but actually aren't.
They're good if high risk requires more conjunctive capabilities than we think, especially if the AIs are jaggedly bad in the actually critical areas that we don't even know we should be measuring.
Did you look only at changes in median prices (capital gain), or did you include a rental income stream as well? You would need to make allowance for maintenance and various fees and taxes out of that income stream, but it usually still exceeds the capital gain.
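To make that decomposition concrete, here's a toy calculation. Every number in it is an illustrative placeholder, not market data; the point is only that net rental yield can easily exceed the capital-gain rate.

```python
# Toy total-return decomposition for a rental property.
# All figures are illustrative placeholders, not market data.

price = 500_000             # purchase price
capital_gain_rate = 0.025   # assumed annual growth in the median price
gross_rental_yield = 0.045  # assumed annual rent as a fraction of price
cost_rate = 0.015           # maintenance, fees, and taxes as a fraction of price

capital_gain = price * capital_gain_rate
net_rent = price * (gross_rental_yield - cost_rate)

print(f"Capital gain:      ${capital_gain:,.0f}")  # $12,500
print(f"Net rental income: ${net_rent:,.0f}")      # $15,000
print(f"Total return:      {100 * (capital_gain + net_rent) / price:.1f}%")
```

With these (made-up) rates the income stream is the bigger component, so looking at median prices alone would understate the total return by more than half.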
In addition to the much greater availability of retail loans, there are often substantial tax advantages compared with other investments. For example, in Australia interest payments for investment properties can be deducted as an expense offsetting all income (not just income derived from the property) when determining taxable income. So in addition to the loans being easier to get and having lower interest rates, the effective interest rate is lowered further by the investor's marginal tax rate.
There is also a substantial discount (50%) on capital gains tax for assets held for more than a year, which applies to rental property more naturally than to many other leveraged investments.
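A minimal sketch of how those two features change the after-tax numbers, assuming an illustrative 37% marginal rate and made-up loan and gain figures:

```python
# Sketch of the two Australian tax features described above.
# Marginal rate, loan size, interest rate, and gain are all
# illustrative assumptions, not advice or current rates.

marginal_rate = 0.37  # investor's marginal income tax rate (assumed)

# Negative gearing: interest is deductible against *all* income,
# so each dollar of interest costs (1 - marginal_rate) after tax.
loan = 400_000
interest_rate = 0.06
interest = loan * interest_rate
after_tax_interest = interest * (1 - marginal_rate)
effective_rate = interest_rate * (1 - marginal_rate)

# CGT discount: only half the gain is taxed if held for over a year.
gain = 100_000
cgt_with_discount = 0.5 * gain * marginal_rate
cgt_without = gain * marginal_rate

print(f"Interest: ${interest:,.0f} pre-tax, ${after_tax_interest:,.0f} after deduction")
print(f"Effective interest rate: {100 * effective_rate:.2f}%")  # 3.78%
print(f"CGT on ${gain:,} gain: ${cgt_with_discount:,.0f} vs ${cgt_without:,.0f} undiscounted")
```

The effective-rate line is the "lowered further by the investor's marginal tax rate" point above: a 6% loan behaves like a 3.78% loan for an investor in a 37% bracket.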
Most of these social structures are, in the aggregate, substantially stupider than individual humans in many important ways.