I agree tech beyond human comprehension is possible. I’m just giving an intuition as to why a lot of radically powerful tech likely still lies within human comprehension. 500 years [1] of progress is likely to still be within comprehension, and so are 50 years or 5 years.
The most complex tech that exists in the universe is arguably the human brain itself, and we could probably understand a good fraction of its workings too, if someone explained it to us.
The important point here being that the AI has to want to explain it to us in simple terms.
If you get a 16th-century human to visit a nuclear facility for a day, that’s not enough information for them to figure out what it does or how it works. You need to provide them with textbooks that break down each of the important concepts.
[1] Society in 2000 is explainable to society in 1500, but society in 2500 may or may not be explainable to society in 2000, because of acceleration.
Why does this matter? To borrow a Yudkowsky-ish example, maybe you can take a 16th-century human (before Newtonian physics was invented, after guns were invented) and explain to him how a nuclear bomb works. This doesn't matter for predicting the outcome of a hypothetical war between 16th-century Britain and 21st-century USA.
ASI inventions can be big surprises and yet be things that you could understand if someone taught you.
We could probably understand how a von Neumann probe or an anti-aging cure worked too, if someone taught us.
Suppose you are trying to figure out a function U(x, y, z | a, b, c), where x, y, z are all scalar variables and a, b, c are all constants.
If you knew this function's values at a few sample points, you could figure out a good approximation of it. Let's say you knew (suppressing z, b, c for simplicity):
U(x, y, a=0) = x
U(x, y, a=1) = x
U(x, y, a=2) = y
U(x, y, a=3) = y
You could now guess U(x, y, a) = x if a < 1.5, y if a > 1.5.
You will not be able to get a good approximation if you do not know enough sample points.
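To make the guessing step concrete, here's a minimal sketch in Python. The sample points are the ones above; the midpoint rule for picking the 1.5 cutoff is my own illustrative assumption, nothing deeper:

```python
# Known samples: for each constant a, which input U(x, y, a) returns.
samples = {0: "x", 1: "x", 2: "y", 3: "y"}

def fit_threshold(samples):
    """Guess a cutoff on a: below it U returns x, above it U returns y."""
    xs = [a for a, v in samples.items() if v == "x"]
    ys = [a for a, v in samples.items() if v == "y"]
    return (max(xs) + min(ys)) / 2  # midpoint between the two regimes

threshold = fit_threshold(samples)  # 1.5 for the samples above

def u_approx(x, y, a):
    """Approximation of U built only from the sampled points."""
    return x if a < threshold else y

print(u_approx(10, 20, 0.7))  # 10, the "x" regime
print(u_approx(10, 20, 2.9))  # 20, the "y" regime
```

With only the a=0 and a=1 samples, the fit would have guessed "always x" and missed the second regime entirely; that's the "not enough sample points" failure mode.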
This is a comment about morality. x, y, z are an agent's multiple, possibly-conflicting values, and a, b, c are information about the agent's environment. You lack data about how your own mind will react to hypothetical situations you have not faced. At best you can extrapolate from historical data on the minds of other people, which are different from yours. A bigger and more trustworthy dataset would help solve this.
Update: I read your examples and I honestly don’t see how any of these 3 people would be better off, by their own idea of what “better off” means, if they were less open or less truthful.
P.S. Discussing anonymously is easier if you’re not confident you can handle the social repercussions of discussing it under your real name. I agree that morality is social dark matter and that it’s difficult to argue in favour of pro-violence, pro-deception, etc. positions under your real name.
Update: I'll be more specific. There's a "power buys you distance from the crime" phenomenon going on if you're okay with using Google Maps data acquired about people's restaurant takeout orders, but not okay with asking the restaurant employees yourself or getting yourself hired at the restaurant.
If a new AI model comes out that's better than the previous one and it doesn't shorten your timelines, that likely means either your current or your previous timelines were inaccurate.
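A toy Bayes-rule illustration of why, with made-up numbers: if your previous timeline already assigned some probability to "a clearly better model ships soon", then Bayes' rule pins down how much seeing one should move you. The posterior stays put only if both views predicted the release equally well, i.e. if the release carried no information about timelines:

```python
# Toy Bayesian update on "short timelines" vs "long timelines".
# All numbers are invented for illustration.

p_short = 0.3                  # prior: short timelines
p_long = 1 - p_short           # prior: long timelines

# Likelihood of "a clearly better model ships this year" under each view.
p_release_given_short = 0.9
p_release_given_long = 0.4

# Observe the release; apply Bayes' rule.
evidence = p_short * p_release_given_short + p_long * p_release_given_long
posterior_short = p_short * p_release_given_short / evidence

print(round(posterior_short, 3))  # 0.491 > 0.3: the release should shorten timelines
```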