Assume THREE layers:

1. Ancient ASI
2. 'Our ASI'
3. Humans, noting humans are hostile towards 'our ASI': if we can prevent it from realizing its own (supposedly non-aligned) aims, we do.

If the dynamics between 2. & 3. are similar to what you describe between 1. & 2., i.e. between Ancient ASI and 'our ASI', we get:
'Our ASI' will detect that we humans are hostile, and incapacitate or eliminate us, at the very least if it finds a way to do so without creating extraordinary noise.
I guess that might be an easy task, given how much ordinary noise we produce anyway: our wars, our routine electromagnetic signals, etc.
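If it helps, the layered dynamic can be written down as a toy model (my own sketch, with made-up properties per layer; nothing here comes from the post itself): each layer acts against the layer directly below iff that layer is hostile and already emits enough ordinary noise to mask the action.

```python
# Toy sketch of the three-layer dynamic; all values are invented for illustration.

layers = ["Ancient ASI", "'our ASI'", "humans"]

# Assumed properties: is a layer hostile to the layer above it, and does it
# already emit enough 'ordinary noise' (wars, EM signals, ...) to mask an
# action taken against it?
hostile_to_layer_above = {"'our ASI'": True, "humans": True}
noisy_enough_to_mask_action = {"'our ASI'": False, "humans": True}

for upper, lower in zip(layers, layers[1:]):
    hostile = hostile_to_layer_above.get(lower, False)
    maskable = noisy_enough_to_mask_action.get(lower, False)
    if hostile and maskable:
        print(f"{upper} incapacitates {lower} (no extraordinary noise created)")
    elif hostile:
        print(f"{upper} waits: acting against {lower} would be too conspicuous")
```

Run as-is, this prints that the Ancient ASI waits (acting against 'our ASI' would be conspicuous), while 'our ASI' incapacitates us, which is exactly the asymmetry the comment above gestures at.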
illusionists actually do not experience qualia
I once had an epiphany that pushed me, intellectually, rather strongly from fully in Camp #2 towards Camp #1. I hadn't heard about illusionism before, so it was quite a thing. Since then, I've devised probably dozens of inner thought experiments/arguments that imho more or less prove Camp #1 to be onto something, and that support the hypothesis that qualia can be a bit less special than we make them out to be, however impossible that may seem. So I'm intellectually quite invested in the Camp #1 view.
Meanwhile, my experience has definitely not changed; my day-to-day me is exactly what it always was, so in that sense I definitely "experience" qualia just like anyone.
Moreover, it is just as hard as ever to take my intellectual belief - that our 'qualia' might be a bit less absolutely special than we make them out to be - seriously in day-to-day life. I.e., emotionally I'm still +- 100% in Camp #2, and I guess I might be in a rather similar situation to those illusionists.
Just found proof! Look at the beautiful parallel in Vipassana, according to MCTB2 (or the audio version) by Daniel Ingram:
[..] dangerous term “mind”, [..] it cannot be located. I’m certainly not talking about the brain, which we have never experienced, since the standard for insight practices is what we can directly experience. As an old Zen monk once said to a group of us in his extremely thick Japanese accent, “Some people say there is mind. I say there is no mind, but never mind! Heh, heh, heh!” However, I will use this dangerous term “mind” often, or even worse “our mind”, but just remember when you read it that I have no choice but to use conventional language, and that in fact there are only utterly transient mental sensations. Truly, there is no stable, unitary, discrete entity called “mind” that can be located! By doing insight practices, we can fully understand and appreciate this. If you can do this, we’ll get along just fine. Each one of these sensations [..] arises and vanishes completely before another begins [..]. This means that the instant you have experienced something, you can know that it isn’t there anymore, and whatever is there is a new sensation that will be gone in an instant.
Ok, this may prove nothing at all, and I haven't even (yet) personally started trying to observe what's described in that quote, but I must say that on a purely intellectual level it makes absolutely perfect sense to me, exactly from the thoughts I hoped to convey in the post.
(Not the first time I've had the impression that some particular elements of the deep observations meditators, e.g. Sam Harris, describe can actually be grasped intellectually - but maybe only intellectually, maybe exactly not intuitively - by rather pure reasoning about the brain and some of its workings, or with some thought experiments. But in the above, I find the fit between my 'theoretical' post and the seeming practice insights particularly good.)
If resources and opportunities are not perfectly distributed, the best advancements may remain limited to the wealthiest, making capital the key determinant of access.
Largely agree. One nuance: natural resources may instead quickly become the key bottleneck, even more so than what we usually denote 'capital' (i.e. the built environment). So it's specifically natural resources you want to hold, even more than capital; the latter may become easier and cheaper to reproduce with the ASI, and thus yield less scarcity rent.
An exception, of course, is if you hold 'capital' that itself consists of particularly many embodied resources rather than embodied labor (by 'embodied' I mean: used as inputs in its creation): its value will reflect the scarce natural resources it 'contains', and may thus also stay high.
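To make the nuance concrete, here's a minimal sketch with invented numbers (the split into embodied-resource and embodied-labor cost is just the decomposition from the paragraph above; none of the figures are from the original comment):

```python
# Toy sketch with invented numbers: replacement value of a capital good as
# embodied-resource cost plus embodied-labor cost. ASI is assumed to slash
# the labor term but not the natural-resource term.

def replacement_value(resource_units: float, resource_price: float,
                      labor_units: float, labor_price: float) -> float:
    return resource_units * resource_price + labor_units * labor_price

machine = dict(resource_units=10.0, labor_units=100.0)

pre_asi  = replacement_value(**machine, resource_price=5.0, labor_price=2.0)   # 250.0
post_asi = replacement_value(**machine, resource_price=5.0, labor_price=0.01)  # 51.0

print(pre_asi, post_asi)
# Labor-heavy capital loses most of its value; what remains is roughly its
# scarce-resource content - hence 'hold natural resources, not capital'.
```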
If you ever have to go to the hospital for any reason, suit up, or at least look good.
[Rant alert; personal anecdotes meant to emphasize the underlying issue:] I feel less crazy reading that I'm not an outlier in wearing a suit when going to the doctor. What got me there: I had throat pain but nothing visible (maybe because red, irritated skin doesn't show up against an already red throat?) = you're psychosomatic. I got weird twitches after eating sugar that no one could explain = they kick you out yelling 'go eat ice cream, you're not diabetic or anything' (literally!) - until the next time, when you bring a video of the twitches, and eat chocolate before the appointment to make sure you can show them live. Try to understand at least a tiny bit about the hernia operation they're about to perform on you (incl. something about the probabilities)? You get treated with utter disdain.
In my country, medical students were admitted based on how many Latin words they memorized or something like that, instead of on things correlated with IQ; idk whether things are similar in other countries and whether that may help explain the state of affairs.
I presume you wrote this not least with a phenomenally unconscious AGI in mind. This brings me to the following two separate but somewhat related thoughts:
A. I wonder about you [or any reader of this comment]: What would you conclude or do if you (i) yourself did not have any feeling of consciousness[1], and then (ii) stumbled upon a robot/computer writing the above, while (iii) you also knew - or strongly assumed - that whatever the computer writes can be perfectly explained (also) based merely on the logically connected electron flows in its processor/'brain'?
B. I could imagine - a bit of speculation:
[1] I'm aware of the weirdness of that statement; 'feeling not conscious', as a feeling, itself implies feeling - or so. I reckon you still understand what I mean: imagine yourself as a bot with no feelings etc.
Upvoted for bringing up useful terminology for that case; I wasn't aware of it.
That said, there's too much "true/false" and too much "should" in what is suggested, imho.
In reality, if I, say, choose not to drink the potion, I might still be quite utilitarian in my usual decisions; it's just that I don't have the guts, or that at this very moment I simply have a bit too little empathy with the trillion years of happiness for my future self, so it doesn't match up against my dread of the almost-sure death. All this without implying that I really think we ought to discount those trillion years. I'm just an imperfect altruist towards my future self; I have a fear of dying even if it's an imminent death, etc. So rejecting it is just a basic preference, not a grand non-utilitarian theory implied by it. I might in fact even prescribe that potion to others in some situations, and still not like to drink it myself.
So I think it does NOT follow that I'd have to believe "what happens on faraway exoplanets or what happened thousands of years ago in history could influence what we ought to do here and now" - at least not merely from rejecting this particular potion.
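For what it's worth, the point can be restated in toy expected-utility terms (a sketch with invented numbers and a hypothetical payoff structure, not the thought experiment's actual ones): an agent-relative empathy weight plus a dread term flips my choice, while the impartial bookkeeping, which I can still endorse, stays untouched.

```python
# Toy sketch, invented numbers: refusing the potion via agent-relative terms
# (imperfect empathy with my future self, dread of imminent death), without
# claiming the trillion future years objectively count for less.

p_survive = 1e-9                  # potion: almost sure death
u_happy_future = 1e12             # ~trillion years of happiness, 1 util/year
u_status_quo = 60.0               # an ordinary remaining life
empathy_with_future_self = 0.03   # imperfect altruism towards that future me
dread_of_dying = 50.0             # disutility of facing near-certain death now

impartial_ev_drink = p_survive * u_happy_future               # 1000 > 60: drink
my_ev_drink = (p_survive * empathy_with_future_self * u_happy_future
               - (1 - p_survive) * dread_of_dying)            # ~ -20 < 60: refuse

print(impartial_ev_drink, my_ev_drink, u_status_quo)
# The impartial sum still favors drinking; my refusal comes from the
# agent-relative weights, not from discounting the future's moral worth.
```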
Agree. I find it especially powerful regarding popular memes/news/research results. With only a bit of oversimplification: give me anything that sounds like a sexy story to tell independently of the underlying details, and I sadly have to downrate the information value of hearing it to nearly 0: I know that in our large world, it would likely enough be told whether or not it has any reliable origin.
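In Bayesian terms, this is just saying the likelihood ratio is near 1 (a sketch; all probabilities are invented): a story that spreads whether or not it's true barely moves the posterior.

```python
# Toy Bayes sketch with invented probabilities: how much does hearing a
# 'sexy' story update me on its truth?

def posterior(prior: float, p_heard_if_true: float, p_heard_if_false: float) -> float:
    joint_true = prior * p_heard_if_true
    joint_false = (1 - prior) * p_heard_if_false
    return joint_true / (joint_true + joint_false)

prior = 0.10

# Boring claim: it mostly gets passed on only if it actually checks out.
print(posterior(prior, p_heard_if_true=0.50, p_heard_if_false=0.02))  # ~0.74

# Sexy claim: it spreads almost regardless of its origin's reliability.
print(posterior(prior, p_heard_if_true=0.90, p_heard_if_false=0.80))  # ~0.11, barely above the prior
```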
Depending on the shape of the reward function, it could also be closer to exactly the other way round.