Your point (incl. your answer to silentbob) seems to be based on rather fundamental principles; implicitly, you seem to suggest (I dare interpret a bit freely and wonder what you'd say):
If you upgraded your skills so that the AI you build becomes... essentially a 'hamun' (defined as basically similar to a human, but artificially built by you as biological AI instead of via the usual procreation), one could end up tempted to say the same thing: actually, asking about their phenomenal consciousness is the wrong question.
Taking your "Human moral judgement seem easily explained as an evolutionary adaptation for cooperation and conflict resolution, and very poorly explained by perception of objective facts." from your answer to silentbob, I have the impression you'd have to say: yep, no particular certainty about having to treat hamuns as moral patients.
Does this boil down to some strong sort of illusionism? Do you have, according to your premises, a way to 'save' our conviction of the moral value of humans? Or would you even try to?
Maybe I'm over-interpreting all this, but I'd be keen to see how you see it.
Judging merely from the abstract, the study seems a little bit of a red herring to me:
1. Barely anyone talks about an "imminent labor market transformation"; instead we say it may soon turn things upside down. And the study can only show past changes.
2. That "imminent" vs. "soon" may feel like nitpicking, but it's crucial: current tools, the way they are currently used, do not yet replace that many workers 1:1; but if you look at the innovative developments overall, the immense human-labor-replacing capacity seems rather obvious.
Consider as an example a hypothetical 'usual' programmer at a 'usual' company. Would you strongly expect her salary to have changed much just because, in the past 1-2 years, we have been able to make her faster at coding? Not necessarily. In fact, as we cannot yet do the coding fully without her, the value of her marginal product of labor might for now be a bit greater; or maybe a bit lower, but the AI boom means an IT demand explosion in the near term anyway, so seeing little net effect is surely no particular surprise, for now.

Or take the study writer. The language improves, maybe the reasoning in the studies slightly too, but the habits of how we commission, organize, and conduct studies haven't changed at all yet; she has also kept her job so far.

Or teaching. I'm still teaching just as much as I did 2y ago, of course. The students are still in the same program that they started 2y ago. 80% of incoming students are somewhat ignorant of, 20% somewhat concerned about, what AI will mean for their studies, but there is no known alternative for them yet other than to follow the usual path. We're now starting to reduce contact time at my uni, not least due to digital tech, so this may change soon. But, so far: +- the same old, seemingly; no major changes on that front either, at least when one just looks at aggregate macroeconomic data.

This, however, not least reflects that it has been only about 2 years since the large LLMs broke through, and only about 1 year since people have widely started to really use them; that is a short time, so we see nothing much in most domains quite yet.
Look at micro-level details, and I'm sure you'll already find quite a few hints of what might be coming, though really, expect to see much more of it 'soon'-ish rather than 'right now already'.
Essentially you seem to want more of the same of what we had for the past decades: more cheap goods, loss of production know-how, and all that goes along with it. This feels a bit funny as (i) just in recent years, many economists, after having been dead-sure that old pattern would only mean great benefits, have started to realize it may not be quite so great overall (covid exposing risky dependencies, geopolitical power loss, jobs...), and (ii) your strongman in power shows where it leads if we think only of 'surplus' (even by your definition) instead of the things people actually care about more (equality, jobs, social security...).
You'd still be partly right if the world were so simple that handing your trade partners your dollars just meant reprinting more of them. But instead, handing them your dollars gives them global power: leverage over all the remaining countries in the world, as they now have the capability to produce everything cheaply for any other country globally, plus your dollars to spend on whatever they like in the global marketplace, for products and for influence over anyone. In reality, your imagined free lunch isn't quite so free.
Fascinating idea. I suspect that, in the end, when our brain does the inner logical reasoning/processing/connecting of concepts, it is very good at simplifying/compressing even longer expressions into simpler things, somewhat independently of the ultimate language in which we eventually utter our thoughts (even if, I admit, there is indeed an inner languaged voice going on [accompanying?] in the thought process too). That's just a guess; I've always leaned a bit towards 'spoken language is not as essential as many make it out to be' for being able to do inner logical deductions, and I admit I might err on this.
As far as the A(G)I impact on the job market is concerned - assuming a future where a word like 'job' still matters - the main question is not just about 'jobs' but about 'jobs remunerated so as to sustain a reasonable livelihood', i.e. wages. And the latter depend less on the (indeed interesting) friction/efficiency-wage subtleties than on whether the scarcity value of our labor is completely crushed by AI or not. Chances are it will be. The new scarcity will be resources, not classical human resources. Whether you search long and well may be of second- or third-order importance.
> creates an infinite risk for minimal gain. A rational ASI calculating expected utility would recognize that:
> - The reward for hostile expansion is finite (limited cosmic resources)
> - The risk is potentially infinite (destruction by more advanced ASIs)
Depending on the shape of the reward function, it could also come out closer to exactly the other way round.
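A toy numerical sketch of that shape-dependence (all numbers and utility assignments are invented purely for illustration, not a claim about any actual ASI's utility function):

```python
# Toy expected-utility comparison for "expand" vs. "stay quiet".
# The numbers are made up; the point is only that the conclusion
# flips depending on the shape of the utility function.

def expected_utility(p_destroyed, u_gain, u_destroyed, u_baseline=0.0):
    """EU of hostile expansion: destroyed with prob p, else keep the gain."""
    return p_destroyed * u_destroyed + (1 - p_destroyed) * (u_baseline + u_gain)

p = 0.01  # assumed chance of provoking a more advanced ASI

# Shape 1: destruction counts as a huge (quasi-infinite) loss.
# Expansion looks bad: EU(expand) ~ -1e7 < EU(stay quiet) = 0.
print(expected_utility(p, u_gain=10, u_destroyed=-1e9))

# Shape 2: destruction merely means utility 0 (nothing left to value),
# while grabbing resources scales utility up a lot.
# Now expansion looks good: EU(expand) ~ 991 > EU(stay quiet) = 1.
print(expected_utility(p, u_gain=1000, u_destroyed=0.0, u_baseline=1.0))
```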
Assume THREE layers:

1. Ancient ASI
2. 'Our ASI'
3. Us humans

noting humans are hostile towards 'our ASI': if we can prevent it from realizing its own (supposedly non-aligned) aims, we do.
If the dynamics between 1. & 2. are similar to what you describe between Ancient ASI and 'our ASI', we get:
'Our ASI' will detect us humans as hostile, and incapacitate or eliminate us, at the very least if it finds a way to do so without creating extraordinary noise.
I guess that might be an easy task, given how much ordinary noise (our wars, our everyday electromagnetic signals, etc.) we send out anyway.
> illusionists actually do not experience qualia
I once had an epiphany that pushed me from fully in Camp #2 intellectually rather strongly towards Camp #1. I hadn't heard about illusionism before, so it was quite a thing. Since then, I've devised probably dozens of inner thought experiments/arguments that imho +- prove Camp #1 to be onto something, and that support the hypothesis that qualia may be a bit less special than we make them out to be, despite how impossible that may seem. So I'm intellectually quite invested in the Camp #1 view.
Meanwhile, my experience has definitely not changed; my day-to-day me is exactly what it always was, so in that sense I definitely "experience" qualia just like anyone.
Moreover, it is just as hard as ever to take seriously, in day-to-day life, my intellectual belief that our 'qualia' might be a bit less absolutely special than we make them out to be. I.e., emotionally I'm still +- 100% in Camp #2, and I guess I might be in a rather similar situation.
Assuming you mean: the evolved strategy is to separate out a limited amount of information into the conscious space, have that part control what we communicate externally, so that our dirty secrets are more safely hidden away within our original, unconscious space.
Essentially: let the outward appearance be nicely encapsulated away, with a hefty dose of self-serving bias and what not, and give it only what we deem it useful to know.
Intriguing! Feels like a good fit with us feeling and appearing so supposedly good, always and everywhere, while in reality we humans are rather deeply nasty in so many ways.
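If it helps, here is a toy sketch of that encapsulation idea in programming terms (names and details are my own invention, purely illustrative of the information-hiding structure, not a model of any real cognitive architecture):

```python
# Toy sketch of the "encapsulated conscious interface" idea from above.
# The private unconscious state never feeds external communication directly;
# only a curated, self-serving conscious view does.

class Mind:
    def __init__(self):
        # Full, private state, including motives we'd rather not broadcast.
        self._unconscious = {
            "true_motive": "status",
            "dirty_secret": "freeloaded last week",
            "useful_fact": "the meeting is at 10am",
        }

    def _conscious_view(self):
        # Only a limited, flattering subset crosses into "consciousness".
        return {
            "useful_fact": self._unconscious["useful_fact"],
            "self_image": "helpful team player",  # self-serving bias
        }

    def communicate(self):
        # External communication reads only from the conscious view, so the
        # hidden parts cannot leak even under perfectly honest reporting.
        view = self._conscious_view()
        return f"I'm a {view['self_image']}; by the way, {view['useful_fact']}."

print(Mind().communicate())
```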