FlorianH

Judging merely from the abstract, the study seems a little bit of a red herring to me:
1. Barely anyone claims an "imminent labor market transformation"; instead we say it may soon turn things upside down. And the study can only show past changes.

2. That "imminent" vs. "soon" may feel like nitpicking but it's crucial: Current tools the way they are currently used, are not yet what completely replaces so many workers 1:1, but if you look at the innovative developments overall, the immense human-labor-replacing capacity seems rather obvious.

Consider as an example a hypothetical 'usual' programmer at a 'usual' company. Would you strongly expect her salary to have changed much just because in the past 1-2 years she has become faster at coding? Not necessarily. Since we cannot yet do the coding fully without her, the value of her marginal product of labor may for now be a bit greater, or a bit lower; and the AI boom means an IT demand explosion in the near term anyway, so seeing little net effect is no particular surprise, for now.

Or the study writer: language improves, maybe the reasoning in the studies improves slightly, but the habits of how we commission and organize studies haven't changed at all yet; she has also kept her job so far.

Or teaching: I'm still teaching just as much as I did two years ago, of course. The students are still in the same program they started two years ago. 80% of incoming students are somewhat ignorant of, and 20% somewhat concerned about, what AI will mean for their studies, but there is no known alternative for them yet other than to follow the usual path. We're now starting to reduce contact time at my university, not least due to digital tech, so this may change soon. But until yesterday: roughly the same old story, seemingly; no major changes on that front either, if one just looks at aggregate macroeconomic data.

Not least, this reflects that the roughly two years since the large LLMs broke through, and the roughly one year since people started really using them widely, is a short time, so we see little in most domains quite yet.
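To make the marginal-product point concrete, here is a toy calculation (a sketch under invented numbers; the only mechanism is VMPL = output price times marginal product, plus a demand shift):

```python
# Toy illustration of the marginal-product argument (all numbers invented).
# Value of the marginal product of labor: VMPL = output price * marginal product.
price_before, mp_before = 10.0, 8.0     # pre-AI: price per unit of output, units per hour
vmpl_before = price_before * mp_before  # 80.0

mp_after = mp_before * 1.5              # AI assistance: 50% faster coding -> 12 units/hour
price_after = price_before * 0.6        # cheaper software pushes the output price down
demand_boost = 1.15                     # near-term AI-boom demand explosion in IT
vmpl_after = price_after * mp_after * demand_boost  # 6.0 * 12.0 * 1.15 = 82.8

print(vmpl_before, round(vmpl_after, 1))  # 80.0 82.8 -> little net change, for now
```

Depending on how hard the output price falls versus how far demand shifts, the net effect can go either way; the point is only that a big productivity jump need not show up in wages yet.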

Look at micro-level details, and I'm sure you'll already find quite a few hints of what might be coming - though really, expect to see much more 'soon-ish' than 'right now already'.

Essentially you seem to want more of the same of what we had for the past decades: more cheap goods, plus the loss of production know-how and all that goes along with it. This feels a bit funny, as (i) just in recent years many economists, after having been dead-sure the old pattern would bring only great benefits, have come to see it may not be so great overall (covid exposing risky dependencies, geopolitical power loss, jobs...), and (ii) your strongman in power shows where it leads if we think only of 'surplus' (even by your definition) instead of the things people actually care about more (equality, jobs, social security...).

You'd still be partly right if the world were so simple that handing your trade partners your dollars would just mean we print more of them. But instead, handing them your dollars gives them global power: leverage over all the remaining countries in the world, as they now have the capability to produce everything cheaply for any other country globally, plus your dollars to spend on whatever they like in the global marketplace for products and for influence over anyone. In reality, your imagined free lunch isn't quite so free.

Answer by FlorianH

Fascinating idea. I suspect that, in the end, our brain is very good at simplifying/compressing even longer expressions into simpler things when it does the inner logical reasoning/processing/connecting of concepts, somewhat independently of the ultimate language in which we eventually utter our thoughts (even if, I admit, there is indeed an inner language-d voice going on [accompanying?] the thought process too). That's just a guess, and I've always leaned a bit towards 'spoken language is not as essential as many make it out to be' for being able to do inner logical deductions, so I admit I might err on this.

As far as the A(G)I impact on the job market is concerned - assuming a future in which a word like 'job' still matters - the main question is not just about 'jobs' but about 'jobs remunerated well enough to sustain a reasonable livelihood', i.e. about wages. And those depend less on the (indeed interesting) friction/efficiency-wage subtleties than on whether the scarcity value of our labor gets completely crushed by AI or not. Chances are it will be. The new scarcity will be resources, not classical human resources. Whether you search long and well may be of second- or third-order importance.
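A minimal sketch of that scarcity point (the linear demand curve and all numbers are my own assumptions, purely for illustration): once AI supplies effectively unlimited substitute labor at some unit cost, that cost caps the human wage, whatever the search frictions.

```python
# Toy competitive labor market with an AI substitute (illustrative assumptions only).
def human_wage(labor_demand_intercept, labor_supply, ai_cost_per_unit):
    """Market-clearing wage, capped by the cost of AI substitute labor."""
    # Linear inverse demand: wage = intercept - labor employed (slope normalized to 1).
    market_clearing_wage = labor_demand_intercept - labor_supply
    # Employers never pay humans more than the AI substitute costs.
    return min(market_clearing_wage, ai_cost_per_unit)

for ai_cost in [50, 10, 1]:  # cost per human-equivalent unit of AI labor falls over time
    print(f"AI cost {ai_cost:>2} -> human wage {human_wage(100, 60, ai_cost)}")
# AI cost 50 -> human wage 40   (AI not yet binding)
# AI cost 10 -> human wage 10   (AI caps the wage)
# AI cost  1 -> human wage  1   (scarcity value of labor crushed)
```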

creates an infinite risk for minimal gain. A rational ASI calculating expected utility would recognize that:

  • The reward for hostile expansion is finite (limited cosmic resources)
  • The risk is potentially infinite (destruction by more advanced ASIs)

Depending on the shape of the reward function, it could also be exactly the other way round.
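A minimal worked example of that possible sign flip (all probabilities and payoffs invented): with a bounded utility and a high detection risk, expansion loses; with a utility roughly linear in resources and a small detection risk, it wins.

```python
# Expected utility of hostile expansion under two assumed reward-function shapes
# (all probabilities and payoffs are invented for illustration).
import math

def eu_expand(p_detected, u_destroyed, u_resources):
    """Expected utility of expanding: risk destruction, else gain resources."""
    return p_detected * u_destroyed + (1 - p_detected) * u_resources

# Bounded (saturating) utility, high chance an ancient ASI notices: expansion loses.
bounded = eu_expand(p_detected=0.5, u_destroyed=-1.0, u_resources=math.tanh(3))
# Near-linear utility in resources, small detection probability: expansion wins.
linear = eu_expand(p_detected=0.01, u_destroyed=-100.0, u_resources=1000.0)

print(round(bounded, 2))  # -0.0 (slightly negative: not worth the risk)
print(round(linear, 2))   # 989.0 (expansion dominates)
```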

Assume THREE layers:

  1. humans
  2. 'our ASI' to be created by humans
  3. Ancient ASI out there,

noting that humans are hostile towards 'our ASI': if we can prevent it from realizing its own (supposedly non-aligned) aims, we do.

If the dynamics between 1. & 2. are similar to what you describe between Ancient ASI and 'our ASI', we get:

'Our ASI' will detect us as hostile humans, and incapacitate or eliminate us, at the very least if it finds a way to do so without creating extraordinary noise.

I guess that might be an easy task, given how much ordinary noise we send out anyway with our wars, our everyday electromagnetic signals, etc.

illusionists actually do not experience qualia

I once had an epiphany that pushed me from being fully in Camp #2 intellectually rather strongly towards Camp #1. I hadn't heard about illusionism before, so it was quite a thing. Since then, I've devised probably dozens of inner thought experiments/arguments that imho more or less prove Camp #1 to be onto something, and that support the hypothesis that qualia may be a bit less special than we make them out to be, despite how impossible that may seem. So I'm intellectually quite invested in the Camp #1 view.

Meanwhile, my experience has definitely not changed; my day-to-day me is exactly what it always was, so in that sense I definitely "experience" qualia just like anyone.

Moreover, it is just as hard as ever to take seriously, in day-to-day life, my intellectual belief that our 'qualia' might be a bit less absolutely special than we make them out to be. I.e., emotionally I'm still more or less 100% in Camp #2, and I guess I might be in a rather similar situation

Just found proof! Look at the beautiful parallel in Vipassana according to MCTB2 (or audio) by Daniel Ingram:

[..] dangerous term “mind”, [..] it cannot be located. I’m certainly not talking about the brain, which we have never experienced, since the standard for insight practices is what we can directly experience. As an old Zen monk once said to a group of us in his extremely thick Japanese accent, “Some people say there is mind. I say there is no mind, but never mind! Heh, heh, heh!” However, I will use this dangerous term “mind” often, or even worse “our mind”, but just remember when you read it that I have no choice but to use conventional language, and that in fact there are only utterly transient mental sensations. Truly, there is no stable, unitary, discrete entity called “mind” that can be located! By doing insight practices, we can fully understand and appreciate this. If you can do this, we’ll get along just fine. Each one of these sensations [..] arises and vanishes completely before another begins [..]. This means that the instant you have experienced something, you can know that it isn’t there anymore, and whatever is there is a new sensation that will be gone in an instant.

Ok, this may prove nothing at all, and I haven't even (yet) personally started trying to mentally observe what is described in that quote, but I must say, on a purely intellectual level, this makes absolutely perfect sense to me, exactly from the thoughts I hoped to convey in the post.

(Not the first time I have the impression that some particular elements of the deep observations meditators explain, e.g. Sam Harris, can actually be grasped intellectually - but maybe only intellectually, maybe exactly not intuitively - by rather pure reasoning about the brain and some of its workings, or with some thought experiments. But in the above, I find the fit between my 'theoretical' post and the seeming practice insights particularly good.)

Will Jack Voraces ever narrate more Significant Digits chapters, in addition to the 4 episodes found in the usual HPMOR JV narration podcasts? Does anyone know anything about this? If not, does anyone know why the first 4 SD chapters are there in his voice, but the remaining ones are not?

If resources and opportunities are not perfectly distributed, the best advancements may remain limited to the wealthiest, making capital the key determinant of access.

Largely agree. Nuance: natural resources may quickly become the key bottleneck instead, even more so than what we usually denote as 'capital' (i.e. the built environment). So it's specifically natural resources you want to hold, even more than capital; the latter may become easier and cheaper to reproduce with ASI, and so yield less scarcity rent.

An exception, of course, is if you hold 'capital' that itself consists of particularly many embodied resources rather than embodied labor (by 'embodied' I mean: used as inputs in its creation): its value will reflect the scarce natural resources it 'contains', and may thus also remain high.
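A sketch of that decomposition (the two-part split and all numbers are my own assumptions): value an asset as the reproduction cost of its embodied labor plus the market value of its embodied natural resources; as ASI drives the labor part toward zero, only the resource part holds up.

```python
# Toy asset valuation under ASI (the split and all numbers are illustrative assumptions).
def asset_value(embodied_labor_units, embodied_resource_units,
                labor_reproduction_cost, resource_price):
    """Value ~ cost to reproduce embodied labor + market value of embodied resources."""
    return (embodied_labor_units * labor_reproduction_cost
            + embodied_resource_units * resource_price)

# Mostly-labor capital (e.g. software, structures) vs resource-heavy capital.
for labor_cost in [10.0, 1.0, 0.1]:   # ASI makes reproducing embodied labor ever cheaper
    mostly_labor  = asset_value(100, 1, labor_cost, resource_price=50)
    resource_rich = asset_value(10, 20, labor_cost, resource_price=50)
    print(f"labor cost {labor_cost:>4}: mostly-labor {mostly_labor:>6}, resource-rich {resource_rich:>6}")
# labor cost 10.0: mostly-labor 1050.0, resource-rich 1100.0
# labor cost  1.0: mostly-labor  150.0, resource-rich 1010.0
# labor cost  0.1: mostly-labor   60.0, resource-rich 1001.0
```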
