Comments

From what you write, Acemoglu's suggestions seem unlikely to be very successful, in particular given international competition. I paint a somewhat black-and-white picture here, but I think the following logic remains salient even in the messy real world:

  1. If your country unilaterally tries to halt development of the infinitely lucrative AI inventions that could automate jobs, other regions will be more than eager to accommodate the inventing companies. So, from the country's self-interested perspective, it might as well develop the inventions domestically and at least capture the benefits of being the inventor rather than merely the adopter.
  2. If your country unilaterally tries to halt adoption of the technology, there are numerous capable countries keen to adopt it and to swamp you with their sales.
  3. If you really were able to coordinate globally to enforce 1. or 2. - extremely unlikely in the current environment, given the huge incentives for individual countries to be lax in enforcement - then you might as well directly impose the economic first-best solution w.r.t. robots vs. labor: high global tax rates and redistribution.

 

Separately, I at least spontaneously wonder: how would one even go about differentiating the 'bad automation' to be discouraged from the legitimate automation without which no modern economy could run competitively anyway? For a random example: if Excel didn't yet exist (or, for its next update..), would we have to say: sorry, we cannot allow such software, as any given spreadsheet risks removing thousands of hours of work...?! Or at least: please, Excel, ask the human to manually confirm each cell's calculation...?? So I don't know how we'd enforce non-automation in practice. 'It uses a large LLM' feels like a weirdly arbitrary condition - though, OK, I could see how, for lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. & 2. mean this is unrealistic or unsuccessful anyway.

Commenting on the basis of lessons from some experience doing UBI analysis for Switzerland/Europe:

The current system has various costs (time and money, but maybe more importantly, opportunities wasted by perverse incentives) associated with proving that you are eligible for some benefit.

On the one hand, yes, and it's a key reason why NIT/UBI systems are often popular on the right; even Milton Friedman already advocated for an NIT. That said, there are also discussions suggesting the poverty trap - i.e. overwhelmingly strong labor disincentives for the poor, from outrageously high effective marginal tax rates as benefits fade out and taxes kick in - may be partly overrated, so smoothing the earned-to-net-income function may not help as much as some hope. And, as tends to be forgotten, people with special needs may not be able to live purely on a UBI, so not all current social security benefit mechanisms can be replaced by a standard UBI.

On the other hand, once you have a conditional welfare system without extreme poverty traps, labor incentives might overall still be mostly stronger than under a UBI (assuming it is generous enough to allow a reasonable life), once you also take into account the high marginal tax rates required to finance that UBI. This seems to hold even in relatively rich countries (we calculated it for Switzerland).
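To make the fade-out arithmetic concrete, here is a toy calculation of the effective marginal tax rate (EMTR) under a means-tested benefit. All the rates and amounts are invented for illustration; they are not from the Swiss calculations mentioned above.

```python
# Toy illustration of an effective marginal tax rate (EMTR) under a
# means-tested benefit. All numbers are hypothetical.

def net_income(gross, benefit_max=1500.0, fadeout_rate=0.6, tax_rate=0.2):
    """Net income = after-tax earnings + remaining benefit.

    The benefit shrinks by `fadeout_rate` per extra unit earned;
    the income tax takes `tax_rate` of gross earnings.
    """
    benefit = max(0.0, benefit_max - fadeout_rate * gross)
    return gross * (1 - tax_rate) + benefit

def emtr(gross, delta=1.0, **kw):
    """Share of an extra `delta` of earnings NOT kept by the worker."""
    return 1 - (net_income(gross + delta, **kw) - net_income(gross, **kw)) / delta

# While the benefit is fading out, tax and fade-out stack:
print(emtr(1000))   # ~0.8: keep only ~20 cents of each extra unit earned
# Once the benefit is fully gone, only the income tax remains:
print(emtr(5000))   # ~0.2
```

The "poverty trap" is exactly the first regime: the 0.6 fade-out stacked on the 0.2 tax yields an 80% effective marginal rate, far above what high earners face.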

Of course, with AI joblessness all this might change anyway, in line with the underlying topic of the post here.

 

Plus you need to pay the people who verify all this evidence.

This tends to be overrated; when you look at the stats, this staff cost is really small compared to total traditional social security spending or total UBI costs (we looked at the numbers for Switzerland, but I can only imagine the orders of magnitude are similar in other developed countries).

I see there might be limits to what is possible.

On the other hand, I have the impression the limits to what students can learn (in economics) often come more from us teaching absurdly simplified cases, too remote from reality and from what's plausible, so that the entire thing we teach remains a purely abstract, empty analytical beast. I suspect even young students are capable of understanding the more subtle mechanisms - in their individual steps often not really complicated! - if only we taught them with enough empathy for the students & for the reality we're trying to model.

As you write, with as little math as absolutely necessary.

Would really love to replace curricula with what you describe; kudos for proposing a reasonably simple yet consistent high-level plan that, at least to my mostly uneducated eyes, seems rather ideal!

Maybe an unnecessary detail here, but fwiw, regarding economics in the Core Civilizational Requirements,

an understanding of supply and demand, specialization and trade, and how capitalism works

I'd try to make sure to provoke them with enough not-so-standard market cases to let them develop intuitions about where which intervention might be required/justified, for which reasons (or from which points of view), and where not. I teach that subject, and deplore how our teaching tends to remain on the surface of things, without the opportunity to really sharpen students' minds on the slightly more intricate economic policy questions, where too shallow a demand-supply analysis just isn't much better than no economics at all.

Assuming you're the first to explicitly point out that lemon-market type of feature of 'random social interaction': kudos, I think it's a great way to express some extremely common dynamics.

An anecdote from my country, where people ride trains all the time, fitting your description, although here it takes a weird kind of extra 'excuse' every time: it would often feel weird to randomly talk to your seat neighbor, but ANY slightest excuse (a sudden bump in the ride, a malfunctioning info speaker, a grumpy ticket collector, one weird word from a random person in the wagon... any smallest thing) will extremely frequently get the silent to start conversations, which then easily last for hours if the ride does. And I think some sort of social lemon-market dynamics may indeed help explain it.
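A minimal sketch of the lemons-style unraveling transplanted to random conversations (all parameters invented for illustration): if each person keeps talking to strangers only while the average quality of the still-participating pool isn't too far below what they themselves offer, the best conversation partners drop out first, which lowers the average and pushes still more people out, until only a low-quality pool remains.

```python
import random

random.seed(0)

# Hypothetical "conversation quality" each person offers, uniform on [0, 1].
quality = [random.random() for _ in range(10_000)]

# A person stays in the random-chat pool only while the pool's average
# quality is at most TOLERANCE below their own: good conversationalists
# have better outside options, so they exit first.
TOLERANCE = 0.3

pool = quality
while True:
    avg = sum(pool) / len(pool)
    new_pool = [q for q in quality if q <= avg + TOLERANCE]
    if len(new_pool) == len(pool):  # pool stabilized: equilibrium reached
        break
    pool = new_pool

print(f"average quality offered overall:     {sum(quality)/len(quality):.2f}")
print(f"average quality among those talking: {sum(pool)/len(pool):.2f}")
```

With these invented numbers the pool unravels partially: roughly the top 40% of conversationalists never talk to strangers, and the average quality of an actual random chat ends up well below the population average, making silence the sensible default.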


Funny is not the only adjective this anecdote deserves. Thanks for sharing this great wisdom/reminder!

I would not search for smart ways to detect it. Instead, look at it from the outside - and from there, I don't see why we should hold much hope that it is detectable:

Imagine you create your own simulation. Imagine you are much more powerful than you are now, able to make the simulation as complex as you want. Imagine that in your coolest run, your little simulatees start wondering: how could we trick Suzie so her simulation reveals the reset?!

I think you'll agree their question is futile; once you reset your simulation, surely they won't be able to detect it: while setting up the simulation might be complex, reinitializing it at a given state, with no traces left within the simulated system, seems like the simplest task of all.

And so, I'd argue, we might well expect the same in our (potential) simulation, however smart your reset-detection design might be.
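The point can be made concrete with a toy deterministic simulation (the state variables and update rule are invented for illustration): snapshot the state, run on, restore the snapshot, and run again. The two continuations are bit-for-bit identical, so nothing inside the simulated state can record that a reset happened.

```python
import copy
import random

# A toy deterministic "world": state evolved by a fixed update rule.
# All names and the rule itself are invented for illustration.

def step(state):
    rng = random.Random(state["seed"])     # all randomness flows from the state
    state["seed"] = rng.getrandbits(32)    # ...and is itself part of the state
    state["agents"] = [a + rng.uniform(-1, 1) for a in state["agents"]]
    state["tick"] += 1

world = {"seed": 42, "tick": 0, "agents": [0.0, 0.0, 0.0]}

for _ in range(5):
    step(world)
snapshot = copy.deepcopy(world)   # the simulator saves the state...

for _ in range(5):
    step(world)
run_a = copy.deepcopy(world)

world = copy.deepcopy(snapshot)   # ...and resets to it
for _ in range(5):
    step(world)
run_b = copy.deepcopy(world)

# From inside the simulation, the two continuations are indistinguishable:
print(run_a == run_b)   # True
```

Since every bit the simulatees could ever inspect lives inside `world`, and `world` after the reset is identical to what it was before, no in-simulation experiment can distinguish the reset run from an unreset one.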

My impression is that what you propose to supersede utilitarianism with is rather naturally already encompassed by utilitarianism. For example, when you write

If someone gains utility from eating a candy bar, but also gains utility from not being fat, raw utilitarianism is stuck. From a desire standpoint, we can see that the optimal outcome is to fulfill both desires simultaneously, which opens up a large frontier of possible solutions.

I disagree that typical conceptions of utilitarianism - not strawmen thereof - are in any way "stuck" here at all. "Of course," a classical utilitarian might well tell you, "we'll have to trade off between the candy bar and the fatness it brings; that is exactly what utilitarianism is about." And you can extend that to the other nuances you bring up: whatever we ultimately desire or prefer most, as classical utilitarians we'd aim exactly at that, quasi by definition.

Thanks for the link to the interesting article!

Answer by FlorianH

If I understand you correctly, what you describe does indeed seem a bit atypical, or at least not shared by everyone.

Fwiw, pure speculation: maybe you learned a great deal from working on and examining advanced, existing code. So you learned to understand advanced concepts, etc. But you mostly learned to code on the basis of already existing code/solutions.

Often, instead, when we systematically learn to code, we learn bit by bit from the simplest examples, and we don't just learn to understand them - a bit like when starting to learn basic math, we are constantly challenged to put each newly learned element directly into practice, on our own. This ensures we master all that knowledge in a highly active way, rather than only passively.

This suggests there's a mechanistically simple yet potentially tedious path for you to learn to create solutions from scratch more actively: force yourself to start with the simplest things to code actively, from scratch, without looking at the solution first. Just start with a simple problem that 'needs a solution' and implement it. Gradually increase the complexity. I guess it might require a lot of such training; no clue whether there's anything better.
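As an illustration of the kind of minimal from-scratch exercise meant here (the problem itself is chosen arbitrarily): state a small task in words, then implement it without consulting any existing solution.

```python
# Exercise: given a list of numbers, return the running maximum seen so far.
# The point is to write it from scratch, without looking anything up.

def running_max(xs):
    result = []
    best = None
    for x in xs:
        if best is None or x > best:
            best = x
        result.append(best)
    return result

print(running_max([3, 1, 4, 1, 5, 9, 2]))   # [3, 3, 4, 4, 5, 9, 9]
```

Once an exercise at this level feels effortless, move up a notch (nested data, small parsers, a tiny game loop), always writing the solution yourself first.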
