FlorianH

an AI system passing the ACT - demonstrating sophisticated reasoning about consciousness and qualia - should be considered conscious. [...] if a system can reason about consciousness in a sophisticated way, it must be implementing the functional architecture that gives rise to consciousness.

This is provably wrong. This route will never offer any test of consciousness:

Suppose for a second that in 2027 xAI, a very large LLM, stuns you by uttering C, where C = musings about your and its own consciousness more profound than any you've ever imagined!

For a given set of random-variable draws R used in the randomized generation of xAI's utterance, S the xAI structure you've designed (the transformer's neuron arrangement or so), and T the training you've given it:

What is P(C | {xAI conscious, R, S, T})? It's 100%.

What is P(C | {xAI not conscious, R, S, T})? It's of course also 100%. Schneider's claims you refer to don't change that. You know you can readily track what each element within xAI is mathematically doing, how the bits propagate, and, examining it in enough detail, you'd find exactly the output you observe, without resorting to any concept of consciousness or whatever.

As the probability of what you observe is exactly the same with or without consciousness in the machine, there's no way to infer from xAI's utterance whether it's conscious or not.
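
In Bayesian terms, a minimal formalization of the above (the odds notation is mine; everything else follows the setup):

```latex
% Both hypotheses assign the observed utterance C the same likelihood:
P(C \mid \text{conscious}, R, S, T) = P(C \mid \text{not conscious}, R, S, T) = 1,
% so the Bayes factor is 1 and the posterior odds equal the prior odds:
\frac{P(\text{conscious} \mid C, R, S, T)}{P(\text{not conscious} \mid C, R, S, T)}
  = 1 \cdot \frac{P(\text{conscious} \mid R, S, T)}{P(\text{not conscious} \mid R, S, T)}.
% Observing C shifts the odds not at all; the utterance is zero evidence.
```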

Combining this with the fact that, as you write, biological essentialism seems odd too, does of course create a rather unbearable tension that many may still be ignoring. Embracing this tension raises illusionism-type questions, however strange those may feel (and if I dare guess, illusionist-type thinking may already be, or may grow to be, more popular than the biological essentialism you point out, although on that point I'm merely speculating).

Assumption 1: Most of us are not saints.
Assumption 2: AI safety is a public good.[1]

[..simple standard incentives..]

Implication: The AI safety researcher, eventually finding himself rather unlikely to be individually pivotal on either side, may 'rationally'[2] switch to ‘standard’ AI work.[3]

So: a rather simple explanation seems to suffice to make sense of the basic big-picture pattern you describe.

 

That doesn't mean the inner tension you point out isn't interesting. But I don't think very deep psychological factors are needed to explain the general 'AI safety becomes AI instead' tendency, which I had the impression the post was meant to suggest.

  1. ^

    Or: unaligned/unloving/whatever AGI is a public bad.

  2. ^

    I mean: individually ‘rational’ once we factor in another trait - Assumption 1b: The unfathomable scale of potential aggregate disutility from AI gone wrong bottoms out into a constrained ‘negative’ individual utility, in terms of the emotional value non-saint Joe places on it. So a 0.1 permille probability of saving the universe may individually rationally be dominated by mundane stuff like having a still somewhat cool and well-paying job or something.

  3. ^

    The switch may psychologically be even easier if the employer had started out as actually well-intentioned and may now still have a bit of an ambiguous flair.


This is called a windfall tax.

Random examples:

VOXEU/CEPR Energy costs: Views of leading economists on windfall taxes and consumer price caps

Reuters Windfall tax mechanisms on energy companies across Europe

Especially with the 2022 Ukraine energy-price spike, the notion's popularity spiked along with it.

Seems to me also a very neat way to deal with supernormal short-term profits due to market price spikes, in cases where supply is extremely inelastic.
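
A toy illustration of that point, with made-up numbers (all figures and names here are mine, purely to show the mechanism):

```python
# Toy windfall-tax illustration; all numbers are made up.
# Supply is assumed perfectly inelastic: the producer ships the same
# quantity regardless of price, so taxing the windfall doesn't cut output.

quantity = 100        # MWh produced either way (inelastic supply)
cost = 30             # production cost per MWh
baseline_price = 50   # pre-crisis market price per MWh
spike_price = 200     # price during the supply shock

baseline_profit = (baseline_price - cost) * quantity  # 2,000: 'normal' profit
spike_profit = (spike_price - cost) * quantity        # 17,000
windfall = spike_profit - baseline_profit             # 15,000: pure price-spike rent

tax_rate = 0.9                                        # tax only the supernormal part
tax_revenue = tax_rate * windfall                     # 13,500
producer_keeps = spike_profit - tax_revenue           # 3,500, still > baseline_profit

# The producer still earns more than in normal times, so production
# incentives stay intact while most of the rent is recouped.
assert producer_keeps > baseline_profit
print(tax_revenue, producer_keeps)
```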

I guess, and some commentaries suggest, that in actual implementation, with complex firm/financial structures etc., and with actual clumsy politics, it's not always as trivial as it might look at first sight, but it is feasible, and some countries managed to implement such taxes in the energy crisis.

[..] requires eating the Sun, and will be feasible at some technology level [..]

Do we have some basic physical-feasibility insights on this, or are you just speculating?

Indeed the topic to which I've dedicated the 2nd part of the comment, as the "potential truth", as I framed it (and I have no particular objection to you making it slightly more absolutist).

This is interesting! And given you generously leave it rather open as to how to interpret it, I propose we think the other way round from how people usually might when seeing such results:

I think there's not even the slightest hint of any beyond-pure-base-physics stuff going on in LLMs, revealing any type of

phenomenon that resists [conventional] explanation

Instead, this merely reveals our limitations in tracking (or 'empathizing with') well enough the statistics within the machine. We know we have just programmed and bit-by-bit-trained into it exactly every syllable the LLM utters. Augment your brain with a few extra neurons or transistors or what have you, and that smart-enough version of you would be capable of perfectly understanding why, in response to the training you gave it, it spits out exactly the words it does.[1]

 

So, instead, it's interesting the other way round:

The realizations you describe could be a step closer to showing how a simple pure base-physics machine can start to be 'convinced' it has intrinsic value and so on - just the way we all are convinced of having that.

So AI might eventually bring illusionism nearer to us, even if I'm not 100% sure getting closer to that potential truth ends well for us. Or that, anyway, we'd really be able to fully buy into it even if it were to become glaringly obvious to any outsider observing us.

  1. ^

    Don't misread that as me saying it's in any way easy... just that, in the limit, basic (even if insanely large-scale and convoluted) tracking of the mathematics we put in would really bring us there. So, admittedly, don't take 'a few' more neurons literally; you'd need a huge ton instead.

Indeed. I thought it relatively clear that with "buy" I meant to mostly focus on things we typically explicitly buy with money (for brevity, even for these I simplified a lot, omitting that shops are often not allowed to open 24/7, and that some things like alcohol aren't sold to people of all ages, in some countries not in every type of shop, and/or not at all times).

Although I don't want to say that exploring how to port the core thought to broader categories of exchanges/relationships couldn't bring interesting extra insights.

I cannot say I've thought about it deeply enough, but I've thought and written a bit about UBI, taxation/tax competition and so on. My picture so far is:

A. Taxation & UBI would really be natural and workable if we chose the right policies (though I have limited hope that our policy-making and modern democracy are up to the task, especially with the international coordination required). A few subtleties that come to mind:

  1. Simply tax high revenues or profits.
    1. No need to tax "AI (developers?)"/"bots" specifically.
    2. In fact, if AIs remain rather replicable/if we have many competing instances: scarcity rents will be in raw factors (e.g. ores and/or land) rather than in the algorithms used to process them
  2. UBI to the people.
  3. International tax (and migration) coordination as essential.
    1. Else, especially if it's perfectly mobile AIs that earn the scarcity rents, we end up with one or a few tax havens that amass & keep the wealth to themselves
    2. If you have good international coordination and can track revenues well, you may use very high tax rates, and correspondingly share a very high fraction of global value added with the population.
  4. If specifically the world economy comes to be dominated by platform economies, make sure we deal with that properly, ensuring there's competition instead of lock-in monopoly
    1. I.e. if, say, we'd all want to live in metaverses, avoid everyone being forced to live in Meta's instead of choosing freely among competing metaverses.

Risks include:

  1. Expect the geographic revenue distribution to look foreign to us today, and potentially more unequal, with entire lands contributing zero net revenue-earning value added
    1. Maybe ores (and/or some types of land) will capture the dominant share of value added, no longer the educated populations
    2. Maybe instead it's a monopoly or oligopoly, say with huge shares in Silicon Valley and/or its Chinese counterpart or what have you
    3. Inequality might exceed today's: today poor people can become more attractive by offering cheap labor; tomorrow, people deprived of valuable (i) ores or so, or (ii) specific scarcity-rent-earning AI capabilities, may be able to contribute zero, so have zero raw earnings
  2. Our rent-seeking economic lobbies, who successfully install their agents as top policy-makers and lead us to vote for antisocial things, will have an ever stronger incentive to keep rents for themselves. Stylized example: we'll elect the supposedly anti-immigration populist whose main deed is to make sure firms don't pay high enough taxes
  3. You can more easily land-grab than people-grab by force, so we may expect military land conquest to become more of a thing than in the post-war decades, when minds seemed the most valuable asset
  4. Human psychology. Dunno what happens to societies with no work (though I guess we're more malleable, more able to evolve into a society that can cope with it, than some people think, tbc)
  5. Trade unions and the like trying to keep their jobs somehow, finding pseudo-justifications for it, so the rest of society lets them do that.

 

B. Specifically to your following point:

I don't think the math works out if / when AI companies dominate the economy, since they'll capture more and more of the economy unless tax rates are high enough that everyone else receives more through UBI than they're paying the AI companies.

Imagine it's really at the AI companies where the scarcity rents, i.e. profits, occur (as mentioned, that's not at all clear). Imagine for simplicity all humans still want TVs and cars, maybe plus metaverses, and AI requires Nvidia cards. By scenario definition, AI produces everything; since in this example we assume it's not the ores that earn the scarcity rents, and the AIs are powerful enough to produce stuff from raw earth, we don't explicitly track intermediate goods other than the Nvidia cards the AIs also produce. Output is thus:

AI output = 100 TVs, 100 cars, 100 Nvidia cards, 100 digital metaverses, say in $bn.

Taxes = profit tax = 50% (could instead call it an income tax for AI owners; in reality it would all be a bit more complex, but overall it doesn't matter much).

AI profit = 300 (= all output minus the Nvidia cards)
People thus get $150bn as UBI; AI owners get $150bn as distributed AI profit after taxes
People consume 50 TVs, 50 cars, 50 digital metaverses
AI owners also consume 50 TVs, 50 cars, 50 digital metaverses

So you have a 'normal' circular economy that works. Not entirely normal, e.g. we have simplified so that the AI requires not only no labor but also no raw resources (or none whose scarcity rent is captured by somebody else). You can easily extend it to more complex cases.
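
A minimal sketch of this toy circular flow (all figures in $bn, taken from the example above; the variable names are mine):

```python
# Sanity check of the toy AI-economy example above (all figures in $bn).
output = {"TVs": 100, "cars": 100, "nvidia_cards": 100, "metaverses": 100}
intermediate = {"nvidia_cards"}  # consumed by the AIs themselves

gross_output = sum(output.values())                              # 400
ai_profit = gross_output - sum(output[g] for g in intermediate)  # 300

tax_rate = 0.5
ubi_to_people = tax_rate * ai_profit          # 150, paid out as UBI
after_tax_profit = ai_profit - ubi_to_people  # 150, to the AI owners

# Value of final goods available to households (people + AI owners):
final_goods = gross_output - sum(output[g] for g in intermediate)  # 300

# Purchasing power exactly matches the value of final goods,
# so the circular economy clears:
assert ubi_to_people + after_tax_profit == final_goods
print(ubi_to_people, after_tax_profit, final_goods)
```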

In reality, of course, output will be adjusted, e.g. with different goods the rich like to consume instead of thousands of TVs per rich person, as happens already today in many forms; what the rich like to do with the wealth remains to be seen. Maybe fly around (real) space. Maybe get better metaverses. Or employ lots of machines to improve their body cells.

 

C. Btw, the "we'll just find other jobs" imho is indeed overrated, and I think the bias, esp. among economists, can be very easily explained when looking at history (where these economists had been spot on) yet realizing, that in future, machines will not anymore augment brains but replace them instead.

I find things like the "Gambling Self-Exclusion Schemes" of multiple countries (thanks for the hint!) indeed a good example, corroborating that at least for some of the most egregious examples of addictive goods unleashed on the population, some action in the suggested direction is technically & politically feasible - how successful remains tbc; looking forward to looking into it in more detail!

Depends on what we call super-dumb - or where we draw the system borders of "society". I include the special interest groups as part of our society; they are the small wheels in it gearing us towards the 'dumb' outcome in the aggregate. But yes, the problem is simply not trivial, smart/dumb is too relative, so my term was not useful (just expressing my frustration with our policies & thinking, which your nice post reminded me of).
