Assumption 1: Most of us are not saints.
Assumption 2: AI safety is a public good.[1]
[..simple standard incentives..]
Implication: The AI safety researcher, eventually finding himself rather unlikely to be individually pivotal on either side, may quite 'rationally'[2] switch to ‘standard’ AI work.[3]
So: a rather simple explanation seems to suffice to make sense of the basic big-picture pattern you describe.
That doesn't mean the inner tension you point out isn't interesting. But I don't think very deep psychological factors are needed to explain the g...
It's called a Windfall Tax.
Random examples:
- VOXEU/CEPR: Energy costs: Views of leading economists on windfall taxes and consumer price caps
- Reuters: Windfall tax mechanisms on energy companies across Europe
Especially with the 2022 Ukraine-related energy prices, the notion's popularity spiked as well.
Seems to me also a very neat way to deal with supernormal short-term profits due to market price spikes, in cases where supply is extremely inelastic.
I guess, and some commentaries suggest, that in actual implementation, with complex firm/financial structures etc., and with actual clumsy...
It's a pretty straightforward modification of the Caplan thruster. You scoop up bits of the sun with very strong magnetic fields, but rather than fusing it and using it to move a star, you cool most of it (firing some back at very high velocity to balance things momentum-wise) and keep the matter you extract (or fuse some if you need quick energy). There's even a video on it! Skip to 4:20 for the relevant bit.
This is interesting! And given you generously leave it rather open as to how to interpret it, I propose we think the other way round from how people usually tend to when seeing such results:
I think there's not even the slightest hint here of any beyond-pure-base-physics stuff going on in LLMs, nor even of any type of
phenomenon that resists [conventional] explanation
Instead, this merely reveals our limitations in tracking (or 'empathizing with') well enough the statistics within the machine. We know we have just programmed and bit-by-bit trained in...
Indeed. I thought it was relatively clear that with "buy" I meant to mostly focus on things we typically explicitly buy with money (for brevity, even for these I simplified a lot, omitting that shops are often not allowed to open 24/7, that some things like alcohol aren't sold to people of all ages, in some countries aren't sold in every type of shop, and/or aren't sold at all times).
Although I don't want to say that exploring how to port the core thought to broader categories of exchanges/relationships couldn't bring interesting extra insights.
I cannot say I've thought about it deeply enough, but I've thought and written a bit about UBI, taxation/tax competition and so on. My picture so far:
A. Taxation & UBI would really be natural and workable, if we were choosing the right policies (though I have limited hope that our policy making and modern democracy are up to the task, especially with the international coordination required). A few subtleties that come to mind:
I find things like the "Gambling Self-Exclusion Schemes" of multiple countries - thanks for the hint - indeed a good example, corroborating that at least in some of the most egregious cases of addictive goods unleashed on the population, some action in the suggested direction is technically & politically feasible - how successful, tbc; looking forward to looking into it in more detail!
Depends on what we call super-dumb - or where we draw the system borders of "society". I include the special interest groups as part of our society; they are the small wheels in it gearing us towards the 'dumb' outcome in the aggregate. But yes, the problem is simply not trivial, smart/dumb is too relative, so my term was not useful (I was just expressing my frustration with our policies & thinking, which your nice post reminded me of).
This is a good topic for exploration, though I don't have much belief that there's any feasible implementation "at a societal level".
Fair. I instead have the impression that I see plenty of avenues. A bit embarrassingly, they are so far not sufficiently structured in my head, require more detailed working out, exploring failure modes and ways to address them in detail; plus they might require significant restructuring of the relevant markets, and, worst, I have insufficient time to explore them in much detail right now. But yes, it would...
Spot on! Let's zoom out and see that we have (i) created a never-before-seen food industry that could feed us healthily at unprecedentedly low cost, yet (ii) we end up systematically killing ourselves with all that. We're super dumb as a society to carry on as if nothing, nothing on a societal level, had to be done.
Btw, imho a more interesting, but not really much more challenging, extension of your case is if what the orphans produce is actually very valuable overall - say, creating utility of $500/day for ultimate consumers - while mere market forces, competition between the firms or businessmen, mean market prices for the goods produced still end up at only 50.01c/day, and the labor-market-clearing wage for the destitute orphans is 50c/day.
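To make the split explicit, a quick back-of-the-envelope using only the illustrative numbers assumed above (per orphan and day):

$$
\underbrace{\$500 - \$0.5001}_{\text{consumer surplus}\ \approx\ \$499.50}
\qquad
\underbrace{\$0.5001 - \$0.50}_{\text{firm margin}\ =\ \$0.0001}
\qquad
\underbrace{\$0.50}_{\text{wage paid to the orphan}}
$$

Competition hands essentially the entire surplus to consumers, while both the firms and the orphans are left with crumbs.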
Even in this situation, the commonsense notion of 'exploitation' is straightforwardly applicable and a more or less intelligible concept:
...If there's a situation where a bunch of poor orphans are employed for 50c per grueling 16 hour work day plus room and board, then the fact that it might be better than starving to death on the street doesn't mean it's as great as we might wish for them. We might be sad about that, and wish they weren't forced to take such a deal. Does that make it "exploitation?" in the mind of a lot of people, yeah. Because a lot of people never make it further than "I want them to have a better deal, so you have to give it to them" -- even if it turns out they're only cr
If a rich person wants to help the poor, it will be more effective to simply help the poor -- i.e. with some of their own resources. Trying to distort the market leads to smaller gains from trade which could be used to help the poor. So far so good.
I think we thus agree on at least one of the main points.
Regarding
"Should" is a red flag word
I did not mean to invoke a particularly heavy, philosophically absolutist 'ought' or anything like that with my "should". It was instead simply a sloppy shortcut - and you're right to call that out - to say the banal: the ...
Your post introduces a thoughtful definition of exploitation, but I don’t think narrowing the definition is necessary. The common understanding — say "gaining disproportionate benefit from someone’s work because their alternatives are poor" or so — is already clear and widely accepted. The real confusion lies in how exploitation can coexist with voluntary, mutually beneficial trade. This coexistence is entirely natural and doesn’t require resolution — they are simply two different questions. Yet neither Econ 101 nor its critics seem to recognize this.
Econ ...
Would you personally answer "Should we be concerned about eating too much soy?" with "Nope, definitely not", or do you just find it a reasonable gamble to eat the very large quantity of soy you describe?
Btw, thanks a lot for the post; MANY parallels with my own past as a more-serious-but-careless vegan, until my body showed clear signs of issues that I recognized only late, as I'd never have believed anyone that a healthy vegan diet is that tricky.
Not all forms of mirror biology would even need to be restricted. For instance, there are potential uses for mirror proteins, and those can be safely engineered in the lab. The only dangerous technologies are the creation of full mirror cells, and certain enabling technologies which could easily lead to that (such as the creation of a full mirror genome or key components of a proteome).
Once we get used to creating and dealing with mirror proteins, and once we get used to designing & building cells - I don't know when that happens - maybe adding 1+1 togeth...
Taking what you write as an excuse to nerd out a bit about Hyperbolic Discounting
One way to paraphrase, in particular, your ice cream example:
Hyperbolic discounting - the habit of valuing this moment a lot while abruptly (rather than smoothly, exponentially) discounting everything coming even just a short while later - may in a technical sense be 'time inconsistent', but it's misguided to call it 'irrational' in the common usage of the term: my current self may simply care about itself distinctly more than about the future selves, even if some of these future selves are fort...
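A minimal numeric sketch of the 'time inconsistent but not obviously irrational' point (the discount functions and numbers are my own illustrative picks, not from the post): a hyperbolic discounter ranks the same pair of rewards differently depending on when the choice is made, while an exponential discounter never reverses.

```python
# Hyperbolic vs. exponential discounting: preference reversal as a choice comes close.
# Illustrative parameter values only.

def hyperbolic(amount, delay_days, k=1.0):
    return amount / (1.0 + k * delay_days)

def exponential(amount, delay_days, delta=0.95):
    return amount * delta ** delay_days

for name, value in (("hyperbolic", hyperbolic), ("exponential", exponential)):
    prefers_small_now   = value(100, 0)  > value(110, 1)   # $100 today vs $110 tomorrow
    prefers_small_later = value(100, 30) > value(110, 31)  # the same pair, 30 days out
    print(f"{name}: takes the $100 when it's immediate: {prefers_small_now}; "
          f"still prefers the $100 when both options are distant: {prefers_small_later}")
```

The hyperbolic agent grabs the smaller reward only when 'now' is on the table and would happily wait when both options are far away; that flip is all that 'time inconsistency' formally means, and whether to also call it 'irrational' is exactly the question above.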
Spurious correlation here, big time, imho.
Give me the natural content of a field and I bet I can easily predict whether it will or won't have a replication crisis, without knowing the exact type of students it attracts.
I think it's mostly that the fields where bad science can be sexy and less trivial/unambiguous to check, or those where you can make up/sell sexy results independently of their grounding, may, for whatever reason, also be those that attract the less logically-minded students.
I agree, though, about the mob overwhelming the smart outliers; I just think how much that mob creates a replication crisis depends at least in large part on the intrinsic nature of the field rather than on the exact IQs.
I wouldn't automatically abolish all requirements; maybe I'm not good enough at searching, but to the degree I'm not an outlier:
This seems underappreciated here.
Flower safety isn't a thing. But having t...
Great that you bring up Hoffman; I think he deserves serious pushback.
He proves exactly two things:
Before I explain thi...
Musings about whether we should have a bit more sympathy for skepticism about price gouging, despite it all. Admittedly with no particular evidence to point to; keen to see whether my basic skepticism can easily be dismissed.
...Scott Sumner points out that customers very much prefer ridesharing services that price gouge and have flexible pricing to taxis that have fixed prices, and very much appreciate being able to get a car on demand at all times. He makes the case that liking price gouging and liking the availability of rides during high demand are two sides o
I actually appreciate the overall take (although I'm not sure how many wouldn't have found most of it simply common sense anyway), but: a bit more caution with the stats would have been great.
Agree that cued FNs would often be a useful innovation I've not yet seen. Nevertheless, this statement
So, if you wonder whether you'd care for the content of a note, you have to look at the note, switching to the bottom of the page and breaking your focus. Thus the notion that footnotes are optional is an illusion.
ends with a false conclusion: most footnotes in the texts I have read were optional, and I'm convinced I'm happy to not have read most of them. FNs, already as they are, are thus indeed highly "optional" and potentially very helpful - in many, maybe most, cases, for many, maybe most, readers.
A detail: I find the "much harder" in the following unnecessarily strong, or maybe also simply the 'moral claim' yes/no too binary (all emphases added):
...If the rich generally do not have a moral claim to their riches, then the only justification needed to redistribute is a good affirmative reason to do so: perhaps that the total welfare of society would improve [..]
If one believes that they generally do have moral claim, then redistributive taxation becomes much harder to justify: we need to argue either that there is a sufficiently strong affirmative rea
The core claim in my post is that the 'instantaneous' mind (with its preferences etc., see post) is - if we look closely and don't forget to keep a healthy dose of skepticism about our intuitions about our own mind/self - sufficient to make sense of what we actually observe. And given that this instantaneous mind with its memories and preferences is the stuff we can most directly observe, without much surprise in it, I struggle to find any competing theory as simple or 'simpler' and therefore more compelling (Occam's razor), as I meant to explain in the post.
As I...
I'm sorry, but I find you're nitpicking on words out of context rather than engaging with what I mean. Maybe my English is imperfect, but I think it's not that unreadable:
A)
The word "just" in the sense used here is always a danger sign. "X is just Y" means "X is Y and is not a certain other thing Z", but without stating the Z.
... 'just' might sometimes be used in such an abbreviated way, but here, the second part of my very sentence readily says what I mean by the 'just' (see "w/o meaning you're ...").
B)
...You quoting me: "It is equally all too natural for me to
Thanks! In particular also for your more-kind-than-warranted hint at your original, without accusing me of theft!! Especially as I now realize (or maybe realize again) that your sleep-clone-swap example, which I indeed love as a perfectly concise illustration, had also come along with at least an "I guess"-caveated "it is subjective", which in some sense already included a core part of the conclusion/claim here.
I should have also picked up your 'stream-of-consciousness continuity' vs. 'substrate/matter continuity' terminology. Finally, the Ship of Theseus...
Btw, regarding:
it would not seem to have made any difference and was just a philosophical recreation
Mind, in this discussion about cloning thought experiments I'd find it natural that there are not many currently tangible consequences, even if we did find a satisfying answer to some of the puzzling questions around that topic.
That said, I guess I'm not the only one here with a keen intrinsic interest in understanding the nature of self even absent tangible & direct implications, or if these implications may remain rather subtle at this very moment.
I obviously still care for tomorrow, as is perfectly in line with the theory.
I take you to imply that, under the hypothesis emphasized here - that the self is not a unified long-term self the way we tend to imagine - one would have to logically conclude something like: "why care then, even about 'my' own future?!". This is absolutely not implied:
The questions around which we can get "resolving peace" (see context above!) refer to things like: If someone came along proposing to clone/transmit/... you, what to do? We may of course find peace about that question (whi...
The original mistake is that feeling of a "carrier for identity across time" - for which upon closer inspection we find no evidence, and which we thus have to let go of. Once you realize that you can explain all we observe and all you feel with merely, at any given time, your current mind, including its memories, and aspirations for the future, but without any further "carrier for identity", i.e. without any super-material valuable extra soul, there is resolving peace about this question.
Good decisions need to be based on correct beliefs as well as values.
Yes, but here the right belief is the realization that what connects you to what we traditionally call your future "self" is nothing supernatural, i.e. no super-material unified continuous self of extra value: we don't have any hint of such stuff; we can explain all too well your feelings about such things as fancy brain instincts, akin to seeing the objects in a 24 FPS movie as 'moving' (not to say 'alive'); and we know all too well that we could theoretically make you feel you've experienced your p...
Oh, it's much worse. It is epistemic relativism. You are saying that there is no one true answer to the question and we are free to trust whatever intuitions we have. And you do not provide any particular reason for this state of affairs.
Nice challenge! There's no "epistemic relativism" here, even if I see where you're coming from.
First recall the broader altruism analogy: Would you say it's epistemic relativism if I tell you that you can simply look inside yourself and freely see how much you care about, how closely connected you feel to, people in a faraway co...
As I write, call it a play on words, a question of naming terms, if you will. But then - and this is just a proposition plus a hypothesis - try to provide a reasonable way to objectively define what one 'ought' to care about in cloning scenarios, and contemplate all sorts of traditionally puzzling thought experiments about neuron replacements and what have you, and you'll inevitably end up with hand-waving, stating arbitrary rules that may seem to work (for many, anyhow) in one thought experiment, just to be blatantly broken by the next experiment... Do th...
A very interesting question to me, coming from the perspective I outline in the post - sorry, a bit of a lengthy answer again:
According to the basic take from the post, we're actually more or less in your universe, except that the self is even more ephemeral than you posit. And as I argue, it's relative, i.e. up to you, which future self you end up caring about in any nontrivial experiment.
Trying to re-frame your experiment from that background as best as I can, I imagine a person having an inclination to think of 'herself' (in sloppy speak; more precisely: she cares about.....
Yep.
And the crux is: the exceptional one refusing, saying "this won't be me, I dread the future me* being killed and replaced by that one", is not objectively wrong. It might quickly become highly impractical for 'him'** not to follow the trend, but if his 'self'-empathy is focused only on his own direct physical successors, it is in some sense actually killing him if we put him in the machine. We kill him, and we create a person that's not him in the relevant sense, as he currently does not accept the successor; if his empathic weight is 100% on his own d...
All agreeable. Note, this is perfectly compatible with the relativity theory I propose, i.e. with the 'should' being entirely up to your intuition only. And, actually, the relativity theory, I'd argue, is the only way to settle debates you invoke, or, say, to give you peace of mind when facing these risky uploading situations.
Say you can destructively upload overnight, with 100% reliability that your digital clone will be in a nicely replicated digital world for 80 years (let's for simplicity assume for now that the uploadee can be expected to be a consciousness co...
The point is, "you" are exactly the following and nothing else: you're (i) your mind right now, (ii) including its memory, and (iii) its forward-looking care, hopes, and dreams for, in particular, its 'natural' successor. Now, in usual situations, the 'natural successor' is obvious, and you cannot even think of anything else: it's the future minds that inhabit your body, your brain; that's why you tend to call the whole series a unified 'you' in common speech.
Now, with cloning, if you absolutely care for a particular clone, then, for every purpose, you can exte...
I wonder whether, if sheer land mass really were the single dominant bottleneck for whatever your aims, you could potentially find a particular gov't or population from whom you'd buy the km2 you desire - say, for a few $bn - as new sovereign land for you, as a source of potentially (i) even cheaper and (ii) more robust land to reign over?
Difficult to overstate the role of signaling as a force in human thinking, indeed; a few random examples:
I read this as saying we’re somehow not ‘true’ to ourselves as we’re doing stuff nature didn’t mean us to do when it originally implanted our emotions.
Indeed, we might look ridiculous from the outside, but who’s there to judge - imho, nature is no authority.
One consequence that seems to flow from this, and which I personally find morally counter-intuitive, and don't actually believe, but cannot logically dismiss, is that if you're going to lie you have a moral obligation to not get found out. This way, the damage of your lie is at least limited to its direct effects.
With widespread information sharing, the 'can't fool all the people all the time' logic extends to this attempt to lie without consequences: we'll learn that people 'hide it well but still lie a lot', so we'll be even more suspicious in any situation, undoing the alleged externality-reducing effect of the 'not get found out' idea (in any realistic world with imperfect hiding, anyway).
Thanks for the useful overview! Tiny point:
It is also true that Israel has often been more aggressive and warmongering than it needs to be, but alas the same could be said for most countries. Let’s take Israel’s most pointless and least justified war, the Lebanon war. Has the USA ever invaded a foreign country because it provided a safe haven for terrorist attacks against them? [...] Yes - Afghanistan. Has it ever invaded a country for what turns out to be spurious reasons while lying to its populace about the necessity? Yes [... and so on]
Comparing Israel...
Might be worth adding your blog post's subtitle or so, to hint at what Georgism is about (assuming I'm not an exception in not having known "Georgism" is the name for the idea of shifting taxation from labor etc. to natural resources).
Worth adding imho: Feels like a most natural way to do taxation in a world with jobs automated away.
Three related effects/terms:
1. The Malthusian Trap, as maybe the most famous example.
2. In energy/environment we tend to refer to such effects as
No reason to believe safety benefits are typically offset 1:1. Standard preference structures would suggest the original effect may often be only partly offset, or in other cases even backfire by being more than offset. And net utility for the users of a safety-improved tool might increase in the end in either case.
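A toy sketch of why both outcomes are possible (the model and all numbers are my own illustration, not from any source): the user picks an activity level x to maximize U(x) = benefit(x) - p*HARM*x, where p is the per-unit accident probability; halving p raises the chosen x, and whether total expected accidents fall (partial offset) or rise (backfire) depends on how quickly the marginal benefit of the activity falls off.

```python
# Toy risk-compensation model: optimal activity level under two benefit curves,
# before and after a safety improvement that halves the per-unit accident risk.
# Purely illustrative parameter values.
import numpy as np

HARM = 10.0                              # cost of one accident
X = np.linspace(0.0, 20.0, 200_001)      # grid of activity levels

benefits = {
    "quadratic benefit (partial offset)": lambda x: x - 0.5 * x**2,  # marginal benefit falls fast
    "log benefit (backfire)":             lambda x: np.log1p(x),     # marginal benefit falls slowly
}

for name, b in benefits.items():
    for p in (0.02, 0.01):               # the safety improvement halves per-unit risk
        u = b(X) - p * HARM * X
        i = int(np.argmax(u))
        print(f"{name} | p={p}: activity={X[i]:.2f}, "
              f"expected accidents={p * X[i]:.4f}, utility={u[i]:.3f}")
```

In the quadratic case the halved per-unit risk is only partly eaten up by the extra activity (expected accidents fall, just by less than half); in the log case they actually rise; and the user's utility goes up in both cases, matching the point above.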
This is provably wrong. This route will never offer any test of consciousness:
Suppose for a second that xAI in 2027, a very large LLM, will stun you by uttering C, where C = more profound musings about your and her own consciousness than you've ever even imagined...