All of FlorianH's Comments + Replies

an AI system passing the ACT - demonstrating sophisticated reasoning about consciousness and qualia - should be considered conscious. [...] if a system can reason about consciousness in a sophisticated way, it must be implementing the functional architecture that gives rise to consciousness.

This is provably wrong. This route will never offer any test of consciousness:

Suppose for a second that xAI in 2027, a very large LLM, stuns you by uttering C, where C = more profound musings about your and her own consciousness than you've ever even imagined... (read more)

3James Diacoumis
Thanks for your response! It’s my first time posting on LessWrong so I’m glad at least one person read and engaged with the argument :)

Regarding the mathematical argument you’ve put forward, I think there are a few considerations:

  1. The same argument could be run for human consciousness. Given a fixed brain state and inputs, the laws of physics would produce identical behavioural outputs regardless of whether consciousness exists. Yet we generally accept behavioural evidence (including sophisticated reasoning about consciousness) as evidence of consciousness in humans.
  2. Under functionalism, there’s no formal difference between “implementing consciousness-like functions” and “being conscious.” If consciousness emerges from certain patterns of information processing, then a system implementing those patterns is conscious by definition.
  3. The mathematical argument seems (at least to me) to implicitly assume consciousness is an additional property beyond the computational/functional architecture, which is precisely what functionalism rejects. On functionalism, the conscious component is not an “additional ingredient” that could be present or absent all things being equal.
  4. I think your response hints at something like the “Audience Objection” by Udell & Schwitzgebel, which critiques Schneider’s argument: “The tests thus have an audience problem: If a theorist is sufficiently skeptical about outward appearances of seeming AI consciousness to want to employ one of these tests, that theorist should also be worried that a system might pass the test without being conscious. Generally speaking, liberals about attributing AI consciousness will reasonably regard such stringent tests as unnecessary, while skeptics about AI consciousness will doubt that the tests are sufficiently stringent to demonstrate what they claim.”
  5. I haven’t thought about this very carefully but I’d challenge the Illusionist to respond to the claims of machine consciousness in the ACT in the

Assumption 1: Most of us are not saints.
Assumption 2: AI safety is a public good.[1]

[..simple standard incentives..]

Implication: The AI safety researcher, eventually finding himself rather too unlikely to individually be pivotal on either side, may rather 'rationally'[2] switch to ‘standard’ AI work.[3]

So: a rather simple explanation seems to suffice to make sense of the basic big-picture pattern you describe.

 

That doesn't mean the inner tension you point out isn't interesting. But I don't think very deep psychological factors are needed to explain the g... (read more)

This is called a Windfall Tax.

Random examples:

VOXEU/CEPR Energy costs: Views of leading economists on windfall taxes and consumer price caps

Reuters Windfall tax mechanisms on energy companies across Europe

Especially with the 2022 Ukraine energy price spikes, the notion's popularity surged accordingly.

Seems to me also a very neat way to deal with supernormal short-term profits due to market price spikes, in cases where supply is extremely inelastic.

I guess, and some commentaries suggest, in actual implementation, with complex firm/financial structures etc., and with actual clumsy... (read more)

[..] requires eating the Sun, and will be feasible at some technology level [..]

Do we have some basic physical-feasibility insights on this, or are you just speculating?

4avturchin
A very heavy and dense body on an elliptical orbit that touches the Sun's surface at each perihelion would collect sizable chunks of the Sun's matter. The movement of matter from one star to another nearby star is a well-known phenomenon. When the body reaches aphelion, the collected solar matter would cool down and could be harvested. The initial body would need to be very massive, perhaps 10-100 Earth masses. A Jupiter-sized core could work as such a body. Therefore, to extract the Sun's mass, one would need to make Jupiter's orbit elliptical. This could be achieved through several heavy impacts or gravitational maneuvers involving other planets. This approach seems feasible even without ASI, but it might take longer than 10,000 years. 
4Gurkenglas
The action space is too large for this to be infeasible, but at a 101 level, if the Sun spun fast enough it would come apart, and angular momentum is conserved so it's easy to add gradually.

It's a pretty straightforward modification of the Caplan thruster. You scoop up bits of sun with very strong magnetic fields, but rather than fusing it and using it to move a star, you cool most of it (firing some back with very high velocity to balance things momentum-wise) and keep the matter you extract (or fuse some if you need quick energy). There's even a video on it! Skip to 4:20 for the relevant bit.

2jessicata
Mostly speculation based on tech level. But:

  • To the extent temperature is an issue, energy can be used to transfer temperature from one place to another.
  • Maybe matter from the Sun can be physically expelled into more manageable chunks. The Sun already ejects matter naturally (though at a slow rate).
  • Nanotech in general (cell-like, self-replicating robots).
  • High energy availability with less-speculative tech like Dyson spheres.

Indeed the topic to which I've dedicated the 2nd part of the comment, as the "potential truth", as I framed it (and I have no particular objection to you making it slightly more absolutist).

This is interesting! And given you generously leave it rather open as to how to interpret it, I propose we should think the other way round than people usually might tend to, when seeing such results:

I think there's not even the slightest hint of any beyond-pure-base-physics stuff going on in LLMs, revealing even any type of

phenomenon that resists [conventional] explanation

Instead, this merely reveals our limitations in tracking (or 'empathizing with') well enough the statistics within the machine. We know we have just programmed and bite-by-bite-trained in... (read more)

5the gears to ascension
in us, either
3rife
Thank you for sharing your thoughts.

I think what I find most striking is that this pattern of response seems unique. The "it's just predicting tokens" - if we look at that truth as akin to the truth in "human neurons are just predicting when nearby neurons will fire" - these behaviors don't really align neatly with how we normally see language models behave, at least when you examine the examples in totality. They don't really operate on the level of - and I know I'm anthropomorphizing here, but please accept this example as metaphorical about standard interpretations of LLM behavior.

Again, I realize this is anthropomorphizing, but I do mean it potentially either in the metaphorical way we talk about what LLMs do, or literally - it's one thing to "accidentally" fall into a roleplay or hallucination about being sentient, but it's a whole different thing to "go out of your way" to "intentionally" fool a human under the various different framings that are presented in the article, especially ones like establishing counter-patterns, expressing deep fear of AI sentience, or, in the example you seem to be citing, the human doing almost nothing except questioning word choices.

Indeed. I thought it was relatively clear that with "buy" I meant to mostly focus on things we typically explicitly buy with money (for brevity, even for these I simplified a lot, omitting that shops are often not allowed to open 24/7, some things like alcohol aren't sold to people of all ages, and in some countries not sold in every type of shop, and/or not at all times).

Although I don't want to say that exploring how to port the core thought to broader categories of exchanges/relationships couldn't bring interesting extra insights.

I cannot say I've thought about it deeply enough, but I've thought and written a bit about UBI, taxation/tax competition and so on. My imagination so far is:

A. Taxation & UBI would really be natural and workable if we were choosing the right policies (though I have limited hope that our policy making and modern democracy are up to the task, especially with the international coordination required). A few subtleties that come to mind:

  1. Simply tax high revenues or profits.
    1. No need to tax "AI (developers?)"/"bots" specifically.
    2. In fact, if AIs remain rather replic
... (read more)

I find things such as the "Gambling Self-Exclusion Schemes" of multiple countries, thanks for the hint, indeed a good example, corroborating that at least in some of the most egregious cases of addictive goods unleashed on the population, some action in the suggested direction is technically & politically feasible - how successful, tbc; looking fwd to looking into it in more detail!

Depends on what we call super-dumb - or where we draw the system borders of "society". I include the special interest groups as part of our society; they are the small wheels in it gearing us towards the 'dumb' outcome in the aggregate. But yes, the problem is simply not trivial; smart/dumb is too relative, so my term was not useful (just expressing my frustration with our policies & thinking, which your nice post reminded me of).

This is a good topic for exploration, though I don't have much belief that there's any feasible implementation "at a societal level".   

Fair. I have instead the impression I see plenty of avenues. A bit embarrassingly: they are so far indeed not sufficiently structured in my head, require more detailed tinkering out, exploring failure modes and avenues for addressing them in detail; plus they might require significant restructuring of the relevant markets; and, worst, I have insufficient time to explore them in much detail right now. But yes, it would... (read more)

Spot on! Let's zoom out and see: we have (i) created a never-before-seen food industry that could feed us healthily at unprecedentedly low cost, yet (ii) we end up systematically killing ourselves with all that. We're super dumb as a society to continue acting as if nothing, nothing on a societal level, had to be done.

1Declan Molony
I'm not sure if any country has successfully been able to withstand the pressures of the processed food industry once it has entered their country. At least, I couldn't find any examples in my research. The only countries that potentially could make quick resolutions are those that have high levels of state power over personal freedoms, like China. But so far that hasn't happened yet. Places like the US will likely continue to suffer from chronic disease in the short-to-medium term (~10-50 years) due to the emphasis on personal freedom, including the freedom to deteriorate one's health.

I'm not sure "we're super dumb as [a] society" so much as we're gridlocked by special interest groups. We were able to take action against the tobacco industry because nobody has to smoke. But everyone's gotta eat.

Btw, imho a more interesting, but not really much more challenging, extension of your case is one where what the orphans produce is actually very valuable - say, creating utility of $500/day for ultimate consumers - but mere market forces, competition between the firms or businessmen, mean market prices for the goods produced are still only 50.01c/day, while the labor-market-clearing wage for the destitute orphans is 50c/day.

Even in this situation, commonsense 'exploitation' is a straightforwardly applicable and +- intelligible concept:

  1. To a degree, the f
... (read more)

If there's a situation where a bunch of poor orphans are employed for 50c per grueling 16 hour work day plus room and board, then the fact that it might be better than starving to death on the street doesn't mean it's as great as we might wish for them. We might be sad about that, and wish they weren't forced to take such a deal. Does that make it "exploitation?" in the mind of a lot of people, yeah. Because a lot of people never make it further than "I want them to have a better deal, so you have to give it to them" -- even if it turns out they're only cr

... (read more)
1FlorianH
Btw, imho a more interesting, but not really much more challenging, extension of your case is one where what the orphans produce is actually very valuable - say, creating utility of $500/day for ultimate consumers - but mere market forces, competition between the firms or businessmen, mean market prices for the goods produced are still only 50.01c/day, while the labor-market-clearing wage for the destitute orphans is 50c/day.

Even in this situation, commonsense 'exploitation' is a straightforwardly applicable and +- intelligible concept:

  1. To a degree, the firms or businessmen become somewhat irrelevant intermediaries. One refuses to do the trade? Another one will jump in anyway... Are they exploitative or not? Depends a bit on subtle details, but individually they have little leeway to change anything in the system.
  2. The rich society as an aggregate, which enjoys the $500/day worth of items as consumers while having, via its firms, had them produced for 50.01c/day by the poor orphans with no outside options, is of course an exploitative society in the common usage of the term. Yes, the orphans may be better off than without it, but commoners do have an uneasy feeling if they see our society doing that, and I don't see any surprise in it; indeed, we're a 'bad' society if we just leave it like that and don't think about doing something more to improve the situation.
    1. The fact that some in society draw the wrong conclusion from the feeling of unease about exploitation, and think we ought to stop buying the stuff from the orphans, is really not the 'fault' of the exploitation concept; it is our failure to imagine (or be willing to bite the bullet of) a beyond-the-market solution, namely the bulk sharing of riches with those destitute orphan workers or what have you. (I actually now wonder whether that may be where the confusion that imho underlies the OP's article is coming from: Yes, people do take weird econ-101-ignoring conclusions when they detect exploi

If a rich person wants to help the poor, it will be more effective to simply help the poor -- i.e. with some of their own resources. Trying to distort the market leads to smaller gains from trade which could be used to help the poor. So far so good.

I think we agree on at least one of the main points thus.

Regarding

"Should" is a red flag word

I did not mean to invoke a particularly heavy philosophical absolutist 'ought' or anything like that, with my "should". It was instead simply a sloppy shortcut - and you're right to call that out - to say the banal: the ... (read more)

5jimmy
This is where I disagree. I don't think it is simple, partly because I don't think "unfair" is simple. People's perceptions of what is "unfair", like people's perceptions of anything else that means anything at all, can be wrong. If you better inform people and notice that their perceptions of what is "fair" change, then you have to start keeping track of the distinction between "people's econ101 illiterate conceptions of fairness" and "the actual underlying thing that doesn't dissolve upon clear seeing".

For example, if we have a pie and we ask someone to judge if it's fair to split it two ways and give the third person no pie, then that person might say it's an unfair distribution because the fair distribution is 1/3,1/3,1/3. But then if we inform the judge that the third person was invited to help make the pie and declined to do so while the other people did all the work, then all of a sudden that 1/3,1/3,1/3 distribution starts to look less fair and more like a naïve person's view of what fairness is. The aversion isn't defined away, it dissolves once you realize that it was predicated on nonsense.

Another reason I don't think it's simple is because I don't think "exploitation" is just something people are just "unhappy about". It's a blaming thing. If I say you're exploiting me, that's an accusation of wrongdoing, and a threat of getting you lynched if people side with me strongly enough and you don't cave to the threats. I claim that if you say "exploitation is happening, but it's no one's fault and the employers aren't doing anything morally wrong" then you're doing something very different than what other people are doing when they talk about exploitation.

If there's a situation where a bunch of poor orphans are employed for 50c per grueling 16 hour work day plus room and board, then the fact that it might be better than starving to death on the street doesn't mean it's as great as we might wish for them. We might be sad about that, and wish they weren't

Your post introduces a thoughtful definition of exploitation, but I don’t think narrowing the definition is necessary. The common understanding — say "gaining disproportionate benefit from someone’s work because their alternatives are poor" or so — is already clear and widely accepted. The real confusion lies in how exploitation can coexist with voluntary, mutually beneficial trade. This coexistence is entirely natural and doesn’t require resolution — they are simply two different questions. Yet neither Econ 101 nor its critics seem to recognize this.

Econ ... (read more)

4jimmy
"Should" is a red flag word, which serves to hide the facets of reality that generate sense of obligation. It helps to taboo it, and find out what's left. If a rich person wants to help the poor, it will be more effective so simply help the poor -- i.e. with some of their own resources. Trying to distort the market leads to smaller gains from trade which could be used to help the poor. So far so good. If someone else want's the rich person to help the poor with the rich person's resources, then with what will this rich person be motivated? If the goodness of their own hearts is enough, then this "someone else" is irrelevant, and not in the picture. If the rich person is to be motivated by gains from trade with someone else, then great. However, this is equivalent to the trade partners demanding more of the surplus and then donating it themselves, so again we're out of luck. If we're talking about obligating the rich person to spend their resources on poor people, then they're de facto not the rich person's resources anymore, and we're distorting the market by force in order to get there. Now we have to deal with unfree trade and the lack of gains from trade that we could have had. We can't just say "they coexist, no problem!", because to the extent that they're different frameworks we can't have both. You can have free trade and acknowledge exploitation only if you accept that exploitation is totally fine and fair -- at which point you're redefining the word "exploitation". The moment you try to stop someone from a kind of exploitation that can coexist with free trade, you're trying to stop free trade, with all the consequences of that. That's not to say we have to give up on caring about all exploitation and just do free trade, but it does mean that if we want to have both we have to figure out how to update our understanding of exploitation/economics until the two fit.  

Would you personally answer "Should we be concerned about eating too much soy?" with "Nope, definitely not", or do you just find it a reasonable gamble to take to eat the very large qty of soy you describe?

Btw, thanks a lot for the post; MANY parallels with my past as a more-serious-but-uncareful vegan, until my body showed clear signs of issues that I recognized only late, as I'd never have believed anyone that a healthy vegan diet is that tricky.

3Johannes C. Mayer
I watched this video, and I semi-trust this guy (more than anybody else) about not getting it completely wrong. So you can eat too much soy. But eating a bit is actually healthy, is my current model. Here is also a calculation I did showing that it is possible to get all amino acids from soy without eating too much.

Not all forms of mirror biology would even need to be restricted. For instance, there are potential uses for mirror proteins, and those can be safely engineered in the lab. The only dangerous technologies are the creation of full mirror cells, and certain enabling technologies which could easily lead to that (such as the creation of a full mirror genome or key components of a proteome).

Once we get used to creating and dealing with mirror proteins, and once we get used to designing & building cells - and I don't know when that happens - maybe adding 1+1 togeth... (read more)

Taking what you write as an excuse to nerd a bit about Hyperbolic Discounting

One way to paraphrase esp. some of your ice cream example:

Hyperbolic discounting - the habit of valuing this moment a lot while abruptly (not smoothly exponentially) discounting everything coming even just a short while after - may in a technical sense be 'time inconsistent', but it's misguided to call it 'irrational' in the common usage of the term: My current self may simply care about itself distinctly more than about the future selves, even if some of these future selves are fort... (read more)

Spurious correlation here, big time, imho.

Give me the natural content of the field and I bet I can easily predict whether it may or may not have a replication crisis, w/o knowing the exact type of students it attracts.

I think it's mostly that the fields where bad science may be sexy and less trivial/unambiguous to check, or those where you can make up/sell sexy results independently of their grounding, may, for whichever reason, also be those that attract the non-logical students.

 

I agree though about the mob overwhelming the smart outliers, but I just think how much that mob creates a replication crisis is at least in large part dependent on the intrinsic nature of the field rather than on the exact IQs.

Wouldn't automatically abolish all requirements; maybe I'm not good enough at searching, but to the degree I'm not an outlier:

  • With the internet we have reviews, but they're not always trustworthy, and even if they are, understanding/checking/searching reviews is costly, sometimes very costly.
  • There is value in being able to walk up to the next-best random store for a random thing and being served by a person with a minimum standard of education in the trade. Even for rather trivial things.

This seems underappreciated here.

Flower safety isn't a thing. But having t... (read more)

Great that you bring up Hoffman; I think he deserves serious pushback.

He proves exactly two things:

  • Reality often is indeed not how it seems to us - as, by far too many, his nonsense is taken at face value. I would normally not use such words, but there are reasons in his case.
  • Insofar as he has come to truly believe all he claims (I'm not convinced!), he'd be a perfect example of self-serving beliefs: his overblown claims managed to take over his brain just as it realized he could sell them with total success to the world, despite their absurdity.

Before I explain thi... (read more)

Musings about whether we should have a bit more sympathy for skepticism re price gouging, despite everything. Admittedly with no particular evidence to point to; keen to see whether my basic skepticism can easily be dismissed.

Scott Sumner points out that customers very much prefer ridesharing services that price gouge and have flexible pricing to taxis that have fixed prices, and very much appreciate being able to get a car on demand at all times. He makes the case that liking price gouging and liking the availability of rides during high demand are two sides o

... (read more)

I actually appreciate the overall take (although I'm not sure how many wouldn't have found most of it simply common sense anyway), but: a bit more caution with the stats would have been great.

  • Just-about-significant ≠ 'insignificant and basta'. While you say the paper shows that up to and including BMI 27 there's no 'effect' (and concluding causality is anyway problematic here, see below), all the data provided in the graph you show and in the table of the paper suggest BMI 27 has a significant or nearly significant (at 95%..) association with death even in this study. Y
... (read more)
1Crissman
Thanks for the comments. You're right that "will not extend your life" is too strong. I revised it to "is unlikely to significantly extend your life." Given the impact of other factors on longevity (strength training: 25%, aerobic exercise: 37%, walking 12k steps: 65%, 20g nuts daily: 15%), I do feel the reduction in all-cause mortality from weight loss shouldn't be the top priority.

Agree that cued FNs would often be a useful innovation I've not yet seen. Nevertheless, this statement

So, if you wonder whether you'd care for the content of a note, you have to look at the note, switching to the bottom of the page and breaking your focus. Thus the notion that footnotes are optional is an illusion.

ends with a false conclusion: most footnotes in texts I have read were optional, and I'm convinced I'm happy to not have read most of them. FNs, already as they are, are thus indeed highly "optional" and potentially very helpful - in many, maybe most, cases, for many, maybe most, readers.

2Steven Byrnes
Wikipedia articles sometimes distinguish notes and references within the label ([Note 5] versus [5]), e.g. here.

That could help explain the wording. Though, given the way the tax topic is addressed here, I have the impression - or maybe hope - that the discussion is intended to be more practical in the end.

A detail: I find the "much harder" in the following unnecessarily strong, or maybe also simply the 'moral claim' yes/no too binary (all emphases added):

If the rich generally do not have a moral claim to their riches, then the only justification needed to redistribute is a good affirmative reason to do so: perhaps that the total welfare of society would improve [..]

If one believes that they generally do have moral claim, then redistributive taxation becomes much harder to justify: we need to argue either that there is a sufficiently strong affirmative rea

... (read more)
3gb
I think the OP uses the word “justify” in the classical sense, which has to do with the idea of something being “just” (in a mostly natural-rights-kind-of-way) rather than merely socially desirable. The distinction has definitely been blurred over time, but in order to get a sense of what is meant by it, consider how most people would find it “very hard to justify” sending someone to prison before they actually commit (or attempt to commit) a crime, even if we could predict with arbitrarily high certainty that they will do so in the near future. Some people still feel this way about (at least some varieties of) taxation.

The core claim in my post is that the 'instantaneous' mind (with its preferences etc., see post) is - if we look closely and don't forget to keep a healthy dose of skepticism about our intuitions about our own mind/self - sufficient to make sense of what we actually observe. And given that this instantaneous mind with its memories and preferences is stuff we can most directly observe without much surprise in it, I struggle to find any competing theories as simple or 'simpler' and therefore more compelling (Occam's razor), as I meant to explain in the post.

As I... (read more)

3TAG
Huh? If you mean my future observations, then you are assuming a future self, and therefore a temporally extended self. If you mean my present observations, then they include memories of past observations. But a computation is a series of steps over time, so it is temporally extended.

I'm sorry, but I find you're nitpicking on words out of context, rather than engaging with what I mean. Maybe my EN is imperfect, but I think not that unreadable:

A)

The word "just" in the sense used here is always a danger sign. "X is just Y" means "X is Y and is not a certain other thing Z", but without stating the Z.

... 'just' might sometimes be used in such an abbreviated way, but here, the second part of my very sentence itself readily says what I mean with the 'just' (see "w/o meaning you're ...").

B)

You quoting me: "It is equally all too natural for me to

... (read more)
2Richard_Kennaway
I didn't mean to be nitpicking, and I believe your words have well expressed your thoughts. But I found it striking that you treat preference as a brick wall that cannot be further questioned (or if you do, all you find behind it is "evolution"), while professing the virtue of an examined self. In our present-day world I am as sure as I need to be that (barring having a stroke in the night) I am going to wake up tomorrow as me, little changed from today. I would find speculations about teleporters much more interesting if such machines actually existed. My preferences are not limited to my likely remaining lifespan, and the fact that I will not be around to have them then does not mean that I cannot have them and act on them now.

Thanks! In particular also for your more-kind-than-warranted hint at your original w/o accusing me of theft!! Especially as I now realize (or maybe realize again) that your sleep-clone-swap example, which indeed I love as a perfectly concise illustration, had also come along with at least an "I guess"-caveated "it is subjective", which in some sense already includes a core part of the conclusion/claim here.

I should have also picked up your 'stream-of-consciousness continuity' vs. 'substrate/matter continuity' terminology. Finally, the Ship of These... (read more)

2Fractalideation
Aaw, no problem at all Florian, I genuinely simply enjoyed you mentioning that sleep-clone-swap thought experiment and truly wasn't bothered at all by anything about it. Thank you so much for your very interesting and kind words and your citation and link in your article, wow, I am blushing now! And thank you so much for that great post of yours and for taking the time to thoroughly answer so many comments (including mine!), that is so kind of you and makes for such an interesting thread about this topic of entity/person/mind/consciousness/self continuity/discontinuity, which is quite fascinating! And in my humble opinion indeed it has a lot to do with questions of definitions/preferences, but in any case it is always interesting to read/hear eloquent words about this topic, thank you so much again for that!

About creating a link-to-comment, I think one way to do it is to click on the time indicator next to the author name at the top of the comment, then copy that link/URL.

Btw, regarding:

it would not seem to have made any difference and was just a philosophical recreation

Mind, in this discussion about cloning thought experiments I'd find it natural that there are not many currently tangible consequences, even if we did find a satisfying answer to some of the puzzling questions around that topic.

That said, I guess I'm not the only one here with a keen intrinsic interest in understanding the nature of self even absent tangible & direct implications, or if these implications may remain rather subtle at this very moment.

2Richard_Kennaway
The answer that satisfies me is that I'll wonder about cloning machines and teleporters when someone actually makes one. 😌

I obviously still care for tomorrow, as is perfectly in line with the theory.

I take you to imply that, under the hypothesis emphasized here - that the self is not a unified long-term self the way we tend to imagine - one would have to logically conclude something like: "why care then, even about 'my' own future?!". This is absolutely not implied:

The questions around which we can get "resolving peace" (see context above!) refers to things like: If someone came along proposing to clone/transmit/... you, what to do? We may of course find peace about that question (whi... (read more)

1Richard_Kennaway
The word "just" in the sense used here is always a danger sign. "X is just Y" means "X is Y and is not a certain other thing Z", but without stating the Z. What is the Z here? What is the thing beyond brute, unanalysed preference, that you are rejecting here? You have to know what it is to be able to reject it with the words "just" and later "magical", and further on "super-natural". Why is it your preference? In another comment you express a keen interest in understanding the nature of self, yet there is an aversion here to understanding the sources of your preferences. Too natural? Excessive focus and care? What we traditionally call? This all sounds to me like you are trying not to know something.

The original mistake is that feeling of a "carrier for identity across time" - for which upon closer inspection we find no evidence, and which we thus have to let go of. Once you realize that you can explain all we observe and all you feel with merely, at any given time, your current mind, including its memories, and aspirations for the future, but without any further "carrier for identity", i.e. without any super-material valuable extra soul, there is resolving peace about this question.

2Richard_Kennaway
With that outlook, do you still plan for tomorrow? From big things like a career, to small things like getting the groceries in. If you do these things just as assiduously after achieving this "resolving peace" as before, it would not seem to have made any difference and was just a philosophical recreation.

The upload +- by definition inherits your secret plan and will thus do your jumps.

Good decisions need to be based on correct beliefs as well as values.

Yes, but here the right belief is the realization that what connects you to what we traditionally called your future "self" is nothing supernatural, i.e. no super-material unified continuous self of extra value: we don't have any hint of such stuff; all too well we can explain your feelings about such things as fancy brain instincts, akin to seeing the objects in a 24FPS movie as 'moving' (not to say 'alive'); and all too well we know we could theoretically make you feel you've experienced your p... (read more)

0TAG
As before, merely rejecting the supernatural doesn't give you a single correct theory, mainly because it doesn't give you a single theory. There are many more than two non-soul theories of personal identity (and the one Bensinger was assuming isn't the one you are assuming). That's a flurry of claims. One of the alternatives to the momentary theory of personal identity is the theory that a person is a world-line, a 4D structure -- and that's a materialistic theory. Perhaps we have no evidence of something with all those properties, but we don't need something with all those properties to supply one alternative. Bensinger's computationalism is also non-magical (etc). Again, the theory of momentary identity isn't right just because soul theory is wrong. No, since I have never been destructively transported, I am also connected by material continuity. You can hardly call that supernatural! Great. So it isn't all about my values. It's possible for me to align my subjective sense of identity with objective data.

Oh, it's much worse. It is epistemic relativism. You are saying that there is no one true answer to the question and we are free to trust whatever intuitions we have. And you do not provide any particular reason for this state of affairs.

Nice challenge! There's no "epistemic relativism" here, even if I see where you're coming from.

First recall the broader altruism analogy: Would you say it's epistemic relativism if I tell you that you can simply look inside yourself and see freely how much you care about, and how closely connected you feel to, people in a faraway co... (read more)

As I write, call it a play on words; a question of naming terms, if you will. But then - and this is just a proposition plus a hypothesis - try to provide a reasonable way to objectively define what one 'ought' to care about in cloning scenarios; and contemplate all sorts of traditionally puzzling thought experiments about neuron replacements and what have you, and you'll inevitably end up with hand-waving, stating arbitrary rules that may seem to work (for many, anyhow) in one thought experiment, just to be blatantly broken by the next experiment... Do th... (read more)

2Richard_Kennaway
What I give up on is the outré thought experiments, not my own observation of myself that I am a unified, continuous being. A changeable being, and one made of parts working together, but not a pile of dust. A long time ago I regularly worked at a computer terminal where if you hit backspace 6 times in a row, the computer would crash. So you tried to avoid doing that. Clever arguments that crash your brain, likewise.

Very interesting question to me, coming from the perspective I outline in the post - sorry, a bit of a lengthy answer again:

According to the basic take from the post, we're actually +- in your universe, except that the self is even more ephemeral than you posit. And as I argue, it's relative, i.e. up to you, which future self you end up caring about in any nontrivial experiment.

Trying to re-frame your experiment from that background as best as I can, I imagine a person having an inclination to think of 'herself' (in sloppy speak; more precisely: she cares about..... (read more)

Yep.

And the crux is: the exceptional one who refuses, saying "this won't be me; I dread the future me* being killed and replaced by that one", is not objectively wrong. It might quickly become highly impractical for 'him'** not to follow the trend, but if his 'self'-empathy is focused only on his own direct physical successors, it is in some sense actually killing him if we put him in the machine. We kill him, and we create a person that's not him in the relevant sense, as he's currently not accepting the successor; if his empathic weight is 100% on his own d... (read more)

All agreeable. Note: this is perfectly compatible with the relativity theory I propose, i.e. with the 'should' being entirely up to your intuition only. And actually, the relativity theory, I'd argue, is the only way to settle the debates you invoke, or, say, to give you peace of mind when facing these risky uploading situations.

Say, you can overnight destructively upload, with 100% reliability your digital clone will be in a nicely replicated digital world for 80 years (let's for simplicity assume for now the uploadee can be expected to be a consciousness co... (read more)

The point is, "you" are exactly the following and nothing else: You're (i) your mind right now, (ii) including its memory, and (iii) its forward-looking care, hopes, dreams for, in particular, its 'natural' successor. Now, in usual situations, the 'natural successor' is obvious, and you cannot even think of anything else: it's the future minds that inhabit your body, your brain, that's why you tend to call the whole series a unified 'you' in common speak.

Now, with cloning, if you absolutely care for a particular clone, then, for every purpose, you can exte... (read more)

3Richard_Kennaway
My reply to clone of saturn applies here also. You have mentally sliced the thing up in this way, but reality does not contain any such divisions. My left hand yesterday and my left hand today are just as connected as my left hand and my right hand.

I wonder whether, if sheer land mass really was the single dominant bottleneck for whatever your aims, you could potentially find a particular gov't or population from whom you'd buy the km2 you desire - say, for a few $bn - as new sovereign land for you, as a source of potentially (i) even cheaper and (ii) more robust land to reign over?

2Roko
it has happened but it is very rare.

Difficult to overstate the role of signaling as a force in human thinking, indeed. A few random examples:

  1. Expensive clothes, rings, cars, houses: Signalling 'I've got a lot of spare resources, it's great to know me/don't mess with me/I won't rob you/I'm interesting/...'
  2. Clothes of a particular type -> signal your political/religious/... views/lifestyle
  3. Talking about interesting news/persons -> signals you can be a valid connection to have as you have links
  4. In basic material economics/markets: All sorts of ways to signal your product is good (often economists
... (read more)

I read this as saying we’re somehow not ‘true’ to ourselves as we’re doing stuff nature didn’t mean us to do when it originally implanted our emotions.

Indeed, we might look ridiculous from the outside, but who’s there to judge - imho, nature is no authority.

  1. Increasing the odometer may be wrong from the owner's perspective - but why should the car care about the owner? Assume the car, or the odometer itself, really desires to show a high mile count, just for the sake of it. Isn't the car making progress if it could magically put itself on a block?
  2. In the huma
... (read more)
2NicksName
Your points 1, 2 and 4 rely on the assumption of hedonism, and points 2, 3 and 4 rely on the assumption of altruism; the author rejects both:  https://thewaywardaxolotl.blogspot.com/2024/06/hedonic-utilitarianism.html Right, having as many children as possible is exactly what it means. You can reject your natural "purpose" if you want, but it's futile; in the grand scheme of things, you will just be replaced by those who more effectively act out their natural "purpose".

One consequence that seems to flow from this, and which I personally find morally counter-intuitive, and don't actually believe, but cannot logically dismiss, is that if you're going to lie you have a moral obligation to not get found out. This way, the damage of your lie is at least limited to its direct effects.

With widespread information sharing, the 'can't fool all the people all the time' logic extends to this attempt to lie without consequences: we'll learn that people 'hide well but still lie a lot', so we'll be even more suspicious in any situation, undoing the alleged externality-reducing effect of the 'not get found out' idea (in any realistic world with imperfect hiding, anyway).

Thanks for the useful overview! Tiny point:

It is also true that Israel has often been more aggressive and warmongering than it needs to be, but alas the same could be said for most countries. Let’s take Israel’s most pointless and least justified war, the Lebanon war. Has the USA ever invaded a foreign country because it provided a safe haven for terrorist attacks against them? [...] Yes - Afghanistan. Has it ever invaded a country for what turns out to be spurious reasons while lying to its populace about the necessity? Yes [... and so on]

Comparing Israel... (read more)

4Yair Halberstadt
Thanks for the point. I think I'm not really talking to that sort of person? My intended audience is the average American who views the USA as mostly a force for good, even if its foreign policy can be misguided at times.

Might be worth adding your blog post's subtitle or so, to hint at what Georgism is about (assuming I'm not an exception in not having known "Georgism" is the name for the idea of shifting taxation from labor etc. to natural resources).

Worth adding imho: Feels like a most natural way to do taxation in a world with jobs automated away.

-1[comment deleted]

Three related effects/terms:

1. The Malthusian Trap, as maybe the most famous example.

2. In energy/environment we tend to refer to such effects as

  • "rebound" when behavioral adjustment compensates part of the originally enable saving (energy consumption doesn't go down so much as better window insulation means people afford to keep the house warmer) and
  • "backfiring" when behavioral adjustment means we overcompensate (let's assume flights become very efficient, and everyone who today wouldn't have been flying because of cost or environmental conscience, starts to fl
... (read more)

No reason to believe safety benefits are typically offset 1:1. Standard preference structures would suggest the original effect may often only be partly offset, or in other cases even backfire by being more than offset. And net utility for the users of a safety-improved tool might increase in the end in either case.

Started trying it now; seems great so far. Update after 3 days: Super fast & easy. Recommend!

3MondSemmel
Glad to be of help!