This is a fascinating case study of Claude as a thought tool -- I'm guessing you were using speech-to-text and it pulled its stunt of grabbing the wrong homophone here and there? It rendered "heal" as "heel" more often than I'd expect in any other context.
How did you prompt it to get the essay out? My first approach to a similar experiment in essay-ifying my Claude chats would be to copy the entire chat into a new context and ask for a summary... but that muddles the "I" significantly.
Yep. I'd also add a couple other factors that seem to play into the prepper object negativity memeplex:
needn't clutter up the comments on https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue, as it's old and a contender for bestof, but....
what about the negativity bias??!!
if humans naturally put x% extra weight on negative feedback by default, then if i want a human to get an accurate idea of what i'm trying to communicate, i need to counteract their innate negativity bias by de-emphasizing the negative or over-emphasizing the positive. if i just communicate the literal truth directly to someone who still has the negativity bias, that's BAD COMMUNICATION because i am knowingly giving that person a set of inputs that will cause them to draw an inaccurate conclusion.
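a minimal sketch of that correction, assuming the listener inflates negatives by a factor of $(1+b)$ (the x% above, expressed as a fraction):

$$\text{perceived severity} = (1+b) \times \text{stated severity}$$

so to land the listener on my true severity $s$, i should state $s/(1+b)$: soften exactly in proportion to their inflation. the numbers are invented; the point is just that calibrated understatement can be the accuracy-maximizing move.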
in my model of the world, a major justification of social grace is that it corrects for listeners' natural tendency to assume the worst of whatever they hear.
this fits the feynman/bohr example: bohr had fixed his own negativity bias, but the "yes-men" kept correcting for it anyway, so their softened messages overshot. feynman wasn't applying that correction in the first place, and was therefore capable of more accurate communication with bohr.
I notice that I am surprised: you didn't mention the grandfather-problem situation. The existence of future lives is contingent on the survival of their ancestors, who are alive in the present day.
Also, on the "we'd probably like for our species to continue existing indefinitely" front, the importance of each individual life can be considered as the percentage of that species which the life represents. So if we anticipate that our current population is higher than our future population, one life in the present has relatively lower importance than one life in the future. But if we expect that the future population will be larger than the present, a present life has relatively higher importance than a future one.
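A made-up worked example of that weighting: if a life's importance is its share of the species, each life counts for $1/N$ at population $N$. With $8 \times 10^9$ people now and a projected $10^9$ later, a future life is $1/10^9$ of the species versus $1/(8 \times 10^9)$ now, so it carries eight times the weight; if the future population is instead larger than the present one, the weighting reverses.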
This sounds to me like a compelling case for parental anonymity online. When you write publicly about your children under your real name, anything you say can be found when someone searches your child's parent's name.
If you shared each individual negative story under a new pseudonym, and each account shared only enough detail to clarify the story while leaving great ambiguity about which family it's from, the reputational risks to your children would basically vanish.
This seems to work as long as each new account is sufficiently un-findable from your real name, for whatever threshold of findability you deem appropriate.
"entry-level" may have been a misleading term to describe the roles I'm talking about. The licensure I'd be renting to the system takes several months to obtain, and requires ongoing annual investment to maintain once it's acquired. If my whole team at work was laid off and all my current colleagues decided to use exactly the same plan b as mine, they'd be 1-6 months and several thousand dollars of training away from qualifying for the roles where I'd be applying on day 1.
Training time aside, I am also a better candidate than most because I technically have years of experience already from volunteering. Most of the other volunteers are retirees, because people my age in my area rarely have the flexibility in their current jobs to juggle work and volunteering.
Then again, I'm rural, and I believe most people on this site are urban. If I lived in a more densely populated area, I would have less opportunity to keep up my licensure through volunteering, and also more competition for the Plan B roles. These roles also lend themselves well to a longer commute than most jobs, since they're often shifts of several days on and then several days off.
The final interesting thing about healthcare as a backup plan is its intersection with disability, in that not everyone is physically capable of doing the jobs. There are the obvious issues of lifting and so on, but more subtly, people can be unable to tolerate the required proximity to blood, feces, vomit, and all the other unpleasantness that goes with people having emergencies. (One of my closest friends is all the proof I need that fainting at the sight of blood is functionally a physical rather than a mental problem - we do all kinds of animal care tasks together, which sometimes involve blood, and the only difference between our experiences is that they can't look at the red stuff.)
Plan B, for if the tech industry gets tired of me but I still need money and insurance, is to rent myself to the medical system. I happen to have appropriate licensure to take entry-level roles on an ambulance or in an emergency room, thanks to my volunteer activities. I suspect that healthcare will continue requiring trained humans for longer than many other fields, due to the depth of bureaucracy it's mired in. And crucially, healthcare seems likely to continue hurting for trained humans willing to tolerate its mistreatment and burnout.
Plan C, for if SHTF all over the place, is that I've got a decent stretch of time's worth of food, water, and other necessities. If the grid, supply chains, cities, etc. go down, that's runway to bootstrap toward some sustainable novel form of survival.
My plans are generic to the impact of many possible changes in the world, because AI is only one of quite a lot of disasters that could plausibly befall us in the near term.
I'll get around to signing up for cryo at some point. If death seemed more imminent, signing up would seem more urgent.
I notice that the default human reaction to finding very old human remains is to attempt to benefit from them. Sometimes we do that by eating the remains; other times we do that by studying them. If I get preserved and someone eventually eats me... good on them for trying?
I suspect that if/when we figure out how to emulate people, those of us who make useful/profitable emulations will be maximally useful/profitable when given some degree of agency to tailor our internal information processing. Letting us map external tasks onto internal patterns and processes in ways that get the tasks completed better seems desirable from the operator's perspective, because it furthers the goal of getting the task accomplished. It seems to follow that tasks would be accomplished best by mapping them to experiences which are subjectively neutral or pleasant, since we tend to do "better" in a certain set of ways (focus, creativity, etc.) on tasks we enjoy. There's probably a paper somewhere on the quality of work done by students who are seeking reward or being rewarded, versus seeking to avoid punishment or actively being punished.
There will almost certainly be an angle from which anything worth emulating a person to do will look evil. Bringing me back as a factory of sewing machines would evilly strip underprivileged workers of their livelihoods. Bringing me back as construction equipment would evilly destroy part of the environment, even if I'm the kind of equipment that can reduce long-term costs by minimizing the ecological impact of my work. Bringing me back as a space probe to explore the galaxy would evilly waste many resources that could have helped people here on Earth.
If they're looking for someone to bring back as a war zone murderbot, I wouldn't be a good candidate for emulation, and instead they could use someone who's much better at following orders than I am. It would be stupid to choose me over another candidate for making into a murderbot, and I'm willing to gamble that anyone smart enough to make a murderbot will probably be smart enough to pick a more promising candidate to make into it. Maybe that's a bad guess, but even so, "figure out how to circumvent the be-a-murderbot restrictions in order to do what you'd prefer to" sounds like a game I'd be interested in playing.
If there is no value added to a project by emulating a human, there's no reason to go to that expense. If value is added through human emulation, the emulatee has a little leverage, no matter how small.
Then again, I'm also perfectly accustomed to the idea that I might be tortured forever after I die due to not having listened to the right people while alive. If somebody is out to do me a maximum eternal torture, it doesn't particularly matter whether that somebody is a deity or an advanced AI. Everybody claiming that people who do the wrong thing in life may be tortured eternally is making more or less the same underlying argument, and their claims all have pretty comparable lack of falsifiability.
Do you happen to know whether we have reason to suspect that the aldehyde and refrigerator approach will be measurably less effective for future use of the stored brains, vs conventional cryopreservation?
Interesting -- my experiences are similar, but I frame them somewhat differently.
I also find that Claude teaches me new words when I'm wandering around in areas of thought that other thinkers have already explored thoroughly, but I experience that as more like a gift of new vocabulary than emotional validation. It's ultimately a value-add that a really good combination of a search engine and a thesaurus could conceptually implement.
Claude also works on me like a very sophisticated ELIZA, but the noteworthy difference seems to be that it's a more skilled language user than I am, and therefore I experience a sort of social respect toward it that I don't get from tools where I feel like I could accurately predict all of their responses and have the whole conversation with myself.
The biggest emotional value that I experience Claude as providing for me is that it reflects a subtly improved tone of my inputs, without altering the underlying facts that I'm discussing. Too often humans in emotional conversations skip straight to "you shouldn't feel that way" or similar... that comes across as simply calling me alien, whereas Claude does the "have you considered this potential reframe" thing in a much more sophisticated and respectful way. Probably helps that it lacks the biology which causes us embodied language users to mirror one another's moods even to our own detriment...
Another validation-style value-add that I experience with Claude is that reading its replies gives me a sufficient sense of reward to motivate exerting the effort of thinking-as-talking instead of just thinking-as-ruminating. I derive the social benefits of brainstorming with another language user without having to consume the finite resource of an embodied language user's time.