All of SurvivalBias's Comments + Replies

No, they are not. Animals can feel e.g. happiness as well.

Yeah, but the problem here is that we perceive happiness in animals only insofar as it looks like our own happiness. Did you notice that the closer an animal is to a human, the more likely we are to agree it can feel emotions? An ape can definitely display something like human happiness, so we're pretty sure it can experience it. A dog can display something mostly like human happiness, so most likely it can feel it too. A lizard - meh, maybe, but probably not. An insect - most people would say no. Ma... (read more)

>Utility itself is an abstraction over the level of satisfaction of goals/preferences about the state of the universe for an entity.

You can say that a robot toy has a goal of following a light source, or that a thermostat has a goal of keeping the room temperature at a certain setting. But I have yet to hear anyone count those things toward total utility calculations.

Of course, a counterargument would be "but those are not actual goals, those are the goals of the humans that set it" - but in that case you've just hidden all the references to humans inside the word "goal" and are back to square one.

So utility theory is a useful tool, but as far as I understand it's not directly used as a source of moral guidance (although I assume once you have some other source, you can use utility theory to maximize it). Whereas utilitarianism as a school of metaethics is concerned with exactly that, and you can hear people in EA talking about "maximizing utility" as an end in and of itself all the time. It was in this latter sense that I was asking.

2Viktor Rehnberg
Perhaps most people don't have this in the back of their mind when they think of utility, but for me this is what I'm thinking about. The aggregation is still confusing to me, but as a simple example: if I want to maximise total utility and am in a situation that only impacts a single entity, then increasing utility is the same to me as getting this entity into states that are more preferable for them.

To start off, I don't see much point in formally betting $20 on an event conditioned on something I assign <<50% probability of happening within the next 30 years (powerful AI is launched, fails catastrophically, we're both still alive to settle the bet, and the failure is unambiguously attributed to the AI). I mean, sure, I can accept the bet, but largely because I don't believe it matters one way or another, so I don't think it counts from the epistemic-virtue standpoint.

But I can state what I'd disagree with in your terms if... (read more)

What Steven Byrnes said, but also my reading is that 1) in the current paradigm it's near-damn-impossible to build such an AI without creating an unaligned AI in the process (how else do you gradient-descend your way into a book on aligned AIs?), and 2) if you do make an unaligned AI powerful enough to write such a textbook, it'll probably proceed to convert the entire mass of the universe into textbooks, or do something similarly incompatible with human life.

It might, given some luck and that all the pro-safety actors play their cards right. That's assuming by "all labs" you mean "all labs developing AIs at or near the then-current limit of computational power", or something along those lines, and by "research" you mean "practical research", i.e. training and running models. The model I have in mind is not that everyone involved will intellectually agree that such research should be stopped, but that a large enough share of the public and governments will get scared and exert pressure on the labs. Consider how most of the world... (read more)

1Yonatan Cale
I'd be pretty happy to bet on this and then keep discussing it, wdyt? :) Here are my suggested terms:
1. All major AI research labs that we know about (deep mind, openai, facebook research, china, perhaps a few more*)
2. Stop "research that would advance AGI" for 1 month, defined not as "practical research" but as "research that will be useful for AGI coming sooner". So for example if they stopped only half of their "useful to AGI" research, but they did it for 3 months, you win. If they stopped training models but keep doing the stuff that is the 90% bottleneck (which some might call "theoretical"), I win
3. *You judge all these parameters yourself however you feel like
   1. I'm just assuming you agree that the labs mentioned above are currently going towards AGI, at least for the purposes of this bet. If you believe something like "openai (and the other labs) didn't change anything about their research but hey, they weren't doing any relevant research in the first place", then say so now
   2. I might try to convince you to change your mind, or ask others to comment here, but you have the final say
   3. Regarding "the catastrophe was unambiguously attributed to the AI" - I ask that you judge if it was unambiguously because AI, and that you don't rely on public discourse, since the public can't seem to unambiguously agree on anything (like even vaccines being useful).

I suggest we bet $20 or so mainly "for fun". What do you think?

The important difference is that nuclear weapons are destructive because they worked exactly as intended, while the AI in this scenario is destructive because it failed horrendously. Plus, the concept of rogue AI has been firmly ingrained into public consciousness by now, which afaik was not the case with extremely destructive weapons in the 1940s [1]. So hopefully this would produce more public outrage (and fear among the elites themselves) => stricter external and internal limitations on all agents developing AIs. But in the end I agree, it'll only buy t... (read more)

How plausible is it that a misaligned, narrowly-superhuman AI is launched, fails catastrophically with casualties in the 10^4 - 10^9 range, and the [remainder of] humanity is "scared straight" and from that moment onward treats AI technology the way we treat nuclear technology now - i.e. effectively strangles it into stagnation with regulations - or even more conservatively? From my naive perspective it is somewhat plausible politically, based on the only example of ~world-destroying technology that we have today. And this list of arguments doesn't seem... (read more)

1Yonatan Cale
I'm interested in getting predictions for whether such an event would get all (known) labs to stop research for even one month (not counting things like "the internet is down so we literally can't continue"). I expect it won't. You?
2Mitchell_Porter
I'm sure there are circumstances under which a "rogue AI" does something very scary, and leads to a very serious attempt to regulate AI worldwide, e.g. with coordination at the level of UN Security Council. The obvious analogy once again concerns nuclear weapons; proliferation in the 1960s led to the creation of the NNPT, the Nuclear Nonproliferation Treaty. Signatories agree that only the UNSC permanent members are allowed to have nuclear weapons, and in return the permanent members agree to help other signatories develop nonmilitary uses of nuclear power. The treaty definitely helped to curb proliferation, but it's far from perfect. The official nuclear weapons states are surely willing to bend the rules and assist allies to obtain weapons capability, if it is strategically desirable and can be done deniably; and not every country signed the treaty and now some of those states (e.g. India, Pakistan) are nuclear weapons states.  Part of the NNPT regime is the IAEA, the International Atomic Energy Agency. These are the people who, for example, carry out inspections in Iran. Again, the system has all kinds of troubles, it's surrounded by spy plots and counterplots, many nations would like to see Security Council reformed so the five victorious allies from World War 2 (US, UK, France, Russia, China) don't have all the power, but still, something like this might buy a little time.  If we follow the blueprint that was adopted to fight nuclear proliferation, the five permanent members would be in charge, and they would insist that potentially dangerous AI activities in every country take place under some form of severe surveillance by an International Artificial Intelligence Agency, while promising to also share the benefits of safe AI with all nations. Despite all the foreseeable problems, something like this could buy time, but all the big powers would undoubtedly keep pursuing AI, in secret government programs or in open collaborations with civilian industry and aca

Yes and no. 1-6 are obviously necessary but not sufficient - there's much more to diet and exercise than "not too much" and "some" respectively. 7 and 8 are kinda minor and of dubious utility except in some narrow circumstances, so whatever. And 9 and 10 are hotly debated, and that's exactly what you'd need rationality for, as well as for figuring out the right pattern of diet and exercise. And I mean right for each individual person, not in general, and the same with supplements - a 60-year-old should have a much higher tolerance for the potential risks of a longevity treatment than a 25-year-old, since the latter has less to gain and more to lose.

I would be very surprised if inflammation or loss of proteostasis did not have any effect on fascia, if only because they have a negative effect on ~everything. But more importantly, I don't think any significant number of people are dying from fascia stiffness? That's one of the main ideas behind the hallmarks of aging: you don't have to solve the entire problem in its every minuscule aspect at once. If you could just forestall all these hallmarks, or even just some of them, you could probably increase lifespan and healthspan significantly, thus buying more time to fix other problems (or develop completely new approaches like mind uploading or regenerative medicine or whatever else).

You're fighting a strawman (nobody's going to deny death to anyone, and except for the seriously ill, most people who truly want to die now have an option to do so; I'm actually pro-euthanasia myself). And, once again, you want to inflict on literally everyone a fate you say you don't want for yourself. Also, I don't accept the premise that there's any innate power balance in the universe that we ought to uphold even at the cost of our lives; we do not inhabit a Marvel movie. And you're assuming knowledge you can't possibly have, about exactly how human consciousness functions and what alterations to it we'll be able to make in the next centuries or millennia.

0superads91
"you're assuming the knowledge which you can't possibly have" Naturally, I can't predict the future (unfortunately). But neither can you: "nobody's going to deny death to anyone" You're making just as much assumptions as myself. The only difference is that you want to spin the heaven/hell wheel of fortune (this is a metaphor), while I don't - at least not until we've had a hell of a lot more time to study it (aka no immortality in a foreseeable future).

That's, like, a 99.95% probability - a one-in-two-thousand chance of the opposite. You'd have two orders of magnitude higher chances of survival if you were to literally shoot yourself with a literal gun. I'm not sure you can forecast anything at all (about humans or technologies) with this degree of certainty decades into the future - definitely not that every single one of dozens of attempts in a technology you're not an expert in fails, and every single one of hundreds of attempts in another technology you're not an expert in fails (building aligned AGI).
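For concreteness, here is the arithmetic behind those figures (a minimal sketch using only the estimates from the parent comment quoted below: a <0.05% chance of living another 50 years, and three orders of magnitude less for living to 300):

```latex
% 0.05% chance of living another 50 years:
P(\text{live another 50 years}) \approx 0.05\% = 0.0005 = \frac{1}{2000}
% "three orders of magnitude less likely" for living to 300:
P(\text{live to 300}) \approx 0.05\% \times 10^{-3} = \frac{1}{2\,000\,000}
```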

I don't believe there are an

... (read more)
2Dagon
I think we've found at least one important crux, I'm going to bow out now.  I realize I misspoke earlier - I don't much care if I become convinced, but I very much hope you succeed in keeping me and you and others alive much longer.  

Equating high-risk/high-reward strategies with Pascal's Wager is an all-too-common failure mode, and putting numbers on your estimates helps avoid it. How much is VERY TINY, how much do you think the best available options really cost, and how much would you be willing to pay (assuming you have that kind of money) for a 50% chance of living to 300 years?

To be clear, I'm not so much trying to convince you personally, as to get a generally better sense of the inferential distances involved.

1Dagon
I'd actually like to be convinced, but I suspect our priors differ by enough that it's unlikely. I currently assign less than a 0.05% chance that I'll live another 50 years (which would put me over 100), and three orders of magnitude less likely that I'll live to 300. These are small enough that I don't have as much precision in my beliefs as that implies, of course. Conditional on significant lifestyle changes, I can probably raise those chances by 10x, from vanishingly unlikely to ... vanishingly unlikely. Conditional on more money than I'm likely to have (which is already in the top few percent of humanity), maybe another 3x. I don't believe there are any tradeoffs I can make which would give me a 50% chance to live to 300 years.

>but that's not anywhere near solving it in principle

Of course they are not, that's not the point. The point is that they can add more time for us to discover more cures - to the few decades most rationalists already have, considering the age distribution. During that time new approaches will likely be discovered, hopefully adding even more time, until we get to mind uploading, or nanobots constantly repairing the body, or some other complete solution. The concept is called longevity escape velocity.

>but I think it's more likely for bio-brains to continue dy

... (read more)
3Dagon
This is an obvious and common enough analogy that you don't need to frame it as a thought experiment.  I understand that I have an incurable disease.  It's longer than 3 years, I hope, but not by much more than an order of magnitude, certainly nowhere near 2.  I'm not even doing everything I could in terms of lifestyle, exercise, and nutrition to extend it, let alone "experimental" cures.   It's not infectious, fortunately - everyone already has it.   Friends I've lost to disease, accident, or suicide ALSO didn't universally commit to "experimental cures" - in all cases I know of, the cost (non-monetary cost of side-effects more than pure money, but some of that too) of the long-shots were higher than their perceived success rate.   As Pascal's Wager options go, giving up significant resources or happiness over the next decade for a VERY TINY chance of living longer, seems to be among the less compelling formulations.

Oh no, what if I and everyone I care about would only get to live 5 billion years instead of 80. And all that only to find out it was a half-assed hypothetical.

1Martin Randall
I would prefer to have this conversation without the sarcasm. Maybe I encouraged it with my "half-assed hypothetical". If so, please consider this an attempt to reset the tone. I read this as a claim that it is impossible for the elimination of death, aging, or mortality to be dangerous because it can only decrease the danger of dying. I replied by pointing out that there are other dangers, such as the danger of astronomical waste. Another danger is suffering risk. The story in Surface Detail points in that direction. If I misread you then you were probably saying something I agree with. I read this as a statement that you aren't concerned about astronomical waste. That's a completely reasonable response, many philosophers agree with you.

Just a reminder: in this argument we are not the modern people who get to feel all moral and righteous about themselves, we are the Greeks. Do you really want to die for some hypothetical moral improvement of future generations? If so, go ahead and be my guest, but I myself would very much rather not.

-3superads91
Like the popular saying goes, you either die a hero, or live long enough to become a villain. We are flawed beings, and unfortunately (yes, unfortunately, I would like to live forever as well (I mean, at least my present self, I'm pretty sure after a couple centuries I'd have gone insane even with all the memory-editing and cell-rejuvenating tech you can imagine (maybe that would extend it to a few millennia))) death is a necessary balancer of power. So, no, I don't wanna die for future generations, but I better do someday. Personality needs coherence, that's why we're averse to change (some more, some less). That's why new beings are important to keep the power balance, if there is even any balance in this chaotic world. One way to accept death is simply thinking how bad things could get beyond this current unusual normalcy (which won't last long). Cancer patients want to die. Slaves want to die. Imagine denying death to those least fortunate. That would be way worse than mortality. (And yes, you could probably cure cancer or any pain in a world with immortality, but the problem is the slaves, those denied the treatment... i.e., the problem is the tyranny, which would be greatly amplified in a deathless world, and being naive to the point of not considering it.)

Hmm that's interesting, I need to find those people.

>There are plenty of people who have AGI timelines that suggest to them that either AGI will kill them before they reach their natural mortality or AGI will be powerful enough to prevent their natural mortality by that point.

True, but there are also plenty of people who think otherwise - other comments here being an example.

I'm not a biologist, but I'm reasonably sure that fascia getting tenser would be downstream of the hallmarks of aging, if that's what you're talking about. It's kinda like asking why "going to a boardgame party in San Francisco" isn't on th... (read more)

2ChristianKl
I don't think there's a good reason to make that assumption. There are various factors that lead to fascia getting tense. Substances like fibrin keep the fascia contracted and don't get automatically cleared.  SENS is certainly not the only organization and there are plenty of people who don't believe that aging is as easy as just curing the hallmarks. 

I personally believe exactly the right kind of advocacy may be extremely effective, but that's really a story for a separate post. Otherwise, yeah, AGI is probably higher impact for those who can and want to work on it. However, in my observation the majority of rationalists do not in fact work on AGI, and imo life extension and adjacent areas have a much wider range of opportunities and so could be a good fit for many of those people.

>The way I see it, when we're talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild.

 

Am I reading this incorrectly or are you saying that you don't care about your friends and loved ones dying?

There are at least two currently ongoing clinical trials with an explicit goal of slowing aging in humans (TAME and PEARL) - that's just the most salient example. At some point I'll definitely make a post with a detailed answer to the question of ... (read more)

3Dagon
I care about my friends and loved ones.  I even care about strangers.  I'm a fan of life extension research.   But I'm not dedicating much of my resources to it - in the big picture, one human's about as good as another, and in the small picture I don't expect to have much chance of success, and don't want to reduce my enjoyment of my remaining time on a crazy longshot. I have to say that neither of those trials look particularly promising on the "ending aging" front.  They may slightly delay some problems (and that's GREAT - living longer is, in fact, better), but that's not anywhere near solving it in principle.  Mind upload might be a solution eventually, but I think it's more likely for bio-brains to continue dying and the immortal are digital from birth.  

Smallpox was also a very old problem, and lots of smart people spent lots of time thinking about it, until they figured out a way to fix it. In theory, you could make an argument that no viable approaches exist today or in the foreseeable future and so harm reduction is the best strategy (from the purely selfish standpoint; working on the problem would still help the people of the future in this scenario). However, I don't think it would be a very strong argument in practice, and in any case you are not making it.

If you're, say, 60+, then yes, anti-agin... (read more)

1Martin Randall
Yes, there are a number of interventions available that could delay death by a few years. For example, my copy of "Ageless: The New Science of Getting Older Without Getting Old", which is almost a year old, ends with a short list:
1. Don't smoke
2. Don't eat too much
3. Get some exercise
4. Get seven to eight hours of sleep
5. Get vaccinated and wash your hands
6. Take care of your teeth
7. Wear sunscreen
8. Monitor your heart rate and blood pressure
9. Don't bother with supplements
10. Don't bother with longevity drugs yet
11. Be a woman.

Do these count? When you say "relatively few people [are] doing the available interventions on the personal level", are these the interventions you're talking about?

My impression is that it's more than most people do! [Although, full disclosure, I'm signed up with CI myself and following what I believe is the right pattern of diet and exercise. I'll probably start some of the highest benefit/risk-ratio compounds (read: rapamycin and/or NAD+ stuff) in a year or two when I'm past 30.]

But also, how do you feel about donating to the relevant orgs (e.g. SENS), working in a related or adjacent area, and advocating for this cause?

8Matthew Barnett
I think of myself as having two parts to my utility function (really just, what I care about). There's a selfish part, and a non-selfish part. As for the selfish component, I'm happy to pursue personal strategies to delay my aging and death. Indeed, I feel that my personal life extension strategies are extreme even by the standards of conventional life extension enthusiasts. I don't see a compelling selfish reason to donate to or work for life extension organizations. Even if I was a highly skilled biologist (and I'm not), the number of hours or days I could realistically hope to hasten the end of aging would be a low number. In that amount of time, I could have pursued better strategies aimed at helping myself alone.  While delaying death by one day gives a combined sum of millions of years of extra life across everyone, to me it's just one day. That's hardly worth switching careers over. On the other hand, the non-selfish part of my utility function prefers to do what's best for the world generally, and I don't find life extension research particularly competitive across this axis. In the past, I've contemplated volunteering to help life extension advocacy, but it was more of a personal emotional thing than what I thought would actually be effective. I have considered whether life extension could turn out to be extremely important for non-selfish reasons in this post. Ultimately, I do not find the arguments very compelling. Not only am I skeptical that life extension is coming any time soon, but I suspect that by the time it arrives, something even more important (such as AGI) will be here already.

Well, about 55 million people die per year, most of them from aging, so solving it for everyone today versus, say, 50-60 years later with AGI would save 2-3 billion potentially indefinitely long (or at least very, very long) lives. That definitely counts as "much impact for many people" in my book.
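Spelling out the arithmetic behind the 2-3 billion figure (a rough sketch, assuming the ~55 million deaths per year rate stays roughly constant over the gap):

```latex
% ~55 million deaths per year, over a 50-60 year gap until AGI:
5.5\times10^{7}\ \tfrac{\text{deaths}}{\text{year}} \times 50\ \text{years} \approx 2.8\times10^{9}
\qquad
5.5\times10^{7}\ \tfrac{\text{deaths}}{\text{year}} \times 60\ \text{years} \approx 3.3\times10^{9}
```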

 

But also, what's the probability that we will indeed get AGI in the next 50 or 70 years? I know it's a hotly debated topic, so I'm asking for your personal best estimate.

3Ben Pace
Sure, it's a lot compared to most activities, but it's not a lot compared to the total people who could live in the future lightcone. You have to be clear what you're comparing to when you say something is large. My estimate? Oh I dunno. The future is hard to predict, and crazy shit happens by default. But currently I'd be more surprised if it didn't happen than if it did. So more than 50%, for 50 years. Also more than 50% for 30 years. My guess is there's a lot of very scalable and valuable products to be made with ML, which will put all the smart people and smart money in the world into improving ML, which is a very powerful force. Shrug. I'd have to think more to try to pin it down more.

>Mortality is thought about by everyone, forever.

Technically probably yes, but the specific position of "it is something we can and should do something about right now" is unfortunately nearly as fringe as AI risk: a bunch of vocal advocates with a small following pushing for it, plus some experts in the broader field and some public figures maybe kinda tentatively flirting with it. So, to me these are two very comparable positions - very unconventional, but also very obvious if you reason from first principles and some basic background knowledge. ... (read more)

2Dustin
  I lean towards little attention and it is not justified, but I'm really just feeling around in the dark here...and thus my bit of frustration at just jumping right past the step at determining if this is actually the case. I can imagine plausible arguments for each of the options you give (and more) and I'm not entirely convinced by any of them. 

I'm well aware, but this comment section is the first time I've heard there's a non-trivial overlap! Are you saying many active rationalists are SENS supporters?

2[anonymous]
It is one of the most common charities donated to by effective altruists here. But what I'm also saying is that many of the people working at SENS have had some level of exposure to the less wrong / rationalist community.

So your argument is that people should die for their own good, despite what they themselves think about it? Probably not, since that would be almost a caricature of a villain, but I don't see where else you are going with this. And the goal of "not developing an excruciatingly painful chronic disease" is not exactly at odds with the goal of "combating aging".

>By the way I would jump on the opportunity of an increased life span to say 200-300 years, 80 seems really short, but not indefinite extension

Ok, that's honestly good enough for me. I say let's get there and then argue about whether we need more extension.

I'm no therapist, and not even good as a regular human being at talking about carrying burdens that make one want to kill themselves eventually; you should probably seek the advice of someone who can do a better job at it.

Cryonics is around 20 bucks a month if you get it through insurance, plus 120 to sign up.

With that out of the way, I think there is a substantial difference between "no LEV in 20 years" and "nothing can be done". For one thing, known interventions - diet, exercise, very likely some chemicals - can most likely increase your life expectancy by 10-30 years depending on how right you get it, your age, health and other factors. For another, even if working on the cause, donating to it or advocating for it won't help you personally, it can still help many people you kn... (read more)

Dangerous proposition in what sense? Someone may die? Everyone may die? I have, um, not very good news for you...

1Martin Randall
So many answers here. For example: maybe without death, reproduction rates fall off a cliff, society ages, culture shifts from "explore" to "exploit", we never leave Earth, we waste the vast majority of our potential as a species. Later, our sun dies, everyone survives the experience, we realize that we're in a philosophical hypothetical and the thought experiment ends in bathos.

>When one realizes how far life is from the rosy picture that is often painted, one has a much easier time accepting death, even while still fearing it or still wanting to live as long as possible.

Do you truly estimate your life as not worth, or barely worth, living? If yes, I'm deeply sorry about that and I hope you'll find a way to improve it. Let me assure you that there are many people, myself included, who truly, genuinely love life and enjoy it.

If it's just a comforting lie you believe in believing to make the thought of death more tolerable - well, I can understand that, death really is terrifying - but then consider maybe not using it as an argument.

0superads91
"Do you truly estimate your life as not worth or barely worth living? If yes, I'm deeply sorry about that and I hope you'll find a way to improve it. Let me assure you that there's many people, myself included, who truly genuinely love life and enjoy it." Nah, I've been lucky myself. But this isn't about myself or any individual, but life is general. I keep saying this: people today live in this rare modern oasis of comfort, which gives them these naive perspectives. Until they develop an excruciatingly painful chronic disease at least (and I mean, don't even need to mention the massive modern dark clouds over our heads that anyone in this forum should know about).
1dashdashdot
I see this argument more often but I don't think it's always fear of death that is the driving force of not wanting to live forever. Can you tell me if there's something wrong with the following metaphor: I immensely enjoy a mountain hiking trip but after a few weeks it needs to end because my body is aching and even the beauty of the mountains becomes mundane. Isn't life somehow the same way? Some burdens seem only carry-able because they're temporary and some beauty is only (or more) beautiful because it's fleeting. (By the way I would jump on the opportunity of an increased life span to say 200-300 years, 80 seems really short, but not indefinite extension)

>I'm pretty concerned, I'm trying to prevent the AI catastrophe happening that will likely kill me.

That was one of my top guesses, and I'm definitely not implying that longevity is a higher or equal priority to AI alignment - it's not. I'm just saying that after AI alignment and maybe rationality itself, not dying [even if AGI doesn't come] seems like a pretty darn big deal to me. Is your position that AGI in our lifetime is so inevitable that other possibilities are irrelevant? Or that other possibilities are non-trivial (say above 10%) but since AGI i... (read more)

3Ben Pace
I mean, the field of AI has been around ~70 years, and it looks to me we’re more than half way through the route to AGI. So even if we got full life extension today it wouldn’t have that much impact for that many people.
4[anonymous]
I'll go out there and say it: longevity is a higher priority than AI alignment. I think this community got nerd sniped on AI alignment and it is simply against the social norms here to prioritize differently.

I agree, Ukraine was an exaggeration. I checked the tags and grants before asking the question, and am well aware of SENS, but I never thought of or heard of it being adjacent. Is it? I also didn't know of the three defunct institutions, so I should raise my estimate somewhat.

I have indeed spent a certain amount of time figuring out whether it's the case, and the answer I came to was "yep, definitely". I've edited the question to make it clearer. I didn't lay out the reasoning behind it, because I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to counterexamples (as Elizabeth and, in a somewhat stretched sense, Ben Pace did).

 

>low hanging fruit might be picked WRT mortality

I'm doubtful, but I can certainly see a strong argument for this! However my point i... (read more)

2Dustin
  Well, I'm not arguing in bad faith.  In fact, I'm almost not arguing at all!  If your premise is correct, I think it's a very good question to ask!  To the extent I am arguing it's with the assumption behind the premise.  To me, it does not seem readily apparent that rationalists are less concerned with mortality than they are with AI risk.  At least not so readily apparent that it can just be glossed over. To be clear, here I'm not actually making the low-hanging fruit argument. I'm just pointing out one of the things that came to mind that make your premise not so readily apparent to me. Another thing I thought about is that hardly anyone outside of the rationalist community is, or has ever, thought about AI risk. Most people probably don't even acknowledge that AI risk is a thing.  Mortality is thought about by everyone, forever.  It's almost as if mortality risk concern is a different reference class than AI risk concern. I think if you were to summarize my objection to just glossing over the premise of your question it's that relative amounts of rationalist activity surrounding mortality and AI risk is, to me, not sufficiently indicative of concern so that you can just gloss over the basis for your question.  If you are correct, I think it's very important, but it's not obvious to me that you are correct.  If you are correct, I think it's really important to make that argument rather than glossing it over. I spend maybe 2 minutes per day ensuring my doors are locked and maybe an hour per day picking out clothes, getting dressed, washing my face, doing my hair, etc.  I don't think that means I'm less concerned about the physical security of my home relative to my physical appearance! Yeah, I'm talking about exercise and "eating healthy" and all the stuff that everyone knows you should do but many don't because it's unpleasant and hard.     Anyway, I also think it's likely that the questions I'd want answered are so adjacent to the question you want ans

Thanks for the answer, that wasn't one of my top guesses! Based on your experience, do you think it's widely held in the community?

 

And I totally see how it kinda makes sense from a distance, because it's what the most vocal figures of the anti-aging community often claim. The problem is that the same was true 20 years ago - see the Methuselah Foundation's "make 90 the new 50 by 2030" - and probably 20 years before that. And, to the best of my understanding, while substantial progress has been made, there haven't been any revolutions comparable with... (read more)

I'd be happy to be proven wrong, and existence is generally much easier to prove than non-existence. Can you point to any notable rationality-adjacent organizations focused on longevity research? Bloggers or curated sequences? When was the last rationalist event with focus on life extension (not counting cryonics, it was last Sunday)? Any major figures in the community focused on this area?

To be clear, I don't mean "concerned about a war in Ukraine" level, I mean "concerned about AI alignment" level. Since these are the two most likely ways for the present... (read more)

3[anonymous]
Are you aware of SENS? There is massive overlap between them and the rationality community here in the Bay Area. They are, however, surprisingly underfunded and receive relatively little attention on sites like this compared with, say, AI alignment. So I see your point.

Tags on LW: Longevity, Aging

The now-defunct Longevity Research Institute and Daphnia Labs were founded and run by Sarah Constantin. Geroscience magazine was run by someone at a rationalist house. SENS is adjacent.  At least one ACX grant went to support a longevity researcher. I also know of private projects that have never been announced publicly. 

It is not AI-level attention, but it is much more than is given to Ukraine. 

5Dustin
I'm not arguing that you're wrong; I'm just saying that you seem to have assumed it was true without really setting out to prove it or line up convincing evidence. It just struck me that you seemed to be asking "why" before answering "if". I'm also not sure that the answers to your questions in this comment are necessarily as revealing as they might seem at first glance. For example, more of the low hanging fruit might be picked WRT mortality...not as much to be revealed. Maybe mortality is mostly about making ourselves do the right thing and akrasia type stuff, which gets discussed plenty. It might be that you're right, but if I were you I'd like to determine that first.
3Richard_Kennaway
It should be this.

As someone who is very much in favor of anti-aging, I'd answer it with something like this: "I'm fine with you entertaining all these philosophical arguments, and if you like them so much you literally want to die for them, by all means. But please don't insist that I and everyone I care about, or will care about, should also die for your philosophical arguments."

>we're perceiving things as "qualities", as "feels", even though all we are really perceiving is data

I consider it my success as a reductionist that this phrase genuinely does not make any sense to me.

>But he says he doesn't think the word "illusion" is a helpful word for expressing this, and illusionism should have been called something else, and I think he's probably right.

Yep, can't agree more, basically that's why I was asking - "illusion" doesn't sound like the right concept here.

-1TAG
An illusion is a misleading appearance, whereas a delusion is a false belief. Illusionists don't believe there are any appearances. They think that everything is a belief, but some beliefs are accompanied by a false meta-belief that they are not beliefs. So illusionism would be more literally described as delusionism.
3VirtuaLyric
A friend of mine has not-quite-complete-aphantasia, meaning he cannot "see" visual images in his "mind's eye", except for some rare occasions. When he remembers things he saw, or imagines what hypothetical things might look like, he almost never has mental imagery. But he can still recall information about what he saw, including stuff like "how many items were there", even though he did not consciously count the items when he was looking at them, and is only counting them by examining the memory when asked. I asked him how he was doing it, and he said it was like examining concepts about what he saw, and somehow using those concepts to figure out how many he remembers seeing. For me to do the same thing, I would have to bring a mental image from my memory into my "mind's eye", and count how many items I "see". But maybe this mental imagery of mine is just something I'm convinced I experience, rather than something I actually experience? Maybe having aphantasia is nothing more than lacking a certain delusion when it comes to remembered or imagined sights? After all, I haven't found anything that my mental images can figure out, which his concepts cannot. Or vice versa. In any case, aphantasia only applies to mental images. When my friend is actually looking at something with his eyes (not remembering or imagining), then just like me, he's convinced that what he's experiencing is something more than just concepts.

Those are all great points. Regarding your first question, no, that's not the reasoning I have. I think consciousness is the ability to reflect on myself firstly because it feels like the ability to reflect on myself. Kind of like how the reason I believe I can see is that when I open my eyes I start seeing things, and if I interact with those things they really are mostly where I see them - nothing more sophisticated than that. There are a bunch of longer, more theoretical arguments I can bring for this point, but I never thought I should because I was kind o... (read more)

-1TAG
That's nothing new, it's the intuition that the Mary thought experiment is designed to address.
1Signer
1. To me it looks like the defining feature of consciousness intuition is one's certainty in having it, so I define consciousness as the only thing one can be certain about and then I know I am conscious by executing "cogito ergo sum". 2. I can imagine disabling specific features associated with awareness starting with memory: seeing something without remembering feels like seeing something and then forgetting about it. Usually when you don't remember seeing something recent it means your perception wasn't conscious, but you certainly forgot some conscious moments in the past. Then I can imagine not having any thoughts. It is harder for long periods of time, but I can create short durations of just seeing that, as far as I remember, are not associated with any thoughts. At that point it becomes harder to describe this process as self-awareness. You could argue that if there is representation of the lower level somewhere in the high level, then it is still modeling. But there is no more reason to consider these levels parts of the same system, than to consider any sender-receiver pair as self-modeling system. 3. I don't know. It's all ethics, so I'll probably just check for some arbitrary similarity-to-human-mind metric. Depending on detailed definitions of "reflect on itself" and "model itself perceiving" I think you can make an agent that wouldn't claim to be perfectly certain in its own consciousness. For example, I don't see a reason why some simple cartesian agent with direct read-only access to its own code would think in terms of consciousness.

Ah, I see. My take on this question would be that we should focus on the word "you" rather than "qualia". If you have a conscious mind subjectively perceiving anything about the outside world (or its own internal workings), it has to feel like something, almost by definition. Like, if you went to get your covid shot and it hurt, you'd say "it felt like something". If and only if you somehow didn't even feel the needle piercing your skin, you'd say "I didn't feel anything". There were experiments proving that people can react to a stimulus they are not s... (read more)

2Signer
What's your theory for why consciousness is actually your ability to perceive yourself as a human mind? From your explanation it seems to be:
1. You think (and say) you have consciousness.
2. When you examine why you think it, you use your ability to perceive yourself as a human mind.
3. Therefore consciousness is your ability to perceive yourself as a human mind.
You are basically saying that the consciousness detector in the brain is an "algorithm of awareness" detector (and the algorithm of awareness can work as an "algorithm of awareness" detector). But what are the actual reasons to believe it? Only that if it is awareness, then it explains why you can detect it? It certainly is not a perfect detector, because some people will explicitly say "no, my definition of consciousness is not about awareness". And because it doesn't automatically fit into "If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something" if you just replace "conscious" by "able to perceive itself".

Your "definition" (which really isn't a definition but just three examples) have almost no implications at all, that's my only issue with it.

-1TAG
That's a feature, since it begs the minimal number of questions.

I don't think qualia - to the degree it is a useful term at all - has much to do with the ability to feel pain, or anything else. In my understanding, all definitions of qualia assume it is a different thing from purely neurological perceptions (which is what I'd understand by "feelings"); more specifically, that perceptions can generate qualia sometimes, in some creatures, but they don't do so automatically.

Otherwise you'd have to argue one of the two:

  1. Either even the most primitive animals, like worms which you can literally simulate neuron by neuron, have qualia
... (read more)
5TAG
Of course, the minimal definition of "qualia" I have been using doesn't have that implication.
1Signer
Ok, by these definitions what I was saying is "why not having ability to do recursion stops you from having pain-qualia?". Just feeling like there is a core of truth to qualia ("conceivability" in zombie language) is enough for asking your world-model to provide a reason why not everything, including recursively self-modeling systems, feels like qualialess feelings - why recursively self-modeling is not just another kind of reaction and perception?

Looking at your debate both with me and with Gordon below, it seems like your side of the argument mostly consists of telling the opponent "no, you're wrong" without providing any evidence for that claim. I honestly did my best to raise the sanity waterline a little, but without success, so I don't see much sense in continuing.

5TAG
We're mostly arguing about the definition of qualia. I've quoted Wikipedia, you haven't quoted anybody.

Sure, I wasn't claiming at any point to provide a precise mathematical model, let alone an implementation, if that's what you're talking about. What I was saying is that I have guesses as to what that mathematical model should be computing. In order to tell whether a person experiences a quale of X (in the sense of them perceiving this sensation), you'd want to see whether the sensory input from the eyes corresponding to the red sky is propagated all the way up to the top level of the predictive cascade - the level capable of modeling itself to a degree - and wh... (read more)

3TAG
Only you define "quale" in terms of experiencing versus not experiencing.

Replacing it with another word which you then use identically isn't the same as tabooing it; that kind of defeats the purpose.

>there can still be agreement that they are in some sense about sensory qualities.

There may be, but then it seems there's no agreement about what sensory qualities are.

>I've said so already, haven't I? A solution to the HP would allow you to predict sensory qualities from detailed brain scans, in the way that Mary can't.

No, you have not - in fact, in all your comments you haven't mentioned "predict" or "Mary" or "brain" even once. But now ... (read more)

5TAG
"They're things like the perceived sensation of pain of a headache, the taste of wine, as well as the redness of an evening sky." I don't believe that's difficult to understand. How do you tell that a putative explanation can predict something? You make a theoretical prediction, and you perform an experiment to confirm it. Otherwise, non-predictiveness is the default. So, a solution to the HP needs to be able to make a theoretical prediction: there needs to be some gizmo where you input a brain state and get a predicted quale as output.

Yeah, although it seems only in the sense where "everything [we perceive] is an illusion"? Which is not functionally different from "nothing is an illusion". Unless I'm missing something?

3VirtuaLyric
Illusionist theories of consciousness say that people do not actually ever experience phenomenal qualities; we just have the very deep delusion that phenomenal qualities are a thing. Whether it's a hallucination or not doesn't matter. Either way, our delusion tells us that we're perceiving things as "qualities", as "feels", even though all we are really perceiving is data. If I'm looking at something, then I am acquiring data that tells me, for instance, that there's a certain dark red shape at a certain location in my visual field, and a different reddish-orange shape near it, and whatever else. I'm acquiring the knowledge, the concept, of certain colors and brightnesses being in certain locations in my visual field. But I mistakenly think I am experiencing a dark red quality, and a reddish-orange quality, and the qualities of bright and dark and in-between, etc. I think I'm getting my visual input in the form of a subjective experience. This paper that I'm linking clears up several important points about the Attention Schema Theory. Graziano admits that "Illusionist theories emphasize how subjective awareness does not really exist – the brain tricks itself into thinking it has it. Obviously, AST aligns with that perspective." But he says he doesn't think the word "illusion" is a helpful word for expressing this, and illusionism should have been called something else, and I think he's probably right. (It seems like it's too late to change it now, though.) https://sci-hub.se/10.1080/02643294.2020.1761782

Yeah that sounds reasonable and in line with my intuitions. Where by "somebody" I would mean consciousness - the mind modeling itself. The difference between "qualia" and "no qualia" would be the difference between the signal of e.g. pain propagating all the way to the topmost, conscious level, which would predict not just receiving the signal (as all layers below also do), but also predict its own state altered by receiving the signal.  In the latter case, the reason why the mind knows there's "somebody" experiencing it, is because it observes (=pred... (read more)

4Signer
I believe it depends on one's preferences. Wait, you think it doesn't? By "ability to do recursion" I meant "ability to predict its own state altered by receiving the signal" or whatever the difference of the top level is supposed to be. I assumed that in your model whoever doesn't implement it doesn't have qualia therefore doesn't feel pain because there is no one to feel it. And for the interested in the Hard Problem the question would be "why this specific physical arrangement interpreted as recursive modeling feels so different from when the pain didn't propagate to the top level".

I'm not trying to pull the subject towards anything, I'm just genuinely trying to understand your position, and I'd appreciate a little bit of cooperation on your part in this - such as answering any of the questions I asked. And "I don't know" is a perfectly valid answer; I have no intention to "gotcha" you or anything like that, and by your own admission the problem is hard. So I'd ask you not to interpret any of my words above or below as an attack - quite the opposite, I'm doing my best to see your point.

>You should be using the famous hardness of the HP

... (read more)
5TAG
I'm not saying that. But answers to questions should be relevant. I've already done that. I can replace "qualia" with "sensory qualities", and point out that you are not solving the hard problem because you are not explaining sensory qualities. There's no real contradiction. Even though there is disagreement about some features of qualia, there can still be agreement that they are in some sense about sensory qualities. I used a simple, almost naive, definition, consisting of a few examples, for a reason. I've said so already, haven't I? A solution to the HP would allow you to predict sensory qualities from detailed brain scans, in the way that Mary can't.

The part that you quoted doesn't define anything, it's just 3 examples, which together might just as well be described simply as "sensations". And the Wikipedia article itself lists a number of different, non-equivalent definitions, none of which is anything I'd call rigorous, plus a number of references to qualia proponents who claim that this or that part of some definition is wrong (e.g. Ramachandran and Hirstein say that qualia could be communicated), plus a list of qualia opponents who have significant issues with the whole concept. That is exactly what ... (read more)

5TAG
No, they are about the quality of sensations. You keep trying to pull the subject towards "explaining sensation" because you actually can explain sensation, absent the qualities of sensation. But if the HP were really about explaining sensation in that way it wouldn't be hard. You should be using the famous hardness of the HP as a guide to understanding it ... If it seems easy, you've got it wrong. But that might be an isolated demand for rigour. FYI, there is no precise and universally accepted definition of "matter". Note that not everything that is true of qualia (or anything else) needs to be in the definition. I didn't say that. I'm not using an idiosyncratic definition. I would not expect a definition alone to answer every possible question. I once read a paper arguing that unseen qualia are a coherent idea, but I forget the details.

Thanks a lot for the links! I haven't looked into them yet, but the second quote sounds pretty much exactly like what I was trying to say, only expressed more intelligibly. I guess the broad concept is "in the air" enough that even a layman can grope their way to it.
