>Utility itself is an abstraction over the level of satisfaction of goals/preferences about the state of the universe for an entity.
You can say that a robot toy has a goal of following a light source, or that a thermostat has a goal of keeping the room temperature at a certain setting. But I have yet to hear anyone counting those things towards total utility calculations.
Of course a counterargument would be "but those are not actual goals, those are the goals of the humans that set it", but in that case you've just hidden all the references to humans inside the word "goal" and are back to square one.
So utility theory is a useful tool, but as far as I understand it's not directly used as a source of moral guidance (although I assume once you have some other source, you can use utility theory to maximize it). Whereas utilitarianism as a school of metaethics is concerned with exactly that, and you can hear people in EA talking about "maximizing utility" as an end in and of itself all the time. It was in this latter sense that I was asking.
To start off, I don't see much point in formally betting $20 on an event conditioned on something I assign <<50% probability of happening within the next 30 years (a powerful AI is launched and fails catastrophically, we're both still alive to settle the bet, and there is an unambiguous attribution of the failure to the AI). I mean sure, I can accept the bet, but largely because I don't believe it matters one way or another, so I don't think it counts from the standpoint of epistemological virtue.
But I can state what I'd disagree with in your terms if...
What Steven Byrnes said, but also my reading is that 1) in the current paradigm it's near-damn-impossible to build such an AI without creating an unaligned AI in the process (how else do you gradient-descend your way into a book on aligned AIs?) and 2) if you do make an unaligned AI powerful enough to write such a textbook, it'll probably proceed to convert the entire mass of the universe into textbooks, or do something similarly incompatible with human life.
It might, given some luck and all the pro-safety actors playing their cards right. Assuming by "all labs" you mean "all labs developing AIs at or near the then-current limit of computational power", or something along those lines, and by "research" you mean "practical research", i.e. training and running models. The model I have in mind is not that everyone involved will intellectually agree that such research should be stopped, but that a large enough share of the public and governments will get scared and exert pressure on the labs. Consider how most of the world...
The important difference is that nuclear weapons are destructive because they worked exactly as intended, whereas the AI in this scenario is destructive because it failed horrendously. Plus, the concept of rogue AI has been firmly ingrained into public consciousness by now, which afaik was not the case with extremely destructive weapons in the 1940s [1]. So hopefully this will produce more public outrage (and fear among the elites themselves) => stricter external and internal limitations on all agents developing AIs. But in the end I agree, it'll only buy t...
How possible is it that a misaligned, narrowly-superhuman AI is launched, fails catastrophically with casualties in the 10^4 - 10^9 range, and the [remainder of] humanity is "scared straight" and from that moment onward treats AI technology the way we treat nuclear technology now - i.e. effectively strangles it into stagnation with regulations - or even more conservatively? From my naive perspective it is somewhat plausible politically, based on the only example of ~world-destroying technology that we have today. And this list of arguments doesn't seem...
Yes and no. 1-6 are obviously necessary but not sufficient - there's much more to diet and exercise than "not too much" and "some" respectively. 7 and 8 are kinda minor and of dubious utility except in some narrow circumstances, so whatever. And 9 and 10 are hotly debated, and that's exactly what you'd need rationality for, as well as for figuring out the right pattern of diet and exercise. And I mean right for each individual person, not in general, and the same with supplements - a 60-year-old should have a much higher tolerance for the potential risks of a longevity treatment than a 25-year-old, since the latter has less to gain and more to lose.
I would be very surprised if inflammation or loss of proteostasis did not have any effect on fascia, if only because they have a negative effect on ~everything. But more importantly, I don't think any significant number of people are dying from fascia stiffness? That's one of the main ideas behind the hallmarks of aging: you don't have to solve the entire problem in its every minuscule aspect at once. If you could just forestall all these hallmarks, or even just some of them, you could probably increase lifespan and healthspan significantly, thus buying more time to fix other problems (or develop completely new approaches like mind uploading or regenerative medicine or whatever else).
You're fighting a strawman (nobody's going to deny death to anyone, and except for the seriously ill, most people who truly want to die now have an option to do so; I'm actually pro-euthanasia myself). And, once again, you want to inflict on literally everyone a fate you say you don't want for yourself. Also, I don't accept the premise that there's any innate power balance in the universe that we ought to uphold even at the cost of our lives; we do not inhabit a Marvel movie. And you're assuming knowledge which you can't possibly have, about exactly how human consciousness functions and what alterations to it we'll be able to make in the next centuries or millennia.
That's, like, a 99.95% probability, one-in-two-thousand odds. You'd have two orders of magnitude higher chances of survival if you were to literally shoot yourself with a literal gun. I'm not sure you can forecast anything at all (about humans or technologies) with this degree of certainty decades into the future, and definitely not that every single one of dozens of attempts in a technology you're not an expert in fails, and that every single one of hundreds of attempts in another technology you're not an expert in (building aligned AGI) fails.
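(For reference, the two figures are just complements of each other:

$$1 - 0.9995 = 0.0005 = \tfrac{1}{2000}$$

i.e. a 99.95% probability of failure is the same claim as one-in-two-thousand odds of success.)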
...I don't believe there are an
Equating high-risk/high-reward strategies with Pascal's Wager is an all too common failure mode, and putting numbers on your estimates helps to avoid it. How much is VERY TINY, how much do you think the best available options really cost, and how much would you be willing to pay (assuming you have that kind of money) for a 50% chance of living to 300 years?
To be clear, I'm not so much trying to convince you personally as trying to get a better general sense of the inferential distances involved.
but that's not anywhere near solving it in principle
Of course they are not, that's not the point. The point is that they can add more time for us to discover more cures - to the few decades most rationalists already have, considering the age distribution. During that time new approaches will likely be discovered, hopefully adding even more time, until we get to mind uploading, or nanobots constantly repairing the body, or some other complete solution. The concept is called longevity escape velocity.
...but I think it's more likely for bio-brains to continue dy
Oh no, what if I and everyone I care about only get to live 5 billion years instead of 80. And all that only to find out it was a half-assed hypothetical.
Just a reminder: in this argument we are not the modern people who get to feel all moral and righteous about themselves, we are the Greeks. Do you really want to die for some hypothetical moral improvement of future generations? If so, you can go ahead and be my guest, but I myself would very much rather not.
Hmm that's interesting, I need to find those people.
There are plenty of people who have AGI timelines that suggest to them that either AGI will kill them before they reach their natural mortality or AGI will be powerful enough to prevent their natural mortality by that point.
True, but there are also plenty of people who think otherwise, other comments here being an example.
I'm not a biologist, but I'm reasonably sure that fascia getting tenser would be downstream of the hallmarks of aging, if that's what you're talking about. It's kinda like asking why "going to a boardgame party in San Francisco" isn't on th...
I personally believe exactly the right kind of advocacy may be extremely effective, but that's really a story for a post. Otherwise yeah, AGI is probably higher impact for those who can and want to work there. However, in my observation the majority of rationalists do not in fact work on AGI, and IMO life extension and adjacent areas have a much wider range of opportunities and so could be a good fit for many of those people.
The way I see it, when we're talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild.
Am I reading this incorrectly or are you saying that you don't care about your friends and loved ones dying?
There are at least two currently ongoing clinical trials with an explicit goal of slowing aging in humans (TAME and PEARL); those are just the most salient examples. At some point I'll definitely make a post with a detailed answer to the question of ...
Smallpox is also a very old problem, and lots of smart people spent lots of time thinking about it, until they figured out a way to fix it. In theory, you could make an argument that no viable approaches exist today or in the foreseeable future, and so harm reduction is the best strategy (from the purely selfish standpoint; working on the problem would still help the people of the future in this scenario). However, I don't think in practice it would be a very strong argument, and in any case you are not making it.
If you're, say, 60+ then yes, anti-agin...
My impression is that it's more than most people do! [Although, full disclosure, I'm signed up with CI myself and following what I believe is the right pattern of diet and exercise. I'll probably start some of the highest benefit/risk ratio compounds (read: rapamycin and/or NAD+ stuff) in a year or two when I'm past 30.]
But also, how do you feel about donating to the relevant orgs (e.g. SENS), working in a related or adjacent area, and advocating for this cause?
Well, about 55 million people die per year, most of them from aging, so solving it for everyone today vs say 50-60 years later with AGI would save 2-3 billion potentially very, very long (maybe indefinite) lives. That definitely counts as "much impact for many people" in my book.
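A quick back-of-the-envelope check of that figure, assuming the death rate stays at roughly today's ~55 million per year over the waiting period:

$$5.5 \times 10^7 \ \tfrac{\text{deaths}}{\text{year}} \times (50\text{-}60) \ \text{years} \approx 2.75\text{-}3.3 \times 10^9 \ \text{lives}$$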
But also, what's the probability that we will indeed get AGI in the next 50 or 70 years? I mean, I know it's a hotly debated topic, so I'm asking for your personal best estimate.
Mortality is thought about by everyone, forever.
Technically probably yes, but the specific position of "it is something we can and should do something about right now" is unfortunately nearly as fringe as AI risk: a bunch of vocal advocates with a small following pushing for it, plus some experts in the broader field and some public figures maybe kinda tentatively flirting with it. So, to me these are two very comparable positions: very unconventional, but also very obvious if you reason from first principles and some basic background knowledge. ...
I'm well aware, but this comment section is the first time I hear there's a non-trivial overlap! Are you saying many active rationalists are SENS supporters?
So your argument is that people should die for their own good, despite what they think about it themselves? Probably not, since that would make you almost a caricature villain, but I don't see where else you are going with this. And the goal of "not developing an excruciatingly painful chronic disease" is not exactly at odds with the goal of "combating aging".
>By the way I would jump on the opportunity of an increased life span to say 200-300 years, 80 seems really short, but not indefinite extension
Ok, that's honestly good enough for me; I say let's get there and then argue whether we need more extension.
I'm no therapist, and not even good as a regular human being at talking about carrying burdens that make one want to kill themselves eventually; you should probably seek the advice of someone who can do a better job at it.
Cryonics is around 20 bucks a month if you get it through insurance, plus 120 to sign up.
With that out of the way, I think there is a substantial difference between "no LEV in 20 years" and "nothing can be done". For one thing, known interventions - diet, exercise, very likely some chemicals - can most likely increase your life expectancy by 10-30 years depending on how right you get it, your age, health and other factors. For another, even if working on the cause, donating to it or advocating for it won't help you yourself, it can still help many people you kn...
Dangerous proposition in what sense? Someone may die? Everyone may die? I have, um, not very good news for you...
>When one realizes how far life is from the rosy picture that is often painted, one has a much easier time accepting death, even while still fearing it or still wanting to live as long as possible.
Do you truly estimate your life as not worth or barely worth living? If yes, I'm deeply sorry about that and I hope you'll find a way to improve it. Let me assure you that there's many people, myself included, who truly genuinely love life and enjoy it.
If it's just a comforting lie you believe in believing to make the thought of death more tolerable, well, I can understand that - death really is terrifying - but then consider maybe not using it as an argument.
>I'm pretty concerned, I'm trying to prevent the AI catastrophe happening that will likely kill me.
That was one of my top guesses, and I'm definitely not implying that longevity is higher or equal priority than AI alignment - it's not. I'm just saying that after AI alignment and maybe rationality itself, not dying [even if AGI doesn't come] seems like a pretty darn big deal to me. Is your position that AGI in our lifetime is so inevitable that other possibilities are irrelevant? Or that other possibilities are non-trivial (say above 10%) but since AGI i...
I agree, Ukraine was an exaggeration. I checked the tags and grants before asking the question, and am well aware of SENS, but never thought of or heard of it being adjacent. Is it? I also didn't know of the three defunct institutions, so I should raise my estimate somewhat.
I have indeed spent a certain amount of time figuring out whether it's the case, and the answer I came to was "yep, definitely". I've edited the question to make it clearer. I didn't lay out the reasoning behind it, because I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to counterexamples (as Elizabeth and, in a certain stretched sense, Ben Pace did).
>low hanging fruit might be picked WRT mortality
I'm doubtful, but I can certainly see a strong argument for this! However my point i...
Thanks for the answer, that wasn't one of my top guesses! Based on your experience, do you think it's widely held in the community?
And I totally see how it kinda makes sense from a distance, because it's what the most vocal figures of the anti-aging community often claim. The problem is that this was also the case 20 years ago - see the Methuselah Foundation's "make 90 the new 50 by 2030" - and probably 20 years before that. And, to the best of my understanding, while substantial progress has been made, there haven't been any revolutions comparable with...
I'd be happy to be proven wrong, and existence is generally much easier to prove than non-existence. Can you point to any notable rationality-adjacent organizations focused on longevity research? Bloggers or curated sequences? When was the last rationalist event with focus on life extension (not counting cryonics, it was last Sunday)? Any major figures in the community focused on this area?
To be clear, I don't mean "concerned about a war in Ukraine" level, I mean "concerned about AI alignment" level. Since these are the two most likely ways for the present...
The now-defunct Longevity Research Institute and Daphnia Labs were founded and run by Sarah Constantin. Geroscience magazine was run by someone at a rationalist house. SENS is adjacent. At least one ACX grant went to support a longevity researcher. I also know of private projects that have never been announced publicly.
It is not AI-level attention, but it is much more than is given to Ukraine.
Meetup link seems to be broken
As someone who is very much in favor of anti-aging, I'd answer it something like this: "I'm fine with you entertaining all these philosophical arguments, and if you like them so much you literally want to die for them, by all means. But please don't insist that I and everyone I care or will care about should also die for your philosophical arguments."
we're perceiving things as "qualities", as "feels", even though all we are really perceiving is data
I consider it my success as a reductionist that this phrase genuinely does not make any sense to me.
But he says he doesn't think the word "illusion" is a helpful word for expressing this, and illusionism should have been called something else, and I think he's probably right.
Yep, can't agree more, basically that's why I was asking - "illusion" doesn't sound like the right concept here.
Those are all great points. Regarding your first question: no, that's not the reasoning I have. I think consciousness is the ability to reflect on myself firstly because it feels like the ability to reflect on myself. Kind of like the reason I believe I can see is that when I open my eyes I start seeing things, and if I interact with those things they really are mostly where I see them - nothing more sophisticated than that. There are a bunch of longer, more theoretical arguments I can bring for this point, but I never thought I should, because I was kind o...
Ah, I see. My take on this question would be that we should focus on the word "you" rather than "qualia". If you have a conscious mind subjectively perceiving anything about the outside world (or its own internal workings), it has to feel like something, almost by definition. Like, if you went to get your covid shot and it hurt, you'd say "it felt like something". If and only if you somehow didn't even feel the needle piercing your skin would you say "I didn't feel anything". There were experiments proving that people can react to a stimulus they are not s...
Your "definition" (which really isn't a definition but just three examples) have almost no implications at all, that's my only issue with it.
I don't think qualia - to the degree it is at all a useful term - has much to do with the ability to feel pain, or anything else. In my understanding, all definitions of qualia assume it is a different thing from purely neurological perceptions (which is what I'd understand by "feelings"); more specifically, that perceptions can generate qualia sometimes in some creatures, but they don't automatically do so.
Otherwise you'd have to argue one of the two:
Looking at your debate both with me and with Gordon below, it seems like your side of the argument mostly consists of telling the opponent "no, you're wrong" without providing any evidence for that claim. I honestly did my best to raise the sanity waterline a little, but without success, so I don't see much sense in continuing.
Sure, I wasn't claiming at any point to provide a precise mathematical model, let alone an implementation, if that's what you're talking about. What I was saying is that I have guesses as to what that mathematical model should be computing. In order to tell whether the person experiences a quale of X (in the sense of them perceiving this sensation), you'd want to see whether the sensory input from the eyes corresponding to the red sky is propagated all the way up to the top level of the predictive cascade - the level capable of modeling itself to a degree - and wh...
Replacing it with another word which you then use identically isn't the same as tabooing it; that kind of defeats the purpose.
there can still be agreement that they are in some sense about sensory qualities.
There may be, but then it seems there's no agreement about what sensory qualities are.
I've said so already, haven't I? A solution to the HP would allow you to predict sensory qualities from detailed brain scans, in the way that Mary can't.
No, you have not; in fact, in all your comments you haven't mentioned "predict" or "Mary" or "brain" even once. But now ...
Yeah, although it seems only in the sense where "everything [we perceive] is illusion"? Which is not functionally different from "nothing is illusion". Unless I'm missing something?
Yeah, that sounds reasonable and in line with my intuitions. Where by "somebody" I would mean consciousness - the mind modeling itself. The difference between "qualia" and "no qualia" would be the difference between the signal of e.g. pain propagating all the way to the topmost, conscious level - which would predict not just receiving the signal (as all layers below also do), but also its own state being altered by receiving the signal - and it not doing so. In the latter case, the reason the mind knows there's "somebody" experiencing it is that it observes (=pred...
I'm not trying to pull the subject towards anything, I'm just genuinely trying to understand your position, and I'd appreciate a little bit of cooperation on your part in this - such as answering any of the questions I asked. And "I don't know" is a perfectly valid answer; I have no intention to "gotcha" you or anything like that, and by your own admission the problem is hard. So I'd ask you not to interpret any of my words above or below as an attack; quite the opposite, I'm doing my best to see your point.
...You should be using the famous hardness of the HP
The part that you quoted doesn't define anything, it's just 3 examples, which together may just as well be defined simply as "sensations". And the Wikipedia article itself lists a number of different, non-equivalent definitions, none of which is anything I'd call rigorous, plus a number of references to qualia proponents who claim that this or that part of some definition is wrong (e.g. Ramachandran and Hirstein say that qualia could be communicated), plus a list of qualia opponents who have significant issues with the whole concept. That is exactly what ...
Thanks a lot for the links! I didn't look into them yet, but the second quote sounds pretty much exactly like what I was trying to say, only expressed more intelligibly. Guess the broad concept is "in the air" enough that even a layman can grope their way to it.
Yeah, but the problem here is that we perceive happiness in animals only insofar as it looks like our own happiness. Did you notice that the closer an animal is to a human, the more likely we are to agree it can feel emotions? An ape can definitely display something like human happiness, so we're pretty sure it can experience it. A dog can display something mostly like human happiness, so most likely they can feel it too. A lizard - meh, maybe, but probably not. An insect - most people would say no. Ma...