All of tivelen's Comments + Replies

A vaccination requirement could result in lower apparent effectiveness; so could risk compensation. In order to determine how much risk compensation occurred, we have to determine how much the vaccination requirement lowered the effectiveness. Without that analysis, it is premature to conclude that risk compensation has a big enough effect to cause, or contribute significantly to, negative apparent effectiveness.

I am otherwise unsure of what you are trying to get at. The unvaccinated were prevented from doing a risky activity, and the vaccinated were allowed to do the activity (with a lower risk due to their status), yes.

1cistrane
Well, if this is consistently applied across many events, the unvaccinated will not be allowed risky activities and the vaccinated will be allowed risky activities. Which means, in practice, a consistently higher number of risky activities available to the vaccinated. I agree that this effect might not be significant, and more measurements would be needed.

I have a hypothesis that seems to fit the data. These numbers are not given out for the purpose of collecting data on vaccine side effects (that's what VAERS is for). They are intended to provide specialized medical care directed at those who have recently gotten vaccines.

Evidence:
One commenter reported calling a Walgreens number. If this is representative, these are local pharmacy/medical practice numbers that people are calling, not some national reporting service.

Reassurance is one of the jobs of anyone providing medical care. "Even though you aren't ... (read more)

Suppose 50% of vaccinated people would attend this event, and so would 50% of unvaccinated people, after considering the risks (ergo, there is no risk compensation). However, only vaccinated people are allowed to go to the event. Then the vaccinated people could have increased rates of Covid compared to unvaccinated people because of being more likely to attend superspreader events, even though they did not increase their level of risk compared to the unvaccinated population.

Whether this is the actual reason for the apparent negative effectiveness would depend on the actual percentages, and how common/dangerous superspreader events really are.
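
A quick toy model makes the arithmetic concrete. Every number below is a made-up assumption for illustration, not an estimate from real data:

```python
# Toy model of the selection effect described above.
# All numbers are invented assumptions, not estimates from real data.

base_risk = 0.01        # background infection risk for an unvaccinated person
efficacy = 0.80         # true vaccine efficacy (relative risk reduction)
event_risk = 0.30       # extra unvaccinated-equivalent risk from attending the event
attend_fraction = 0.50  # fraction of each group who would attend; only the vaccinated may

vax_base = base_risk * (1 - efficacy)

# Unvaccinated: barred from the event, so only background risk applies.
unvax_rate = base_risk

# Vaccinated: half attend and pick up the (efficacy-reduced) event risk.
vax_rate = (1 - attend_fraction) * vax_base + attend_fraction * (
    vax_base + event_risk * (1 - efficacy)
)

apparent_effectiveness = 1 - vax_rate / unvax_rate
print(f"unvaccinated rate: {unvax_rate:.4f}")                    # 0.0100
print(f"vaccinated rate:   {vax_rate:.4f}")                      # 0.0320
print(f"apparent effectiveness: {apparent_effectiveness:+.2f}")  # -2.20
```

With these invented numbers the vaccinated infection rate exceeds the unvaccinated rate, so measured effectiveness goes negative even though the vaccine itself cuts risk by 80%; whether this happens in reality depends entirely on the actual percentages, as noted above.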

1NormanPerlmutter
This is true to an extent. Unvaccinated people are still able to attend. They just would need to forge their vaccination card. I think this is not particularly hard to do, though it's not trivially easy and many unvaccinated people would not do it for ethical reasons.
3cistrane
But effectively, the unvaccinated were not allowed to have the same level of risk as vaccinated if they couldn't come to the event, right?

I searched the CDC's Vaccine Adverse Event Reporting System (VAERS) and there are 474 reported cases of abnormal blood pressure following COVID-19 vaccination. Looking further through the Google search results, I found a study (n = 113) which indicated increased risk of high blood pressure after vaccination, especially after previous infection.

Plainly, not everyone in the healthcare system is on the same page about side effects. I'd err on the side of the Walgreens person you talked to being more accurate, given that high blood pressure is a known side effect. Not known by that Nebraska Medicine doctor, apparently.

2Jayson_Virissimo
Will DM you the number.

I'm wondering what the details of your friends' reporting attempts are. Who exactly did they talk to? VAERS is the official U.S. reporting system; what were their experiences with that? If there is an underreporting problem, we need as many specifics as we can get to combat it. Given that some vaccines do have well-known side effects among certain demographics, lots of people have been able to report their side effects successfully. We would need to figure out why your friend group has been far less successful in order to correct the issue.

Without an explicit probab... (read more)

-4Valentine
Sadly yes, at least on my side. I think your questions are very sane. Sadly I'm not the person to do this kind of data collection. The way some people have the opposite of a green thumb when it comes to plants, I have something like that for putting together numerically focused models. As soon as I move away from geometry or contact with physical reality, errors like 2+3=6 dominate and my models' output becomes gobbledegook. I was astoundingly good at geometry and utter garbage at algebra in math grad school. I think most of the people I'm referring to were pointed at VAERS. This was from months ago, buried in old Facebook threads, so it'd take quite a bit of digging to find and I'm not sure I could. So this is based on a fuzzy impression of seeing that acronym in that context. But I do recall many of them were given a hotline number to call if they got side effects, and in calling the number they got the "Well, the vaccines are safe, so these must be from something else" line. Yep. This has been part of my problem. I'm living in a sea of vastly deeper uncertainty than the people around me seem to think they're in. I'm hoping to do slightly better than either of "No one knows anything and anyone who claims otherwise is deluded" or "My tribe is right." I've just been having a lot of trouble finding that alternative. (…and this discussion is helping.)

What does it mean to Left-box, exactly? As in, under what specific scenarios are you making a choice between boxes, and choosing the Left box?

If you compare deaths to harms, you can end up scared of vaccines or Covid, depending on which you compare. If no one died of a vaccine in your group but one or two people were hurt by Covid, you will be scared of Covid. The question is, where does the framing come from? If no one died of Covid or a vaccine in your group (which seems to be the most likely case for a given group), which do you become scared of, and why?

2cistrane
Let's say you are a man in his 20s in the USA. You believe (perhaps mistakenly) that if you get sick with covid, the government will foot the bill. On the other hand, if you get the rare myocarditis from the vaccine, you will be stuck with the bills. Does this create a weird incentive for a young man to avoid vaccination on the grounds of financial risk of ruin?

Perhaps such probabilities are based on intuition, and happen to be roughly accurate because the intuition has formed as a causal result of factors influencing the event? In order to be explicitly justified, one would need an explicit justification of intuition, or at least intuition within the field of knowledge in question.

I would say that such intuitions in many fields are too error-prone to justify any kind of accurate probability assessment. My personal answer then would be to discard probability assessments that cannot be justified, unless you have s... (read more)

My approach was not helpful at all, which I can clearly see now. I'll take another stab at your question.

You think it is reasonable to assign probabilities, but you also cannot explain how you do so or justify it. You are looking for such an explanation or justification, so that your assessment of reasonableness is backed by actual reason.

Are you unable to justify any probability assessments at all? Or is there some specific subset that you're having trouble with? Or have I failed to understand your question properly?

1Ege Erdil
I think you can justify probability assessments in some situations using Dutch book style arguments combined with the situation itself having some kind of symmetry which the measure must be invariant under, but this kind of argument doesn't generalize to any kind of messy real world situation in which you have to make a forecast on something, and it still doesn't give some "physical interpretation" to the probabilities beyond "if you make bets then your odds have to form a probability measure, and they better respect the symmetries of the physical theory you're working with". If you phrase this in terms of epistemic content, I could say that a probability measure just adds information about the symmetries of some situation when seen from your perspective, but when I say (for example) that there's a 40% chance Russia will invade Ukraine by end of year 2022 this doesn't seem to correspond to any obvious symmetry in the situation.
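
For readers unfamiliar with the Dutch-book argument mentioned above, here is a minimal sketch; the betting prices are hypothetical:

```python
# Minimal sketch of a Dutch book: you quote betting prices on an event A
# and its complement that do not sum to 1. The prices are hypothetical.

price_A = 0.50      # you sell a bet paying $1 if A happens, for $0.50
price_not_A = 0.40  # you sell a bet paying $1 if A doesn't happen, for $0.40

# An adversary buys both bets; exactly one of them pays out $1.
adversary_cost = price_A + price_not_A   # 0.90
adversary_payout = 1.00                  # regardless of the outcome
guaranteed_profit = adversary_payout - adversary_cost

print(f"adversary profit, no matter what happens: {guaranteed_profit:+.2f}")  # +0.10
```

Coherence (prices forming a probability measure) is exactly what rules this out; and, as the parent comment says, it constrains your odds without supplying any physical interpretation for them.
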
tivelen-10

Suppose an answer appeared here, and when you read it, you were completely satisfied by it. It answered your question perfectly. How would this world differ from one in which no answer remotely satisfied you? Would you expect to have more accurate beliefs, or to be better able to achieve your goals?

If not, to the best of your knowledge, why have you decided to ask the question in the first place?

[This comment is no longer endorsed by its author]
1Ege Erdil
I don't know what you mean here. One of my goals is to get a better answer to this question than what I'm currently able to give, so by definition getting such an answer would "help me achieve my goals". If you mean something less trivial than that, well, it also doesn't help me to achieve my goals to know if the Riemann hypothesis is true or false, but RH is nevertheless one of the most interesting questions I know of and definitely worth wondering about. I can't know how an answer I don't know about would impact my beliefs or behavior, but my guess is that the explanation would not lead us to change how we use probability, just like thermodynamics didn't lead us to change how we use steam engines. It was, nevertheless, still worthwhile to develop the theory.

After 1960, the upper classes retained most of them, but the working classes experienced major declines. These were societal in extent; no blame assigned, it is simply what happened.

Why that happened seems to be the key to reversing it, though. If the four virtues are needed to get things back together, but they can fade from society for reasons unknown, trying to get them back is like bailing water from a sinking ship.

Human sib-testing seems like it would be useful, for one thing. There was a post here about cloning great people from the past. We will be able to do that in the future if most moderately-well-off people keep pre-emptive copies.

In theory, this would have the same use cases as typical cloning, with an upfront cost and time delay. The main benefit it has over current cloning tech is that it avoids the health issues for the clones, which currently make it unviable.

We could clone people successfully with no further advances in science, or unusual costs. The et... (read more)

3gwern
You would get one copy, but why not just use embryo selection instead to shift the population up? The point of cloning in breeding programs is typically to enable a single elite donor to make a huge and disproportionate contribution to the population, when you've already started using selection strategies like truncation selection to the top 10%. I hardly need to explain why no such things are at all viable for human populations. That didn't seem like it was all that big a deal. Even Dolly's siblings were fine. Human cloning hasn't been tried and found hard; it hasn't been tried. It's like CRISPR editing human babies; if your inference from the absence of CRISPR babies pre-He Jiankui was that "gosh, this must be very hard", you were very wrong. The true problems with human cloning were never any objections about it being super-difficult or hard to figure out. We could get human cloning to work with very little collective research effort. (How much did those Chinese primates cost? A few million bucks?) Compared to many things researchers accomplish... This is one reason why the Raelians weren't laughed out of the building when they announced they had cloned someone: the experts knew deep down that a serious effort would definitely succeed. It's just no one wants to look like a Raelian or eugenicist, and there is little demand compared to more ordinary ways of having kids. (Who wants a clone of themself? Gay couples? No, because then one is left out; they'd rather have gametogenesis so they can have a true genetic child of them both.) So, it doesn't happen. And it'll continue on not happening for who knows how long. Lots of things are like that.

To me the question is this: given that people like communities and presumably would be happy to pay money for them, why isn't this currently a factor in the housing market?

 

I'm not sure what you're getting at here. Could you describe how the housing market would be different if this was currently a factor?

2J Bostock
As the unmet demand for housing at all levels currently outstrips supply, the optimal local move is to replace cheaper-per-space housing with expensive-per-space housing, where the latter is targeted towards rich people, whenever permission from local government can be obtained. If the unmet demand for housing at all levels were much smaller, then this move wouldn't be profitable by default, and developers would have to choose where to build new marginal rich-people-targeted houses more carefully. For some human-desirable variable "strength of community", rents/sale prices will be higher the more of it is present. Then the obvious choice is to build your new development such that the "strength of community" of the removed building is lowest relative to the "strength of community" of the new building. The existence of this sort of choice would mean that existing communities that people like would be less likely to be removed.

As of now, we cannot unfreeze people who have been cryogenically frozen and successfully revive them. However, we can freeze 5-day-old fertilized eggs and revive them successfully years later. When exactly does an embryo become unrevivable?

Identical twins split at around one week after fertilization, so if it were possible to revive past then, we could freeze one twin and let the other gestate, and effectively clone the gestated twin whenever desired. Since we can artificially induce twinning, then we could give every newly born person the ability to be cl... (read more)

3gwern
When it becomes roughly rabbit-kidney-sized is, I think, the answer: ~12g, so maybe around week 10? Sure, they could be 'cloned' (once). But it's a weird scenario. If you freeze development of one embryo but not the other, what motivation would you or the grown-up one have to implant it later? (Outside, of course, of agricultural applications like cattle, where one could use "sib testing with embryo transfer": the point would be sib-testing to see how the first sibling performs compared to predictions, to decide whether to implant more embryos/clones of it into surrogate mothers.)

In what way does this post do those bad things you mentioned? There is no mention of breaking innocent secrets, or secrets that would cause unjust ostracization, only patterns of actually harmful behavior.

If this post was made in confidence to you, would you tell others of it anyway?

Any pattern identified by induction either continues to hold, in which case it is fine to believe it, or it stops holding, in which case it must be adjusted. A generalization is a form of induction, and so acts the same. Could you provide an example of induction leading down a garden path?

1M. Y. Zuo
I can think immediately of Maxwell's electromagnetic theory following the previously accepted theory of some 'Luminiferous aether', which was at the time believed to be what light propagated through in a vacuum. Going from Newton to 'Luminiferous aether' using induction works fine, explains many observable phenomena, and is somewhat elegant too. Compare the next step to Maxwell's equations, which are horrendously baroque and reliant on much more sophisticated math, with some really bizarre implications that were difficult to accept until Einstein came along. There doesn't appear to be any way induction would have led you to the correct result had you been researching this topic in the mid-19th century. In fact, many people did waste their lives on the garden path, trying to induce onwards from the aether.

Knowing that the sun will come up in the morning is knowledge, and a success of induction. You do not even need to know that the Earth orbits the sun to have that knowledge. There is more to know about the sun, but that is yet more success of induction, and does not erase the previous success as if it were worse than knowing nothing.

An observed pattern in reality works so long as reality is observed to obey the pattern. If the pattern breaks, the previous inductive hypothesis is adjusted. "The sun will rise in the morning" is an excellent inductive predict... (read more)

1M. Y. Zuo
After some reflection on what you wrote and what I wrote before, I think the problem I was trying to articulate is actually an interesting subset of a more general problem, namely the Halting problem as it applies to humans. That is, how does one know when to stop inducing on a chain of inductions? Surely there has to be a threshold, as with the neutrino example, beyond which induction will most likely yield a misleading answer that, if taken at face value like every previous stage of induction, will lead down a garden path. Identifying that threshold every time may indeed be impossible without knowing everything.

This is something I've thought about recently. Even if you cannot identify your goals, you still have to make choices. The difficult part is in determining the distribution of possible M. In the end, I think the best I've been able to do is to follow convergent instrumental goals that will maximize the probability of fulfilling any goal, regardless of the actual distribution of goals. It is necessary to let go of any ego as well, since you cannot care about yourself more than another person if you don't care about anything, now can you?

1eugene_black
Yeah, I think for general activities we can make a list of things that have positive utility in most cases. For example:

1. Always care about your health and life. It is the base of everything. You can't do much if you are sick or dead.
2. Don't do anything illegal. You can't do much if you are in prison.
3. Keep good relationships with everybody if that does not take much effort. Social status and connections are useful for almost anything.
4. Money and time are universal currencies. Try to maximize your hourly income, but leave enough space for the other things on the list.
5. Keep your mind in good shape. Mind degradation can be very fast if you don't care, and you need your mind for rationality.
6. Spend some time on research of the M problem. Not too much, because you will lose the other items on the list, but enough to make progress; otherwise you will spend all your life in this goal-less loop and end up regretting that you never spent enough effort to break out.

Etc. I think this can be a very wide list.

Interesting, that was something I considered but didn't think was included in the idea of confidence. I have experienced that before. The stakes of a situation also seem like an objective fact, like competence. Perhaps the subjective evaluations of stakes and competence are entangled in the feeling of confidence. Maybe it has something to do with low variance of outcomes? If you have done something a lot, or if it doesn't really matter, then there isn't anything to worry about, because nothing that matters is up for grabs in the situation.

In the graphs, is "confidence" referring to "confidence in my ability to improve", then? And so we are graphing competence vs. ability to improve competence?

Otherwise, if I'm trying to place myself on one of these graphs, I'm simply unable to do anything but follow the dotted line. There is no "felt sense of confidence" that I can identify in myself that doesn't originate in "I am competent at this".

3gbear605
I think the key here is when your self-rating of competence differs from your actual competence. If someone is bad at karate (low competence) but thinks they're really good (high confidence), they'll be in the bottom left area. This could go wrong for them if someone attacked them and they attacked back and totally messed up. On the other side, if someone is good at karate (high competence) but doubts themself (low confidence), they'd be in the top right area. This could go wrong for them if someone attacked them and they assumed that they're bad and didn't bother to fight back, even if they could've defended themself successfully.
4Duncan Sabien (Deactivated)
How about "anticipated okayness of failure"?  Like, one may typically proceed "more confidently" in an arena that doesn't matter/where there are low or no stakes, than in an arena where one fears the consequences of a misstep.  Does that match any subjective experience you have?

Knowledge is initially local. Induction works just fine without a global framework. People learn what works for them, and do that. Once the whole globe becomes interconnected, we each have more data to work with, but still most of it is irrelevant to any particular person's purposes. We cannot even physically hold a small fraction of the world's knowledge in our head, nor would we have any reason to.

Differences cannot be "settled" by words, only revealed and negotiated around. We have different knowledge because we have different predispositions, and diffe... (read more)

1M. Y. Zuo
Thanks for the neat thoughts. I truly believe some differences can be settled by words, because there exists a class of differences that arise due to misperceptions, misunderstandings, etc., and are not grounded in anything substantive otherwise. Otherwise why would LW even exist? Induction works fine without a global framework only if the inducer can correctly perceive the relationships between what they are observing. Someone lacking such capability would inevitably become confused in their analysis when they stumble upon some component, at a deep enough level, that has dependent relationship(s) on other things far away in spacetime or perception. I.e., it works until it doesn't. For example, it wasn't that long ago that no one on this planet understood how neutrinos worked, even though neutrinos are actually quite critical to understanding many interrelated phenomena, some of which are quite vital to understanding physics in general, not to mention all the dependent fields. And induction by no means guaranteed anything close to the correct conclusion. Of course folks had hunches, or just pretended to know, and some pretended to be able to induce from what knowledge was at the time available. But in fact no one really could once they hit the wall of confusion surrounding neutrinos. Which is to say, no one on this planet could correctly induce beyond a certain point in anything even if they wanted to, regardless of starting topic, from the best ways of writing an essay or Buddhist history all the way down to neutrino physics. Everyone's powers of induction would have failed sooner or later. It's just that practically no one bothered to go so deep in their analysis, outside of some small groups, so it was assumed that induction just works. I imagine the same principle applies in any complex area of knowledge.

How is confidence different from the belief you have in your own competence? Your self-reported confidence and competence should always be the same.

Is there something I'm missing, some way that confidence is distinct from belief in competence?

1IrenicTruth
I had a similar issue. I could not do the exercise because I could not figure out how to evaluate confidence and competence separately. I always end up on the x=y line. Reading this thread did not help. "Anticipated okayness of failure" doesn't change much with time for the same task, so that is a vertical line. "Confidence" = "self-rated ability to improve" is an interesting interpretation (working on "confidence" would be working on learning skills). Still, it intuitively feels off from what the graphs say (though I haven't been able to put the disconnect into words). Thinking about the improv/parachute graph, maybe "confidence" is "willingness to attempt a task despite being incompetent." I'm giving up for now.
6Duncan Sabien (Deactivated)
The word "confidence" is a bit fuzzy, and is conflating a few things, here, but also I think that's okay. One of the things is the delta between one's self-reported competence and one's felt sense of confidence—I agree that in a certain sense they would always be the same if people were perfect perceivers and perfect reasoners, but they usually aren't. Another is something like ... maybe you would call it meta-confidence?  i.e. "I'm just a white belt and I really suck at these roundhouse kicks, but I'm going to confidently proceed throwing them, counting on repetition to help me improve!"

What is the mechanism, exactly? How do things unfold differently in high school vs. college with the laptop if someone attempts to steal it?

1dkirmani
It's probably because it's much easier to steal from somebody you don't know. When everyone knows everyone, little theft occurs.

Do you have any examples in mind?

3dkirmani
One thing is that it's much harder to blatantly steal from the commons in sub-Dunbar groups, because everyone knows everyone else, so formal norm-enforcement (police, RAs) is unnecessary. Social sanctions suffice. Despite students having high variance in family income, property theft was a non-issue. In high school, I could save myself one of the good seats in the library by leaving my laptop there, but if I did the same thing here in the engineering library (I go to UIUC, a large state college), my laptop would likely be taken within minutes. There is an asabiyyah in small groups that does not exist for larger ones.

If an altruist falls on hard times, they can ask other altruists for help, and those altruists can decide to divert their charitable donations if they consider it worth more to help the altruist. If the altruists are donating to the same charities, it is very likely that restoring the in-need altruist's ability to donate will more than pay for the donations diverted.

If charitable donations cannot be faked, and an altruist's report of hard times preventing their charity can be trusted, then this will work to provide a financial buffer based purely on ... (read more)
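
The break-even claim can be made concrete with a back-of-the-envelope calculation; all dollar amounts below are invented for illustration:

```python
# Back-of-the-envelope version of the break-even claim above.
# All dollar amounts are invented for illustration.

annual_donation = 5_000   # what the struggling altruist normally gives per year
diverted_support = 8_000  # one-time support diverted from others' donations
years_restored = 3        # years of donating capacity the support restores

restored_giving = annual_donation * years_restored  # 15,000
net_change = restored_giving - diverted_support     # +7,000

print(f"net change in total donations: {net_change:+,} USD")
```

The diversion pays for itself whenever restored giving exceeds the diverted amount, which is plausible when hard times are temporary and the support is small relative to the altruist's remaining lifetime giving.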

1bice
A crowdfunding network is functionally very similar to what I alluded to in my last question: In a crowdfunding network, the crowdfunders decide for themselves if they consider the situation to be an emergency. In my version, a centralized fund would be the judge. I think the main advantage of my version is that people immediately know which door to knock on if they're in trouble.

I appreciate the benefits of the karma system as a whole (sorting, hiding, and recommending comments based on perceived quality, as voted on by users and weighted by their own karma), but what are the benefits of specifically having the exact karma of comments be visible to anyone who reads them?

Some people in this thread have mentioned that they like that karma chugs along in the background: would it be even better if it were completely in the background, and stopped being an "Internet points" sort of thing like on all other social media? We are not immun... (read more)

1Raelifin
I agree that there are benefits to hiding karma, but it seems like there are two major costs. The first is in reducing transparency; I claim that people like knowing why something is selected for them, and if karma becomes invisible the information becomes hidden in a way that people won't like. (One could argue it should be hidden despite people's desires, but that seems less obvious.) The other major reason is one cited by Habryka: creating common knowledge. Visible karma scores help people gain a shared understanding of what's valued across the site. Rankings aren't sufficient for this, because they can't distinguish relative quality from absolute quality (e.g., I'm much more likely to read a post with 200 karma than one with 50, even if the former is ranked lower due to staleness).

by running a simulation of you and seeing what that simulation did.

A simulation of your choice "upon seeing a bomb in the Left box under this scenario"? In that case, the choice to always take the Right box "upon seeing a bomb in the Left box under this scenario" is correct, and what any of the decision theories would recommend. Being in such a situation does necessitate the failure of the predictor, which means you are in a very improbable world, but that is not relevant to your decision in the world you happen to be in (simulated or not).

Or: A simulation... (read more)

3Heighn
Good point. It seems to me Left-boxing is still the right answer though, since your decision procedure would still 'force' the predictor to predict you Left-box.
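
One way to see why the decision procedure matters is a back-of-the-envelope, policy-level comparison in the style of the standard bomb thought experiment; the error rate and the two cost figures below are hypothetical stand-ins:

```python
# Policy-level expected-value comparison for the bomb scenario.
# The error rate and cost figures are hypothetical stand-ins.

EPSILON = 1e-24        # chance the near-perfect predictor errs
COST_RIGHT = 100       # taking Right always costs $100
COST_BOMB = 1_000_000  # disutility assigned to taking Left onto a bomb

# The predictor places a bomb in Left iff it predicts you take Right.
# Evaluate each policy before any prediction is made:

ev_always_right = -COST_RIGHT            # bomb is placed, but you go Right
ev_always_left = -EPSILON * COST_BOMB    # bomb present only if the predictor errs

print(f"EV(always Right): {ev_always_right}")  # -100
print(f"EV(always Left):  {ev_always_left}")   # -1e-18
```

The Left-boxing policy wins ex ante; the disagreement between decision theories is over whether that ex-ante calculation still binds once you are actually looking at a bomb in Left, i.e. in the improbable world where the predictor has already failed.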

Such a system doesn't prescribe which action from that set, but in order for it to contain supererogatory actions, it has to say that some are more "morally virtuous" than others, even within that narrowed set. These are not prescriptive moral claims, though. Even if you follow this moral system, a statement "X is more morally virtuous but not prescribed" coming from this moral system is not relevant to you. The system might as well say "X is more fribble". You won't care either way, unless the moral system also prescribes X, in which case X isn't supererogatory.

If I am not obliged to do something, then why ought I do it, exactly? If it's morally optimal, then how could I justify not doing it?

1JBlack
Many systems of morality are built more like "do no harm" than "do the best possible good at all times". That is, you are morally obliged to choose actions from a particular set in some circumstances, but they do not prescribe which action from that set.
1TAG
There are things that are good to do, but not obligatory.

Supererogatory morality has never made sense to me previously. Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn't, in which case you should instead do the optimally moral thing. Surely you are morally blameworthy for explicitly choosing not to do good regardless. You cannot simply buy a video game instead of mosquito nets because the latter is "optional", right?

I read about slack recently. I nodded and made affirmative noises in my head, excited to have learned a new concept that surely had use in the pursui... (read more)

3JBlack
I suspect that it's a combination of a lot of things. Slack, yes. Also Goodhart's law, in that optimizing directly for any particular expression of morality is liable to collapse it. There are also second and greater order effects from such moral principles: people who truly believe that people must always do the single most moral thing are likely to fail to convince others to live the same way, and so reduce the total amount of good that could be done. They may also disagree on what the single most moral thing is, and suffer from factionalization and other serious breakdowns of coordination that would be less likely in people who are less dogmatic about moral necessity. It's a difficult problem, and certainly not one that we are going to solve any time soon.
2Vladimir_Nesov
It's useful, but likely not valuable-in-itself for people to strive to be primarily morality optimizers. Thus the optimally moral thing could be to care about the optimally moral thing substantially less than sustainably feasible.
1TAG
That's all downstream of an implicit definition of "what I am obliged to do" as "the optimally moral thing". If what you are obliged to do is less demanding, then there is space for the supererogatory.

The only difference between this and current methods of painless and quick suicide is how "easy" it is for such an intention and understanding to turn into an actual case of non-existence.

Building the rooms everywhere and recommending their use to anyone with such an intention ("providing" them) makes suicide maximally "easy" in this sense. On a surface level, this increases freedom, and allows people to better achieve their current goals.

But what causes such grounded intentions? Does providing such rooms make such conclusions easier to come to? If someone... (read more)

1Astor
This is a thoughtful analysis of possible effects. Thank you for this. I do not want to have such rooms because I do not want to lose anybody ever. But sometimes there is a tendency in humans toward quick decisions, which would be supported by such an invention. I suppose this thought experiment shows me that blocking access to easy decision making has potential value.

I tested Otter.ai for free on the first forty minutes of one podcast (Education and Charity with Uri Bram), and listening at 2x speed allowed me to make a decent transcript at 1x speed overall, with a few pauses for correction. The main time sinks were separating the speakers and correcting proper nouns, both of which seem to be features of the paid $8.33/month version of the program (which, if used fully, would cost $0.001/minute). If those two time sinks are in fact totally fixed by the paid version, I could easily imagine creating a decent accurate ... (read more)

The most likely scenario for human-AGI contact is some group of humans creating an AGI themselves, in which case all we need to do is confirm its general intelligence to verify the existence of it as an AGI. If we have no information about a general intelligence's origins, or its implementation details, I doubt we could ever empirically determine that it is artificial (and therefore an AGI). We could empirically determine that a general intelligence knows the correct answer to every question we ask (great knowledge), can do anything we ask it to (grea... (read more)

Rationalists may conceive of an AGI with great power, knowledge, and benevolence, and even believe that such a thing could exist in the future, but they do not currently believe it exists, nor that it would be maximal in any of those traits. If it has those traits to some degree, such a fact would need to be determined empirically based on the apparent actions of this AGI, and only then believed.

Such a being might come to be worshipped by rationalists, as they convert to AGI-theism. However, AGI-atheism is the obviously correct answer for the time being, for the same reason monotheistic-atheism is.

1andzuck
What empirical evidence would someone need to observe to believe that such an AGI, that is maximal in any of those traits, exists?

Your system may not worry about average life satisfaction, but it does seem to worry about expected life satisfaction, as far as I can tell. How can you define expected life satisfaction in a universe with infinitely many agents of varying life-satisfaction? Specifically, given a description of such a universe (in whatever form you'd like, as long as it is general enough to capture any universe we may wish to consider), how would you go about actually doing the computation?

Alternatively, how do you think that computing "expected life satisfaction" can avoid the acknowledged problems of computing "average life satisfaction", in general terms?
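
To make the difficulty concrete, here is a sketch, assuming countably many agents (the setup is illustrative, not taken from the post), of why the naive partial-average definition is order-dependent:

```latex
% Sketch: with countably many agents, satisfactions s_i, and an enumeration
% \sigma of the agents, the natural candidate is the limit of partial averages:
\[
  \bar{s}(\sigma) \;=\; \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} s_{\sigma(i)}.
\]
% If half the agents have s_i = 1 and half have s_i = -1, reordering them
% (say, two +1 agents for every -1 agent) changes the limit; suitable
% enumerations give any value in [-1, 1], or no limit at all. So "expected
% life satisfaction" is not well-posed without extra structure, such as a
% privileged ordering or a probability measure over the agents.
```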

1Chantiel
Please see this comment for an explanation.