IlluminateReality

Have your probabilities for AGI arriving in given years changed at all since this breakdown you gave 7 months ago? I, and I'm sure many others, defer quite a lot to your views on timelines, so it would be good to have an updated breakdown.

15% - 2024
15% - 2025
15% - 2026
10% - 2027
5% - 2028
5% - 2029
3% - 2030
2% - 2031
2% - 2032
2% - 2033
2% - 2034
2% - 2035
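
For my own reference, summing these per-year figures gives the cumulative picture. A minimal sketch (assuming the quoted numbers are meant as mutually exclusive probabilities of AGI first arriving in each calendar year):

```python
# Cumulative P(AGI by end of year), summing the per-year probabilities quoted above.
# Whatever mass is left over (~22% here) falls on 2036 or later, or on AGI never arriving.
per_year = {
    2024: 0.15, 2025: 0.15, 2026: 0.15, 2027: 0.10,
    2028: 0.05, 2029: 0.05, 2030: 0.03, 2031: 0.02,
    2032: 0.02, 2033: 0.02, 2034: 0.02, 2035: 0.02,
}

cumulative = 0.0
for year, p in per_year.items():
    cumulative += p
    print(f"P(AGI by {year}) ~= {cumulative:.0%}")

print(f"P(AGI after 2035, or never) ~= {1 - cumulative:.0%}")
```

This puts roughly even odds on AGI by 2027 and 78% by 2035 under the quoted breakdown.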

Are you implying that all of the copies of yourself should meaningfully be thought of as the same person? Why would making more copies of yourself increase your utility?

Also, I take it from “never, ever be any escape” that you believe quantum immortality is true?

From the point of view of reducing my personal risk of s-risks, trying to improve the world's prospects seems like a way to convince yourself you're doing something helpful without meaningfully reducing personal s-risk. I have significant uncertainty about even the order of magnitude by which I could reduce personal s-risk through activism, research, etc., but I'd imagine it would be by less than 1%. To be clear, this does not mean that I think doing these things is a waste of time; in fact, they're probably among the highest expected-utility things anyone can do. They're just not a particularly effective way to reduce personal s-risk. However, this plausibly changes if you factor in that being someone who helped make the singularity go well could put you in a favourable position post-singularity.

Regarding resurrection, do you know what the LessWrong consensus is on the view that continuity of consciousness is what makes someone the same person they were 5 minutes ago? My impression is that this idea doesn't really make sense, but it's an intuitive one and a source of some of my uncertainty about the feasibility of resurrection.

I’m surprised you think that a good singularity would let me stay dead if I had decided to commit suicide out of fear of s-risk. Presumably the benevolent AI/s would know that I would want to live, no?

Also, just a reminder that my post was about what to do conditional on the world starting to end (think nanofactories and geoengineering and the AI/s being obviously not aligned). This means that the obvious paths to utopia are already ruled out by that point, although perhaps we could still get a slice of the lightcone for acausal trade / decision-theoretic reasons.

Also yeah, whether suicide is rational or not in this situation obviously comes down to your personal probabilities of various things.

I appreciate the concern. I'm actually very averse to committing suicide, and any motivated reasoning on my part will be on the side of trying to justify staying alive. (To be clear, I think I have ample reasons to stay alive, at least for the time being.) My concern is that there might be good reasons to commit suicide (at some point), in which case I would rather know of them than be ignorant.

I think that this post by Rob Bensinger should help you understand various ideas around self-identity. 

Hi everyone!
I found LessWrong at the end of 2022, as a result of ChatGPT's release. What struck me fairly quickly about LessWrong was how much it resonated with me. Many of the ways of thinking discussed on LessWrong were things I was already doing, but without knowing the names for them. For example, I thought of the strength of my beliefs in terms of probabilities long before I had ever heard the word "Bayesian".

Since discovering LessWrong, I have mostly just been vaguely browsing it, with some periods of more intense study. But I'm aware that I haven't been improving my world model at the rate I would like: I have been spending far too much time reading things I basically already know, or that give me only a small amount of extra information. So I recently decided to pivot to optimising for the accuracy of my map of the territory. Some of the areas I want to understand better are perhaps the more "weird" topics discussed on LessWrong, such as quantum immortality and the simulation hypothesis.