All of IlluminateReality's Comments + Replies

Have your probabilities for AGI arriving in given years changed at all since this breakdown you gave 7 months ago? I, and I’m sure many others, defer quite a lot to your views on timelines, so it would be good to have an updated breakdown.

15% - 2024
15% - 2025
15% - 2026
10% - 2027
5% - 2028
5% - 2029
3% - 2030
2% - 2031
2% - 2032
2% - 2033
2% - 2034
2% - 2035

Daniel Kokotajlo
My 2024 probability has gone down from 15% to 5%. Other than that things are pretty similar, so just renormalize I guess.  
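For anyone wondering what “just renormalize” amounts to numerically, here is a minimal sketch. It assumes one particular reading: the 2024 mass drops to 5% and the remaining years are scaled up proportionally so the total mass through 2035 stays at its original 78%. The comment itself doesn’t specify the procedure, so treat this as an illustration rather than Kokotajlo’s actual updated numbers.

```python
# Sketch of "renormalize": lower the 2024 mass and rescale the remaining
# years proportionally so the total probability mass through 2035 is preserved.
# This interpretation is an assumption, not stated in the comment.

probs = {
    2024: 0.15, 2025: 0.15, 2026: 0.15, 2027: 0.10,
    2028: 0.05, 2029: 0.05, 2030: 0.03, 2031: 0.02,
    2032: 0.02, 2033: 0.02, 2034: 0.02, 2035: 0.02,
}

original_total = sum(probs.values())        # 0.78; the rest is "2036 or later / never"
new_2024 = 0.05
remaining_mass = original_total - new_2024  # mass to spread over 2025-2035
old_rest = original_total - probs[2024]     # 0.63

updated = {2024: new_2024}
for year, p in probs.items():
    if year != 2024:
        updated[year] = p * remaining_mass / old_rest

print({year: round(p, 3) for year, p in updated.items()})
print("total:", round(sum(updated.values()), 3))  # still ~0.78
```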

Are you implying that all of the copies of yourself should meaningfully be thought of as the same person? Why would making more copies of yourself increase your utility?

Also, I take it from “never, ever be any escape” that you believe quantum immortality is true?

Tomás B.
I think about anticipated future experiences. All future slices of me have the same claim to myself. 

From the point of view of trying to reduce personal risk of s-risks, trying to improve the world’s prospects seems like a way to convince yourself you’re doing something helpful, without meaningfully reducing personal s-risk. I have significant uncertainty about even the order of magnitude by which I could reduce personal s-risk through activism, research, etc., but I’d imagine it would be less than 1%. To be clear, this does not mean that I think doing these things is a waste of time; in fact they’re probably among the highest expected utility things anyone can do,... (read more)

mishka
Yes, we don't really know how reality works; that's one of the problems. We don't even know if we are in a simulation. So it's difficult to be certain.

It did occur to me that they will try to "wake you up" once (if that's feasible at all) and ask if you really meant to stay dead (while respecting your free will and refraining from manipulation). And it did occur to me that it's not clear if resurrection is possible, or if a bad singularity would bother to resurrect you, even if it is possible. So, in reality, one needs to have a better idea about all kinds of probabilities, because the actual "tree of possible scenarios" is really complicated (and we know next to nothing about those). So I ended up noting that this does reflect my uncertainty about all this...

---

Ah, I had not realized that you were talking not just about the transformation being sufficiently radical ("end of the world known to us"), but about it specifically being bad...

My typical approach to all that is to consider non-anthropocentric points of view (this allows one to take a step back and to think in a more "invariant" way). In this sense, I suspect that "universal X-risk" (that is, the X-risk which threatens to destroy everything, including the AIs themselves) dominates (I am occasionally trying to scribble something about that: https://www.lesswrong.com/posts/WJuASYDnhZ8hs5CnD/exploring-non-anthropocentric-aspects-of-ai-existential). So while it is possible to have scenarios where huge suffering is inflicted but the "universal X-risk" is somehow avoided, that does not seem, to my (unsubstantiated) intuition, too likely. The need to control the "universal X-risk" and to protect the interests of individual members of the AI ecosystem requires a degree of "social harmony" of some sort within the AI ecosystem. I doubt that anthropocentric approaches to AI alignment are likely to fare well, but I think that a harmonious AI ecosystem where a... (read more)

I appreciate the concern. I’m actually very averse to committing suicide, and any motivated reasoning on my part will be on the side of trying to justify staying alive. (To be clear, I think I have ample reasons to stay alive, at least for the time being.) My concern is that there might be good reasons to commit suicide (at some point), in which case I would rather know of them than be ignorant.

I think that this post by Rob Bensinger should help you understand various ideas around self-identity. 

Hi everyone!
I found lesswrong at the end of 2022, as a result of ChatGPT’s release. What struck me fairly quickly about lesswrong was how much it resonated with me. Many of the ways of thinking discussed on lesswrong were things I was already doing, without knowing there was a name for them. For example, I thought of the strength of my beliefs in terms of probabilities long before I had ever heard the word “bayesian”.

Since discovering lesswrong, I have been mostly just vaguely browsing it, with some periods of more intense study. But I’m aware that I haven’t be... (read more)

habryka
Welcome! I hope you have a good time here!