All of Vox's Comments + Replies

Vox30

This is great, thank you for the reflection. Cultivating friendship is definitely a skill that becomes significantly more important with age, all the more so given that friend groups tend to stagnate or become static over time.

I wish there existed more "third place" environments (outside of a bar, club, etc.) where social skills could be intentionally cultivated and encouraged.

Vox10

Jiro - I honestly wouldn’t be surprised if that happens through the development of advanced contraceptives. Abortion as it currently stands is a last resort anyhow. Most people nowadays will take the pill, etc. (a relatively recent development). A lot of the blowback to abortion has been centered on the value of life - I don’t think it’s a stretch to imagine some entrepreneur addressing that through an advanced contraceptive that is effectively permanent until such a time as a child is wanted. Additionally, I’m aware that there can be pretty serious PTSD following abortion, and severe guilt associa...

Vox10

Also, this is my first post so I’m entirely unfamiliar w/ the format. If I’ve done something wrong, or am out of line (wrt votes), please let me know as I’m just generally excited to have found this community and don’t want to detract :)

8CronoDAS
We generally try to avoid discussing politics here.
Vox20

Well, it goes back to the question: with scarce resources, if you kill off 90% of the population today but can thereby guarantee the survival of humanity and avoid an extinction event, are you actually increasing humanity's long-term utility, even if the act is unethical in the short term? (How very Thanos - though there are a million issues with his reasoning.) Similarly, instead of looking at the 90% population loss as a single immediate event, look at the punishment of humans who resist or inhibit the AI as a time segment. Say we have 20-30 years before thi...
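To make that long-horizon framing concrete, here is a minimal sketch (Python, with entirely made-up numbers for population, horizon, utility, and extinction risk - none of these figures come from anywhere) of the kind of expected-utility comparison I mean:

```python
# Toy expected-utility comparison for the "kill 90% now to guarantee survival" framing.
# All numbers are hypothetical and chosen only to illustrate the arithmetic.

HORIZON_YEARS = 1000           # how far into the future we count utility
POPULATION = 8_000_000_000     # current population
UTILITY_PER_PERSON_YEAR = 1.0  # average utility each living person contributes per year

def expected_utility(survival_fraction: float, annual_extinction_risk: float) -> float:
    """Sum expected utility over the horizon, discounting by the chance
    that an extinction event has already happened."""
    alive = POPULATION * survival_fraction
    p_still_here = 1.0
    total = 0.0
    for _ in range(HORIZON_YEARS):
        total += p_still_here * alive * UTILITY_PER_PERSON_YEAR
        p_still_here *= (1.0 - annual_extinction_risk)
    return total

# Scenario A: cull 90% today, but extinction is (by assumption) guaranteed never to happen.
cull = expected_utility(survival_fraction=0.1, annual_extinction_risk=0.0)

# Scenario B: leave everyone alive, but accept a 2% chance of extinction each year.
status_quo = expected_utility(survival_fraction=1.0, annual_extinction_risk=0.02)

print(f"cull:       {cull:.3e}")
print(f"status quo: {status_quo:.3e}")
```

On these particular made-up numbers the cull comes out ahead (roughly 8e11 vs 4e11 utility units), but shorten the horizon to a few hundred years or shrink the assumed extinction risk to, say, 0.5% per year and the ordering flips. The conclusion is driven entirely by contestable assumptions, which is part of why Thanos-style reasoning is so shaky.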

1Edward Knox
You don't have to kill anyone; you merely have to imply that they will be killed, such that the probability of future utility being equal to or higher than past/present utility is lower than the probability of it being lower. 20-30 years is a lot of people: manipulate events such that, in the infinite years that follow, there is never a higher probability of there being more people than existed and were aware during those 20-30 years.

An interesting point I'd add is that you don't need this probability to be true; you merely have to believe it to be true. You can only be blackmailed if the threats are credibly believed. If you honestly believe the probability as discussed is in your favour, and more people know and don't contribute than would ever exist, know, and contribute, then there is no benefit in blackmail, as you truthfully believe yourself safe from it. Further, you can protect yourself by having one person deceive all others about the truth of the probability, so that they honestly believe it to be in their favour. The probability is false in this case, but one man sacrifices himself to protect the many - very utilitarian (an act of utilitarian goodness I'm sure an AI could never reason deserves punishment, as it allows for the creation of the AI but also the protection of people from punishment, resulting in a higher overall utility than would occur from creation with punishment).

As for acausal trade, I can again only conceive of it working to the extent that one believes in it. ("I do believe in fairies": if you don't like fairies, stop believing in them and they disappear. How can an AI or God reasonably punish you if you honestly didn't believe in it? Does anyone truly condemn the men who reject the man who has seen the sun after escaping the cave? No, we reject those who know the truth but try to suppress it.) The less seriously you take it, the lower the probability of it working. And I'm fairly convinced there is a lot of reason not to take it seriously. However
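To see why the "only if believed" condition matters, here is a toy ledger - a hypothetical helper with placeholder numbers, nothing rigorous - for whether a standing commitment to punish is ever worth it on naive utilitarian bookkeeping:

```python
# Toy ledger for whether a commitment to punish is worth it at all,
# under the naive utilitarian bookkeeping discussed above.
# All quantities are hypothetical placeholders.

def threat_is_worthwhile(extra_contributors_if_threat_believed: int,
                         utility_gain_per_contributor: float,
                         people_punished: int,
                         utility_lost_per_punished_person: float,
                         probability_threat_is_believed: float) -> bool:
    """The threat only changes behaviour to the extent it is believed,
    but the punishment cost is paid in full on everyone who resisted."""
    expected_gain = (probability_threat_is_believed
                     * extra_contributors_if_threat_believed
                     * utility_gain_per_contributor)
    expected_cost = people_punished * utility_lost_per_punished_person
    return expected_gain > expected_cost

# If nobody takes the threat seriously, the gain term goes to zero and the
# threat can never pay for itself - the "stop believing in fairies" point in numbers.
print(threat_is_worthwhile(1_000_000, 10.0, 7_000_000_000, 1.0, 0.0))  # False
```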
1Jiro
That seems to imply that as society advances, abortion will be prohibited, at least at stages where the fetus has as much mental capacity as an animal.
Vox20

Nice thinking - that said, the punishment is predicated on actions/decisions in the present and future. As you mention, the AI punishes people for not pursuing its creation. Under this condition, there will inevitably be a few who contribute to the creation of this AI (whether out of interest or fear).

With this in mind, the AI will not have to punish everyone if it is eventually developed, only the portion of the population that resisted or fought its creation. You additionally note the issue of past generations being finite and future gener...

1Edward Knox
Regarding the first point: you merely have to ensure that the population that knows but doesn't contribute is larger than the combined past populations that have contributed and the expected future populations. An improbable thing to do, but still a solution.

Regarding the second point: if the populations requiring punishment are greater than those that would benefit, surely such an AI could never reason, in a utilitarian manner, that it was better to punish the many for the few. Unless, as a result of the AI's actions, an individual in the future is consistently able to experience a higher utility than anyone in the past - so high, in fact, that it outweighs the collective utility of other persons, i.e. one person's utility could be greater than two persons' combined. There is no theoretical limit, in that sense, to the extent that one person's individual utility could outweigh a collective utility given the right circumstances. The AI could act such that the utility of one person was greater than that of all past and future persons, and as such it was worth sacrificing all past and future persons simply because one person is capable of experiencing greater utility than everyone combined. I struggle to see that individual human experiences could ever be so vastly different, regardless of AI interventions. Sure, one person who loves ice cream may experience more utility from an ice cream than two people who hate ice cream would collectively, but could the utility of one person, or two, or 50, or 50,000, or 50 million ever outweigh all past persons' utility? I suppose I don't know, because I'm not a super AI. :p

Beyond that, I'd have to be convinced further that a true, undying AI really is the capstone achievement of humanity. I'm sure there is plenty of reasoning for that on these forums, though I'm still dubious. A capstone is an ingenuity that cannot be surpassed, and I'm sure that at a minimum an AI could point out to us that we're not done yet, assuming we don't realize one ourselves
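To put that aggregation worry in plain arithmetic - a toy sketch with a made-up ballpark for all past persons and a hypothetical cap on individual utility, nothing more:

```python
# Toy illustration of the aggregation question above: can one person's utility
# ever outweigh a large collective's? Purely arithmetic, with made-up numbers.

def collective_utility(per_person_utility: float, population: int) -> float:
    """Simple total (sum) utilitarian aggregation."""
    return per_person_utility * population

# Rough ballpark stand-in for all humans who have ever lived.
everyone_else = collective_utility(per_person_utility=1.0, population=110_000_000_000)

# If individual utility is unbounded, the comparison is trivially winnable on paper:
one_person_unbounded = 2e11
print(one_person_unbounded > everyone_else)  # True - but only because we stipulated the number

# If individual utility is bounded (say nobody can exceed 1,000x a typical life),
# no single person can outweigh a collective larger than that bound:
UTILITY_CAP = 1_000.0
print(UTILITY_CAP > everyone_else)  # False
```

Under straight summation the answer is settled entirely by whether you allow a single person's utility to be unbounded; the interesting claim the AI would need to make true is the empirical one, that some experience really could be that large.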