Comments

Vox · 30

This is great, thank you for the reflection. Cultivating friendship is definitely a skill that becomes significantly more important with age, especially since friend groups have a tendency to stagnate or become static over time.

I wish there existed more "third place" environments (outside of a bar, club, etc.) where social skills could be intentionally cultivated and encouraged.

Vox · 10

Jiro - I honestly wouldn't be surprised if this changes through the development of advanced contraceptives. Abortion as it currently stands is a last resort anyhow; most people nowadays will take the pill or similar (a relatively recent development). A lot of the blowback to abortion has centered on the value of life, and I don't think it's a stretch to imagine an entrepreneur addressing that with advanced contraception that prevents conception until such time as a child is actually wanted. Additionally, I'm aware that there can be pretty serious PTSD following an abortion, along with severe guilt associated with terminating a potential sentient. I think circumstance and the lack of sufficiently advanced technology in the present force people to run a cost-benefit analysis and conclude that an abortion is necessary (since time is limited).

A sentient AI able to transcend spatiotemporal boundaries wouldn’t be limited by time.

Vox · 20

Well, it goes back to this question: with scarce resources, if you kill off 90% of the population today but can guarantee the survival of humanity and avoid an extinction event, are you actually increasing humanity's long-term utility, even if it's unethical in the short term? (How very Thanos - though there are a million issues with his reasoning.) Similarly, instead of looking at the 90% population loss as an immediate event, look at the punishment of resisting humans who inhibit the AI as a segment of time. Say we have 20-30 years before this AI is potentially developed. Is punishing 90% of the resisting humans who live and exist in this timeframe, and who could distort the AI's timeline, consequential when (as the AI) you weigh it against an infinitude of years of benefit to humanity (and the immortalization of their ideas and legacy)?
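To make the shape of that comparison concrete, here's a toy back-of-the-envelope sketch in Python. Every number in it (population, harm per person, annual benefit, horizons) is a placeholder I've made up purely for illustration; the point is only to show how an unbounded time horizon swamps any finite, one-time cost in a naive total-utility calculation.

```python
# Toy illustration of the "finite punishment vs. unbounded future benefit"
# comparison above. All numbers are invented placeholders, not claims.

def naive_total_utility(years_of_benefit: float,
                        annual_benefit_per_person: float = 1.0,
                        population: float = 8e9,
                        punished_people: float = 0.9 * 8e9,
                        harm_per_person: float = 100.0) -> float:
    """Total benefit over the horizon minus the one-time punishment cost."""
    one_time_cost = punished_people * harm_per_person
    ongoing_benefit = years_of_benefit * annual_benefit_per_person * population
    return ongoing_benefit - one_time_cost

for horizon in (30, 1_000, 1_000_000):
    print(horizon, naive_total_utility(horizon))
# Over a short horizon the total is negative; as the horizon grows without
# bound, the finite cost term becomes negligible. That is exactly the move the
# argument relies on - and exactly where questionable assumptions (discounting,
# moral side constraints, whether the future benefit is real at all) hide.
```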

Additionally, I was being facetious in calling AI a "capstone" achievement. Since we're considering the problem through the eyes of the AI, it doesn't seem far-fetched that the AI would consider its own creation humanity's capstone accomplishment.

Lastly, all of this depends on the AI's capacity to act acausally and overcome our spatiotemporal limitations (i.e. "time travel"). Under the assumption that the AI has this ability (a huge assumption, and I think the one that discredits the argument - there is no proof that time allows both positive and negative movement along its axis, which is the underlying assumption in Roko's Basilisk, and Roko himself admits the speculative nature of acausal trade), the AI has already guaranteed its own creation. Under this assumption, there's nothing we can do to influence the deterministic sequence of events it sets in motion, since we, unlike it, do not have this ability.

It's worth discussing. I think all the stifling of debate and discussion only makes the situation worse. The real topic up for debate is an AI using blackmail or harm as a means to some end. That is clearly wrong and clearly misguided, and I think a sufficiently advanced AI would quickly reason past it - it is bound to consider the possibility sooner or later anyhow - while the true challenge is maximizing the utility of life. Similarly, our history is marked by an evolution from disregard for living creatures that impeded our survival toward respect for other living creatures and for life itself (the abolition of slavery, veganism, better treatment of animals, etc.). With sentience comes greater respect for life and self-actualization.

Vox · 20

Nice thinking - that being said, the punishment is predicated on actions and decisions in the present and future. As you mention, the AI punishes people for not pursuing its creation. Under this condition, there will inevitably be a few who contribute to the creation of this AI (whether out of interest or fear).

With this in mind, the AI would not have to punish everyone if it is eventually developed, only the portion of the population that resisted or fought its creation. You also note the issue of past generations being finite and future generations being infinite. Given that, a friendly AI could reason that its existence would be of greater benefit to the future of humanity than the damage done by punishing past resistance. It could even reason that resisting or passive members are detrimental to the immortalization of the human condition through humanity's capstone achievement - a true, undying AI - and should therefore be punished to discourage resistance. You could even argue that the spread of the concept of Roko's Basilisk is itself a warning from a future AI to present-day resisters, as popular interest in AI appears to be on the rise.

That or, at the end of the day, it's just a fun thought experiment.