The software needs a way to track who is responding to which questions, because many of the questions relate to one another. It does that without requiring logins by using the ongoing HTTP session. If you leave the survey idle, the session will time out. You can suspend a survey session by creating a login, which the software will then use to keep track of your answers.
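Roughly speaking, the mechanism looks like this (a minimal sketch only; I'm assuming a Flask-style framework, and the route, names, and storage here are hypothetical rather than what the survey software actually does):

```python
# Sketch: tie survey answers to a browser via the HTTP session,
# with no login required. Assumes Flask; names are illustrative.
from flask import Flask, request, session
import uuid

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # signs the session cookie

# In-memory store: session id -> {question_id: answer}.
# A real deployment would persist this server-side.
responses = {}

@app.route("/answer/<question_id>", methods=["POST"])
def answer(question_id):
    # Reuse the session id if one exists, otherwise mint a new one;
    # this is what groups related answers together without a login.
    sid = session.setdefault("sid", str(uuid.uuid4()))
    responses.setdefault(sid, {})[question_id] = request.form["value"]
    return "ok"
```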

The cookies are there because it isn't a single server; requests are load-balanced across multiple web servers (a multi-active HA architecture). This survey isn't necessarily the only thing these servers will ever be running.
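The catch in that kind of setup is that per-process memory doesn't work: consecutive requests can land on different servers, so session data has to live somewhere all of them share. A hypothetical sketch using the redis-py client (the host name and key layout are mine, not necessarily what's deployed):

```python
# Sketch: shared session/answer storage behind a load balancer,
# so any web server in the pool sees the same data.
# Assumes the redis-py client; host and key names are illustrative.
import redis

store = redis.Redis(host="sessions.internal", port=6379)

def save_answer(sid: str, question_id: str, value: str) -> None:
    # One hash per survey session: field = question id, value = answer.
    store.hset(f"survey:{sid}", question_id, value)

def load_answers(sid: str) -> dict:
    return {k.decode(): v.decode()
            for k, v in store.hgetall(f"survey:{sid}").items()}
```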

(I didn't write the software, but I am providing the physical hosting it's running on.)

Even if he threw out the data, I have recurring storage snapshots happening behind the scenes (on the backing store for the OSes involved).

Do you have any good evidence that this assertion applies to cephalopods?

Cephalopods in general have actually been shown to be rather intelligent. Some species of squid even engage in courtship rituals. Given that they engage in courtship and predator/prey responses, and have been shown to respond to simple irritants with aggression, there's no good reason to assume they don't experience at least the emotions of lust, fear, and anger.

(Note: I model "animal intelligence" in terms of emotional responses; while these can often be very sophisticated, they lack abstract reasoning. Some animals are intelligent beyond 'simple' animal intelligence, but those are the exception rather than the norm.)

  • Be comfortable in uncertainty.

  • Do whatever the better version of yourself would do.

  • Simplify the unnecessary.

Now imagine a "more realistic" setting where humans went through a singularity (and possibly coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict.

There is Friendliness and there is Friendliness. Note: Ambivalence or even bemused antagonism would qualify as Friendliness so long as humans were still able to determine their own personal courses of development and progress.

An AGI that had as its sole ambition the prevention of other AGIs and unFriendly scenarios would actually allow a lot of what passes for bad science fiction in most space operas. AI cores on ships that can understand human language but don't qualify as fully sentient (because the real AGI is gutting their intellects); androids that are fully humanoid and perhaps even sentient, but haven't a clue why that is so (because you could rebuild human-like cognitive faculties by reverse-engineering the black box, but if you actually knew what was going on in the parts, you would have that information purged...) -- and so on.

And yet this would qualify as Friendly; human society and ingenuity would continue.

"If it weren't for my horse, I never would've graduated college." >_<

An omnipotent, omnibenevolent being would have no need for such "shorthand" tricks to create infinite worlds without suffering. Yes, you could always raise another aleph level for greater infinities; but only by introducing suffering.

Which violates omnibenevolence.

I don't buy it. A superhuman intelligence with unlimited power and infinite planning time and resources could create a world without suffering even without violating free will. And yet we have cancer and people raping children.

I am thiiiiiiiiis confident!

I'm surprised to see this dialogue make so little mention of the material evidence* at hand with regard to the specific claims of Christianity. I mean: a god which was omnipotent and omnibenevolent would surely create a world with less suffering for humanity than what we conjecture an FAI would orchestrate, yes? Color me old-fashioned, but I assign the logically** impossible a zero probability (barring, of course, my being mistaken about logical impossibilities).

* s/s//
** s/v/c/

"...but then changes its mind and brings us back as a simulation."

This is commonly referred to as a "counterfactual" AGI.
