[Infographic] A reminder as to how far the rationality waterline can climb (at least, for the US).
Encountered at: https://whyevolutionistrue.wordpress.com/2011/03/20/click-and-weep/scientific-literacy/
Even if he threw out the data, I have recurring storage snapshots happening behind the scenes (on the backing store for the OSes involved).
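For illustration, a minimal sketch of such a recurring snapshot job, assuming a ZFS-backed store; the dataset name and interval are hypothetical, not the actual setup:

```python
# Illustrative only: periodically snapshot the backing store so that
# deleted data remains recoverable. Assumes a hypothetical ZFS dataset.
import subprocess
import time
from datetime import datetime, timezone

DATASET = "tank/vm-backing"  # hypothetical dataset name

while True:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    # "zfs snapshot <dataset>@<name>" creates a point-in-time snapshot.
    subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{stamp}"], check=True)
    time.sleep(3600)  # take a snapshot every hour
```

In practice a job like this would run under cron or a systemd timer rather than as a bare loop.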
Do you have any good evidence that this assertion applies to cephalopods?
Cephalopods in general have actually been shown to be rather intelligent. Some species of squid even engage in courtship rituals. Given that they engage in courtship and predator/prey responses, and have been shown to respond to simple irritants with aggression, there is no good reason to assume that they do not experience at the very least the emotions of lust, fear, and anger.
(Note: I model "animal intelligence" in terms of emotional responses; while these can often be very sophisticated, such intelligence lacks abstract reasoning. Some animals are intelligent beyond this 'simple' animal intelligence, but they are the exception rather than the norm.)
Be comfortable in uncertainty.
Do whatever the better version of yourself would do.
Simplify the unnecessary.
Found here: http://www.reddit.com/r/cogsci/comments/wb44q/my_dual_nback_browsergame_is_ready_for/
From the author:
> Quick notes:
>
> - If you experience any technical problems running the game, please let me know what browser and OS you're using.
> - If you're unfamiliar with what Dual N-Back is, this [1] is a good place to start reading.
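For those who prefer the rule to the prose, a minimal sketch of dual n-back's match condition (not the linked game's actual code; the stimulus encoding is illustrative):

```python
# A "match" occurs when the current stimulus equals the one from n steps
# back, scored independently for the two channels (grid position, sound).
def match_flags(stimuli, n):
    """stimuli: list of (position, sound) pairs -> list of (pos_match, sound_match)."""
    flags = []
    for i, (pos, snd) in enumerate(stimuli):
        if i < n:
            flags.append((False, False))  # nothing n steps back yet
        else:
            prev_pos, prev_snd = stimuli[i - n]
            flags.append((pos == prev_pos, snd == prev_snd))
    return flags

# Example with n=2: the third position repeats the first, the sound does not.
print(match_flags([(4, "K"), (1, "A"), (4, "C")], 2))
# -> [(False, False), (False, False), (True, False)]
```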
Now imagine a "more realistic" setting where humans went through a singularity (and, possibly, coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict.
There is Friendliness and there is Friendliness. Note: Ambivalence or even bemused antagonism would qualify as Friendliness so long as humans were still able to determine their own personal courses of development and progress.
An AGI that had as its sole ambition the prevention of other AGIs and unFriendly scenarios would allow a lot of what passes for bad science fiction in most space operas, actually. AI cores on ships that can understand human language but don't qualify...
"If it weren't for my horse, I never would've graduated college." >_<
An omnipotent omnibenevolent being would have no need for such "shorthand" tricks to create infinite worlds without suffering. Yes, you could always raise another aleph level for greater infinities; but only by introducing suffering at all.
Which violates omnibenevolence.
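For reference, the "raise another aleph level" move is backed by Cantor's theorem, which guarantees an endless ladder of strictly larger infinities:

```latex
% Cantor's theorem: every set is strictly smaller than its power set,
% so there is no largest infinity.
|X| < |\mathcal{P}(X)|
\quad\text{and hence}\quad
\aleph_0 < \aleph_1 < \aleph_2 < \cdots
```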
I don't buy it. A superhuman intelligence with unlimited power and infinite planning time and resources could create a world without suffering even without violating free will. And yet we have cancer and people raping children.
I am thiiiiiiiiis confident!
I'm surprised to see this dialogue make so little mention of the material evidence* at hand with regard to the specific claims of Christianity. I mean: a god which was omnipotent and omnibenevolent would surely create a world with less suffering for humanity than what we conjecture an FAI would orchestrate, yes? Color me old-fashioned, but I assign the logically** impossible a zero probability (barring, of course, my being mistaken about logical impossibilities).
* s/s//
** s/v/c/
> "...but then changes its mind and brings us back as a simulation."
This is commonly referred to as a "counterfactual" AGI.
Located here: http://www.macrovu.com/CCTGeneralInfo.html
Map 1: Can computers think?
Map 2: Can the Turing test determine whether computers can think?
Map 3: Can physical symbol systems think?
Map 4: Can Chinese Rooms think?
Map 5, Part 1: Can connectionist networks think?
Map 5, Part 2: Can computers think in images?
Map 6: Do computers have to be conscious to think?
Map 7: Are thinking computers mathematically possible?
These are available, apparently, for purchase in their full (wall-poster) size.
Link here. Al Jazeera website link for the video-disinclined.
A brief synopsis from the Al Jazeera website:
> Cyborgs, brain uploads and immortality - How far should science go in helping humans exceed their biological limitations? These ideas might sound like science fiction, but proponents of a movement known as transhumanism believe they are inevitable.
>
> In this episode of The Stream, we talk to bioethicist George Dvorsky; Robin Hanson, a research associate with Oxford’s Future of Humanity Institute; and Ari N. Schulman, senior editor of The New Atlantis, about the ethical implications of transhumanism.
Discuss below.
Check it out here. Brief summary: Dawkins demonstrates a classic "Prisoner's Dilemma" AI tournament. No big surprise to us today, but at the time the revelation that Tit for Tat is one of -- if not *the* -- most effective strateg(y|ies) was a surprising result. He goes on to demonstrate animals employing the Tit for Tat strategy. Default generosity, backed by vengefulness, appears to be strongly selected for.
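For the curious, a toy re-creation of such a tournament as a sketch, using the standard Axelrod payoff matrix; the strategies shown are illustrative:

```python
# Iterated Prisoner's Dilemma with the standard Axelrod payoffs:
# (my move, their move) -> (my points, their points).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first; afterwards, mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        gain_a, gain_b = PAYOFFS[(move_a, move_b)]
        score_a += gain_a
        score_b += gain_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (199, 204)
```

Tit for Tat loses only the first round to a pure defector and racks up mutual-cooperation points against nice strategies, which is roughly why it dominated Axelrod's round-robin tournaments.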
For those not familiar with the topic, Torture vs. Dustspecks asks the question: "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?"
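For scale: 3^^^3 is Knuth's up-arrow notation. A sketch of the hyperoperation it denotes (the number itself is far too large to ever compute):

```python
# Knuth up-arrows: a^b is exponentiation, a^^b is a tower of b copies of a,
# a^^^b iterates that, and so on. Illustrative only -- 3^^^3 = 3^^(3^^3)
# has unimaginably many digits.
def up_arrow(a, n, b):
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

assert up_arrow(3, 1, 3) == 27           # 3^3
assert up_arrow(3, 2, 3) == 3 ** 27      # 3^^3 = 7,625,597,484,987
# up_arrow(3, 3, 3) would be 3^^^3: a tower of about 7.6 trillion threes.
```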
Most of the discussion that I have noted on the topic adopts one of two assumptions in deriving its answer to that question: I think of one as the 'linear additive' answer, which says that torture is the proper choice for the utilitarian consequentialist, because a single person can only suffer so much over a fifty-year window, as compared to the incomprehensible number of individuals who suffer only minutely; the other I think...
ICMS (intracortical microstimulation) was successfully demonstrated for inducing artificial sensory stimulation in rhesus monkeys. (This is a significant -- albeit minuscule -- step forward for data-in brain-computer interfaces.)
-- http://www.nature.com/nature/journal/vaop/ncurrent/full/nature10489.html --
Greetings, fellow LessWrongians.
What follows is an as-yet poorly formed notion on my part, which I am relating in an attempt to get at the meat of it and perhaps contribute to the higher-order goal of becoming a better rationalist myself. As such, I will attempt to restrict any responses I give in the comments to explanations of points of fact, or of my own opinions if directly requested, but otherwise will not argue any particulars for purposes of persuasion.
For a few years now a general notion -- what originally led me to discover the LessWrong site itself, in fact -- has rattled around in my brain, which I only today have derived...
The software needs a way to track who is responding to which questions, because many of the questions relate to one another. It does that without requiring logins by using the ongoing HTTP session. If you leave the survey idle, the session will time out. You can suspend a survey session by creating a login, which it will then use for your answers.
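For the curious, a minimal sketch of this kind of session-based answer tracking, assuming a Flask-style framework; the route, field names, and timeout are illustrative rather than the survey's actual code:

```python
from datetime import timedelta
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"           # signs the session cookie
app.permanent_session_lifetime = timedelta(minutes=30)  # idle sessions time out

@app.route("/answer", methods=["POST"])
def record_answer():
    session.permanent = True                  # enables the idle timeout above
    answers = session.get("answers", {})
    # Answers are keyed by question id so later, related questions can
    # consult earlier responses -- no login required.
    answers[request.form["question_id"]] = request.form["value"]
    session["answers"] = answers              # reassign so the change persists
    return "ok"
```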
The cookies thing is because it's not a single server but load-balanced across multiple webservers (a multi-active HA architecture). This survey isn't necessarily the only thing these servers will ever be running.
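A minimal sketch of the cookie-based affinity this implies, assuming hash-based routing; the hostnames and hashing scheme are illustrative:

```python
# The balancer pins each visitor to one backend so their session state
# stays on the server that created it. Names here are made up.
import hashlib

BACKENDS = ["web1.example.org", "web2.example.org", "web3.example.org"]

def pick_backend(affinity_cookie: str) -> str:
    """Map an affinity cookie value to the same backend every time."""
    digest = hashlib.sha1(affinity_cookie.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

print(pick_backend("session-abc123"))  # deterministic for a given cookie
```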
(I didn't write the software but I am providing the physical hosting it's running on.)