
Comment author: Viliam 26 March 2016 09:38:18PM *  5 points

Error
We are sorry but your session has expired.
Either you have been inactive for too long, you have cookies disabled for your browser, or there were problems with your connection.
Please contact namespace ( root@localhost ) for further assistance.

If you have to leave the computer in the middle of the survey, the software will punish you by throwing away your already completed answers. Really sucks after having completed about 100 of them. :(

What the hell was the purpose of checking whether someone was "inactive for too long"? So what, they were inactive, now they are active again, what's the big deal? Sometimes real life intervenes.

(Problems with connections happen too; I have a crappy wi-fi connection that I often have to restart several times a day. But that wasn't the case now. Also, why can't the software deal with disabled cookies? Calling root@localhost and waiting for an explanation...)

EDIT: If you happen to find yourself in a similar situation, use the e-mail mentioned in the article. As long as you remember enough data to uniquely identify your half-written response, the situation can be fixed.

Comment author: Logos01 26 March 2016 10:12:46PM *  7 points

The software needs a way to track who is responding to which questions, because many of the questions relate to one another. It does that without requiring logins by using the ongoing HTTP session. If you leave the survey idle, the session will time out. You can suspend a survey session by creating a login, which it will then use to store your answers.

The cookies thing is because it's not a single server; the survey is load-balanced between multiple web servers (a multi-active HA architecture), and the cookie is what keeps your requests tied to your session. This survey isn't necessarily the only thing these servers will ever be running.
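
A minimal sketch of the idle-timeout behaviour being described, in Python against an assumed in-memory session store (this is not the survey's actual code; every name and value here is illustrative):

    import time

    IDLE_TIMEOUT = 30 * 60  # assumed idle limit in seconds; the survey's real value isn't stated

    sessions = {}  # session_id -> {"last_seen": float, "answers": dict}

    def touch(session_id):
        """Look up a session, expiring it if it has been idle too long."""
        now = time.time()
        session = sessions.get(session_id)
        if session is None or now - session["last_seen"] > IDLE_TIMEOUT:
            sessions.pop(session_id, None)  # in-progress answers are lost with the session
            return None
        session["last_seen"] = now  # any activity resets the idle clock
        return session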

(I didn't write the software but I am providing the physical hosting it's running on.)

Comment author: Yvain 26 March 2016 02:49:37AM 1 point

If you throw out the data, I request you keep the thrown-out data somewhere else so I can see how people responded to the issue.

Comment author: Logos01 26 March 2016 03:09:40AM 1 point

Even if he threw out the data, I have recurring storage snapshots happening behind the scenes (on the backing store for the OSes involved).
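
The comment doesn't say which snapshot mechanism is involved; as one hedged illustration, a recurring snapshot of a ZFS backing store can be as simple as the following (the dataset name is hypothetical):

    import datetime
    import subprocess

    def snapshot_backing_store(dataset="tank/vm-backing"):  # hypothetical dataset name
        """Take a timestamped point-in-time ZFS snapshot of the backing store,
        so data later deleted by the application remains recoverable."""
        stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d-%H%M%S")
        subprocess.run(["zfs", "snapshot", f"{dataset}@auto-{stamp}"], check=True)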

Comment author: bogdanb 28 August 2013 08:03:42PM 2 points

All known organisms that think have emotions.

Do you have any good evidence that this assertion applies to cephalopods? I.e., either that they don't think or that they do have emotions. (Not a rhetorical question; I know only enough about them to realize that I don't know.)

Comment author: Logos01 26 October 2013 05:22:09AM 3 points

Do you have any good evidence that this assertion applies to cephalopods?

Cephalopods in general have actually been shown to be rather intelligent. Some species of squid even engage in courtship rituals. Given that they engage in courtship and predator/prey responses, and have been shown to respond to simple irritants with aggression, there's no good reason to assume they don't experience at least the emotions of lust, fear, and anger.

(Note: I model "animal intelligence" in terms of emotional responses; while these can often be very sophisticated, they lack abstract reasoning. Some animals are intelligent beyond 'simple' animal intelligence, but they are the exception rather than the norm.)

In response to Useful maxims
Comment author: Logos01 11 July 2012 03:39:32PM *  0 points
  • Be comfortable in uncertainty.

  • Do whatever the better version of yourself would do.

  • Simplify the unnecessary.

Dual N-Back browser-based "game" in public alpha-testing state.

3 Logos01 10 July 2012 03:36AM

Link.

 

Found here: http://www.reddit.com/r/cogsci/comments/wb44q/my_dual_nback_browsergame_is_ready_for/

 

From the author:

 Quick notes:

  • If you experience any technical problems running the game, please let me know what browser and OS you're using.
  • If you're unfamiliar with what Dual N-Back is, this [1] is a good place to start reading (a minimal code sketch of the mechanic also follows below).
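
For readers who prefer code to prose, here is a minimal sketch of the core dual n-back mechanic in Python; it is not based on the linked game's source, and all names and parameters are illustrative:

    import random

    def dual_n_back_round(n=2, trials=20, letters="CHKLQRST", positions=9):
        """Present a stream of (position, letter) pairs; the player scores by
        flagging trials where the position and/or letter matches the stimulus
        from n trials earlier."""
        history = []
        for t in range(trials):
            stimulus = (random.randrange(positions), random.choice(letters))
            if t >= n:
                pos_match = stimulus[0] == history[t - n][0]
                letter_match = stimulus[1] == history[t - n][1]
                # A real game compares these against the player's key presses;
                # here we just print the correct answers for each trial.
                print(f"trial {t}: {stimulus} position match: {pos_match}, "
                      f"letter match: {letter_match}")
            history.append(stimulus)

    dual_n_back_round()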

 

 

Comment author: Vladimir_Golovin 26 April 2012 07:16:11PM *  -1 points

Yep. Most mass-market space operas are guilty of this. Despite having the knowledge and resources to fly to other planets, humans in them still have to shoot kinetic bullets at animals.

However, to be entertaining (at least for the mainstream public), stories have to depict a protagonist (or a group thereof) who changes through conflict, and the conflict has to be winnable, resolvable -- it must "allow" the protagonist to use his wit, perseverance, luck, and whatever else to win.

Now imagine a "more realistic" setting where humans went through a singularity (and, possibly, coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict. If the singularity was unfriendly, humans either have already been disassembled for atoms or soon will be -- and they have no chance to win against the AI, because the capability gap is too big. Neither branch has much story potential.

This applies to game design as well -- enemies in a game built around a conflict have to be "repeatedly winnable", otherwise the game would become an exercise in frustration.

(I think there is some story / game potential in the early FOOM phase where humans still have a chance to shut it down, but it is limited. A realistic AI has no need to produce hordes of humanoid or monstrous robots vulnerable to bullets to serve as enemies, and it has no need to monologue when the hero is about to flip the switch. Plus the entire conflict is likely to be very brief.)

Comment author: Logos01 08 July 2012 12:26:58AM 1 point

Now imagine a "more realistic" setting where humans went through a singularity (and, possibly, coexist with AIs). If the singularity was friendly, then this is a utopia which, by definition, has no conflict.

There is Friendliness and there is Friendliness. Note: Ambivalence or even bemused antagonism would qualify as Friendliness so long as humans were still able to determine their own personal courses of development and progress.

An AGI that had as its sole ambition the prevention of other AGIs and unFriendly scenarios would actually allow a lot of what passes for bad science fiction in most space operas. AI cores on ships that can understand human language but don't qualify as fully sentient (because the real AGI is gutting their intellects); androids that are fully humanoid and perhaps even sentient but haven't any clue why that is so (because you could rebuild human-like cognitive faculties by reverse-engineering the black box, but if you actually knew what was going on in the parts, you would have that information purged...) -- so on and so on.

And yet this would qualify as Friendly; human society and ingenuity would continue.

Comment author: gwern 19 June 2012 01:33:30PM 3 points

A superhuman intelligence with unlimited power and infinite planning time and resources could create a world without suffering even without violating free will.

Oh, it did try. Unfortunately, Adam exercised his free will in the wrong way. Better luck next universe.

Comment author: Logos01 20 June 2012 07:22:41PM -1 points

"If it weren't for my horse, I never would've graduated college." >_<

Comment author: Will_Newsome 19 June 2012 01:36:28AM 7 points

There are other arguments too, that I haven't seen made in the theology literature. Like, God instantiated all possible universes with net positive utility, because that's more utility than just instantiating the single universe with the most utility. This is an extremely basic idea; I really don't know why I haven't seen it before.
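
A minimal formalization of the comparison being made (the notation is assumed here, not taken from the comment): let $\mathcal{U}$ be the set of possible universes and $U(u)$ the net utility of universe $u$. Assuming the sum is well-defined, instantiating every net-positive universe yields at least as much total utility as instantiating only the best one:

    \sum_{u \in \mathcal{U},\, U(u) > 0} U(u) \;\ge\; \max_{u \in \mathcal{U}} U(u)

with equality only when at most one universe has positive net utility.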

Comment author: Logos01 19 June 2012 12:14:53PM -1 points

An omnipotent, omnibenevolent being would have no need for such "shorthand" tricks to create infinite worlds without suffering. Yes, you could always raise another aleph level for greater infinities, but only by introducing suffering at all.

Which violates omnibenevolence.

Comment author: gwern 18 June 2012 03:54:07PM *  4 points

See Plantinga's free will defense for human evils and the variant for natural evils; it defuses the logical argument from evil. (Of course it does this by postulating 'free will', whatever that is, but I don't think free will is nearly as clear-cut a p=~0 as the existence of evils...)

Comment author: Logos01 19 June 2012 12:12:32PM 3 points

I don't buy it. A superhuman intelligence with unlimited power and infinite planning time and resources could create a world without suffering even without violating free will. And yet we have cancer and people raping children.

Comment author: Eliezer_Yudkowsky 15 June 2012 05:30:52AM 21 points

I am thiiiiiiiiis confident!

(Holds arms wide, then accepts any well-specified bet as if the actual probability of Christianity were zero, i.e., with betting prices corresponding to the probability of the specified evidence being observed, given the fixed assumption that Christianity is false.)

Comment author: Logos01 18 June 2012 01:30:47PM *  2 points

I am thiiiiiiiiis confident!

I'm surprised to see this dialogue make so little mention of the material evidence* at hand with regard to the specific claims of Christianity. I mean, a god which was omnipotent and omnibenevolent would surely create a world with less suffering for humanity than what we conjecture an FAI would orchestrate, yes? Color me old-fashioned, but I assign the logically** impossible a zero probability (barring, of course, my being mistaken about logical impossibilities).

* s/s//
** s/v/c/
