I won the Danish National Biology Olympiad semifinal (as 1/15), and thus I qualify for the final, where I will have the chance to be 1 of 4 Danes participating in the International Biology Olympiad.
You might consider clicking on the username. The second number shows karma over the last 30 days, and if it is 0 you might not get answers.
That's a pretty good heuristic. OTOH, up until this week, my karma in the last 30 days was 0. Now that I'm starting the sequences soon (in the form of "Rationality: From AI to Zombies"), I suspect I'll involve myself in the community some more. Then again, my account didn't functionally exist until recently, mainly being there for the purpose of reserving the name.
Hi! Semi-new lurker here. What is the current etiquette on necroing? I didn't find any official etiquette guide.
Va gur ynaq bs gur oyvaq...
I see (hah!) now. Thank you, and even more so for providing it rot13.
I didn't realize until you said it.
I still don't get it. Could you (or someone else) please explain it?
Huh. I got the same answer, but a different way.
Rnpu vgrz vf znqr hc bs gur cerfrapr be nofrapr bs bar bs fvk onfvp ryrzragf. Rnpu ryrzrag nccrnef sbhe gvzrf, rkprcg gubfr gjb.
I got the same answer in a third way.
Gur ynfg vgrz va n ebj vf znqr sebz rirelguvat va gur svefg gjb cynprf, rkprcg gung juvpu gurl unir va pbzzba.
EDIT: There's a simpler name for what I did: KBE, ubevmbagnyyl.
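(For anyone unfamiliar with the rot13 convention used in the spoilered comments above: Python's standard `codecs` module supports rot13 directly. A minimal sketch; the sample string is my own, not from the thread.)

```python
import codecs

# rot13 shifts each letter 13 places; applying it twice returns the original.
spoiler = codecs.encode("Hello", "rot13")  # -> "Uryyb"
plain = codecs.decode(spoiler, "rot13")    # -> "Hello"
print(spoiler, plain)
```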
"I am an AI, not a human being. My mind is completely unlike the mind that you are projecting onto me."
That may not sound crazy to anyone on LW, but if we get AIs, I predict that it will sound crazy to most people who aren't technically informed on the subject, which will be most people.
Imagine this near-future scenario. AIs are made, not yet self-improving FOOMers, but helpful, specialised, below-human-level systems. For example, what Wolfram Alpha would be if all the hype were literally true. Autopilots for cars that you can just speak your destination to, and they will get there, even if there are road works or other disturbances. Factories that direct their entire operations without a single human present. Systems that read the Internet for you -- really read, not just look for keywords -- and bring to your attention the things they've learned you want to see. Autocounsellors that do a lot better than an Eliza. Tutor programs that you can hold a real conversation with about a subject you're studying. Silicon friends good enough that you may not be able to tell whether you're talking with a human or a bot, and in virtual worlds like Second Life, people won't want to.
I predict:
People will anthropomorphise these things. They won't just have the "sensation" that they're talking to a human being, they'll do theory of mind on them. They won't be able not to.
The actual principles of operation of these systems will not resemble, even slightly, the "minds" that people will project onto them.
People will insist on the reality of these minds as strongly as anosognosics insist on the absence of their impairments. The only exceptions will be the people who design them, and they will still experience the illusion.
And because of that, systems at that level will be dangerous already.
That seems pretty plausible. I already have a hard enough time preventing myself from anthropomorphizing my dog; it's easy to ascribe human emotions to animals without noticing.
Some of the people you believe are dead are actually alive, but no matter how hard they try to get other people to notice them, their actions are immediately forgotten and any changes caused by those actions are rationalized away.
Fabulous story idea.
It is a power of the witches in Lyra's world in Philip Pullman's "His Dark Materials".
So, I consider the "go back in time" aspect of this unnecessarily confusing... the important part from my perspective is what events my timeline contains, not where I am on that timeline. For example, suppose I'm offered a choice between two identical boxes, one of which contains a million dollars. I choose box A, which is empty. What I want at that point is not to go back in time, but simply to have chosen the box which contained the money... if a moment later the judges go "Oh, sorry, our mistake... box A had the money after all, you win!" I will no longer regret choosing A. If a moment after that they say "Oh, terribly sorry, we were right the first time... you lose" I will once more regret having chosen A (as well as being irritated with the judges for jerking me around, but that's a separate matter). No time-travel required.
All of that said, the distinction you raise here (between regretting an improperly made decision whose consequences were undesirable, vs. regretting a properly made decision whose consequences were undesirable) applies either way. And as you say, a rational agent ought to do the former, but not the latter.
(There's also in principle a third condition, which is regretting an improperly made decision whose consequences were desirable. That is, suppose the judges rigged the game by providing me with evidence for "A contains the money," when in fact B contains the money. Suppose further that I completely failed to notice that evidence, flipped a coin, and chose B. I don't regret winning the money, but I might still look back on my decision and regret that my decision procedure was so flawed. In practice I can't really imagine having this reaction, though a rational system ought to.)
(And of course, for completeness, we can consider regretting a properly made decision whose consequences were desirable. That said, I have nothing interesting to say about this case.)
All of which is completely tangential to your lexical question.
I can't think of a pair of verbs that communicate the distinction in any language I know. In practice, I would communicate it as "regret the process whereby I made the decision" vs "regret the results of the decision I made," or something of that sort.
So, I consider the "go back in time" aspect of this unnecessarily confusing... the important part from my perspective is what events my timeline contains, not where I am on that timeline.
Indeed, that is my mistake. I am not always the best at choosing metaphors or expressing myself clearly.
regretting an improperly made decision whose consequences were undesirable, vs. regretting a properly made decision whose consequences were undesirable
That is a very nice way of expressing what I meant. I will be using this from now on to explain what I mean. Thank you.
Your comment helped me to understand what I myself meant much better than before. Thank you for that.
Damn, I can only see the abstract. I'd like to see that paper if anyone has a copy.
They seem to be fingering endocrine genes, but adrenal rather than thyroid. A lot of alternative medicine people talk about 'adrenal fatigue' in this context, but I hadn't been paying much attention to that since 'real' doctors don't think it's a thing.
But I don't know what I'm talking about! Can anyone who does read that paper and tell us what it means?
Both the paper and an update to it can be found quite easily on Library Genesis.