Comment author: JustinShovelain 14 March 2010 08:23:08AM *  5 points [-]

Poll: Do you have older siblings or are an only child?

karma balance

Coffee: When it helps, when it hurts

43 JustinShovelain 10 March 2010 06:14AM
Many people consume caffeine either always or never. But the evidence is clear: for some tasks you should drink coffee; for others, you shouldn't.
Caffeine:
So:
Use caffeine for short-term performance on a focused task (such as an exam).
Avoid caffeine for tasks that require broad creativity and long-term learning.
(Disclaimer: The greater alertness, larger short-term memory capacity, and eased recall might make the memories you do form of higher quality.)
At least, this is my take. But the issue is convoluted enough that I'm unsure. What do you think?
Comment author: JustinShovelain 10 March 2010 12:48:39AM 7 points [-]

I'm thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to Less Wrong. Is there demand?

Comment author: JustinShovelain 02 March 2010 10:25:27PM 2 points [-]

Closely related to your point is the paper, "The Epistemic Benefit of Transient Diversity"

It describes and models the costs and benefits of independent invention and transient disagreement.

Meetup: Bay Area: Sunday, March 7th, 7pm

6 JustinShovelain 02 March 2010 09:18PM

Overcoming Bias / Less Wrong meetup in the San Francisco Bay Area at SIAI House on March 7th, 2010, starting at 7PM.

Eliezer Yudkowsky, Alicorn, and Michael Vassar will be present.

Some extra guests - Wei Dai, Stuart Armstrong, and Nick Tarleton - will also be there, following our short Decision Theory mini-workshop.

Comment author: JRMayne 15 January 2010 01:20:37AM -1 points [-]

I'm convinced of utilitarianism as the proper moral construct, but I don't think an AI should use a free-ranging utilitarianism, because it's just too dangerous. A relatively small calculation error, or a somewhat eccentric view of the future can lead to very bad outcomes indeed.

A really smart, powerful AI, it seems to me, should be constrained by rules of behavior (no wiping out humanity, no turning every channel into 24-7 porn, no putting everyone to work in the paperclip factory). The assumption that something very smart would necessarily reach correct utilitarian views seems facially false; it could assume that humans must think like it does, or assume that dogs generate more utility with less effort due to their easier ability to be happy, or decide that humans need more superintelligent machines in a great big hurry and should build them regardless of anything else.

And maybe it'd be right here or there. But maybe not. I think almost definitionally that FAI cannot be full-on, free-range utilitarian of any stripe. Am I wrong?
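The rule-constrained design JRMayne describes could be sketched, purely as a hypothetical illustration (all function names, actions, and numbers below are mine, not his), as a maximizer that only optimizes within hard side constraints:

```python
# Hypothetical sketch: utility maximization restricted by inviolable rules,
# as opposed to free-range utilitarianism. Illustrative names and values only.

def choose_action(actions, utility, forbidden):
    """Pick the highest-utility action that violates no hard rule."""
    permitted = [a for a in actions if not any(rule(a) for rule in forbidden)]
    if not permitted:
        return None  # refuse to act rather than break a rule
    return max(permitted, key=utility)

# Toy example: the forbidden action has higher computed utility,
# but the constraint removes it from consideration entirely.
actions = ["help_humans", "build_paperclip_factory"]
utility = {"help_humans": 10, "build_paperclip_factory": 100}.get
forbidden = [lambda a: a == "build_paperclip_factory"]
print(choose_action(actions, utility, forbidden))  # help_humans
```

The point of the constraint list is exactly the worry in the comment: if the utility calculation is slightly wrong or eccentric, the hard rules still bind.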

Comment author: JustinShovelain 15 January 2010 05:35:15PM *  2 points [-]

Why are you more concerned about something with unlimited ability to self-reflect making a calculation error than about the above being a calculation error? The AI could implement the above if the calculation implicit in it is correct.

Comment author: Stuart_Armstrong 15 January 2010 10:13:20AM 2 points [-]

I believe you can strip the AI of any preferences towards human utility functions with a simple hack.

Every decision of the AI will have two effects on expected human utility: it will change it, and it will change the human utility functions.

Have the AI make its decisions only based on the effect on the current expected human utility, not on the changes to the function. Add a term granting a large disutility for deaths, and this should do the trick.

Note the importance of the "current" expected utility in this setup; an AI will decide whether to industrialise a primitive tribe based on their current utility; if it does industrialise them, it will base its subsequent decisions on their new, industrialised utility.
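The hack described above could be sketched like this, as a hypothetical illustration only (the function names, the outcome model, and the penalty constant are my assumptions, not part of the proposal): actions are scored against the humans' current utility function, ignoring how the action would change that function, with a large disutility term for deaths.

```python
# Hypothetical sketch of the "current expected utility" hack: evaluate each
# action by its effect on the humans' *current* utility function only, and
# subtract a large penalty per death. Illustrative names and values only.

def score(action, current_utility, predict_outcome, death_count,
          death_penalty=1e9):
    """Score an action against the current human utility function."""
    outcome = predict_outcome(action)
    return current_utility(outcome) - death_penalty * death_count(outcome)

def choose(actions, current_utility, predict_outcome, death_count):
    return max(actions,
               key=lambda a: score(a, current_utility, predict_outcome,
                                   death_count))

# Toy example: coercion scores higher on raw utility but causes a death,
# so the penalty term dominates and the status quo is chosen.
predict_outcome = lambda a: a
current_utility = {"status_quo": 5, "coerce": 50}.get
death_count = {"status_quo": 0, "coerce": 1}.get
print(choose(["status_quo", "coerce"],
             current_utility, predict_outcome, death_count))  # status_quo
```

After an action is taken, `current_utility` would be re-read from the (possibly changed) humans before the next decision, matching the industrialized-tribe example.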

Comment author: JustinShovelain 15 January 2010 05:23:27PM 3 points [-]

What keeps the AI from immediately changing itself to only care about people's current utility function? That's a change with very high expected utility defined in terms of their current utility function, and one with little tendency to change it.

Will you believe that a simple hack will work with lower confidence next time?

Comment author: steven0461 23 December 2009 09:17:05PM 1 point [-]

me too

Comment author: JustinShovelain 23 December 2009 09:24:08PM 1 point [-]

I'll be there.

Comment author: Psychohistorian 04 December 2009 05:15:27PM *  4 points [-]

There is a common intuition and feeling that our most fundamental goals may be uncertain in some sense.

In what follows, I will naturalistically explore the intuition of supergoal uncertainty.

These are entirely too representative of this post. I admit it's possible I lack adequate background, but this post seems incredibly dense and convoluted. I literally do not know what you're talking about, and I have enough external evidence of my reading comprehension to conclude that it's significantly the author's fault. The idea may be clear in your mind, but you need to spell it out in clear and simple terms if you want others to follow you. Defining "supergoal uncertainty" would be a necessary step, though it would still be well short of sufficient.

Comment author: JustinShovelain 05 December 2009 02:46:26AM *  0 points [-]

Hmm, darn. When I write I do have a tendency to see the ideas I meant to describe instead of seeing my actual exposition; I don't like grammar-checking my writing until I've had some time to forget the details, because otherwise I read right over my errors unless I pay special attention.

I did have three LWers look over the article before I posted it, and got the general criticism that it was a bit obscure and dense but understandable and interesting. I was probably too ambitious in trying to include everything within one post, though; there's a length-versus-clarity tradeoff.

To address your points:

Have you not felt, or encountered people who hold, the opinion that our life goals may be uncertain, are something to have opinions about, and are valid targets for argument? Also, is not uncertainty about our most fundamental goals something we must consider and evaluate (explicitly or implicitly) in order to verify that an artificial intelligence is provably Friendly?

Elaborating on the second statement: when I used "naturalistically" I wished to invoke the idea that the exploration I was doing was similar to classifying animals before we had taxonomies. We look around with our senses (or imagination and inference, in this case), see what we observe, and lay no claim to systematic search or analysis. In this context I did a kind of imagination-limited shallow search without trying to systematically relate the concepts (combinatorial explosion, and I'm not yet sure how to condense and analyze supergoal uncertainty).

As to the third point: what I did in this article was allocate the name "supergoal uncertainty", roughly describe it in the first paragraph (hopefully bringing up the intuition), and then consider various definitions of "supergoal uncertainty" following from this intuition.

In retrospect, I probably erred on the clarity-versus-writing-time trade-off, and was perhaps biased by trying to get this uncomfortable writing task (I'm not a natural writer) off my plate so I could do other things.

Comment author: rhollerith_dot_com 04 December 2009 06:05:51PM *  0 points [-]

this post seems incredibly dense and convoluted. I literally do not know what you're talking about

That was not my experience. I understood everything in the first five paragraphs without having to reflect or even read a second time except that I did have to reflect for a few minutes on the last sentence of paragraph four. Although I am still less confident that I know what Justin intended there than I am with the other sentences, I am 72% confident I know. I think he meant that even if we are not religious, society tends to pull us into moral realism even though of course moral realism is an illusion. (Time constraints prevent me from reading the rest now.)

Defining "supergoal uncertainty" would be a necessary step

Oh, he did that. And the definition was quite clear to me on first reading, but then I have done a lot of math, and a lot of math in which I attempt my own definitions.

Comment author: JustinShovelain 05 December 2009 02:09:29AM *  0 points [-]

I think he meant that even if we are not religious, society tends to pull us into moral realism even though of course moral realism is an illusion.

You are correct, though I don't go as far as calling moral realism an illusion because of unknown unknowns (though I would be very surprised to find it isn't illusory).
