I'm thinking of writing up a post clearly explaining updateless decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to LessWrong. Is there demand?
Closely related to your point is the paper, "The Epistemic Benefit of Transient Diversity"
It describes and models the costs and benefits of independent invention and transient disagreement.
Meetup: Bay Area: Sunday, March 7th, 7pm
Eliezer Yudkowsky, Alicorn, and Michael Vassar will be present.
Some additional guests - Wei Dai, Stuart Armstrong, and Nick Tarleton - will also be there, following our short Decision Theory mini-workshop.
I'm convinced of utilitarianism as the proper moral framework, but I don't think an AI should use a free-ranging utilitarianism, because it's just too dangerous. A relatively small calculation error, or a somewhat eccentric view of the future, can lead to very bad outcomes indeed.
A really smart, powerful AI, it seems to me, should be constrained by rules of behavior (no wiping out humanity, no turning every channel into 24-7 porn, no putting everyone to work in the paperclip factory). The assumption that something very smart would necessarily reach correct utilitarian views seems facially false; it could assume that humans must think like it does, or assume that dogs generate more utility with less effort due to their easier ability to be happy, or decide that humans need more superintelligent machines in a great big hurry and should build them regardless of anything else.
And maybe it'd be right here or there. But maybe not. I think almost definitionally that FAI cannot be full-on, free-range utilitarian of any stripe. Am I wrong?
Why are you more concerned about something with unlimited ability to self-reflect making a calculation error than about the above being a calculation error? The AI could implement the above if the calculation implicit in it is correct.
I believe you can strip the AI of any preferences towards human utility functions with a simple hack.
Every decision of the AI will have two effects on expected human utility: it will change it, and it will change the human utility functions.
Have the AI make its decisions only based on the effect on the current expected human utility, not on the changes to the function. Add a term granting a large disutility for deaths, and this should do the trick.
Note the importance of the "current" expected utility in this setup; an AI will decide whether to industrialise a primitive tribe based on their current utility; if it does industrialise them, it will base its subsequent decisions on their new, industrialised utility.
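The "current expected utility" rule above can be sketched in code. This is a minimal, hypothetical illustration, not anything from the original comment: the function names (`choose_action`, `predict_outcome`) and the outcome representation are all invented for the example, and real value learning would be vastly harder than this toy suggests.

```python
def choose_action(actions, current_utility, predict_outcome, death_penalty=1e9):
    """Pick the action maximizing expected utility as measured by the
    *current* human utility function. The action may change that function,
    but the change itself is given no weight; a large disutility is
    charged per expected death, as the comment proposes."""
    def score(action):
        outcome = predict_outcome(action)  # predicted world-state (a dict here)
        return current_utility(outcome) - death_penalty * outcome["expected_deaths"]
    return max(actions, key=score)

# Toy version of the industrialisation example: even if industrialising
# scores higher on welfare, the death-penalty term can dominate.
outcomes = {
    "industrialize": {"welfare": 10, "expected_deaths": 0.001},
    "leave_alone": {"welfare": 5, "expected_deaths": 0.0},
}
chosen = choose_action(list(outcomes), lambda o: o["welfare"], outcomes.get)
# chosen == "leave_alone": 5 beats 10 - 1e9 * 0.001
```

After the decision is made and the tribe's utility function has changed, the next call to `choose_action` would simply be passed the new function as `current_utility`, matching the "current" qualifier in the setup above.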
What keeps the AI from immediately changing itself to only care about the people's current utility function? That's a change with very high expected utility defined in terms of their current utility function and one with little tendency to change their current utility function.
Will you believe that a simple hack will work with lower confidence next time?
me too
I'll be there.
There is a common intuition and feeling that our most fundamental goals may be uncertain in some sense.
In what follows, I will naturalistically explore the intuition of supergoal uncertainty.
These are entirely too representative of this post. I admit it's possible I lack adequate background, but this post seems incredibly dense and convoluted. I literally do not know what you're talking about, and I have enough external evidence of my reading comprehension to conclude that it's significantly the author's fault. The idea may be clear in your mind, but you need to spell it out in clear and simple terms if you want others to follow you. Defining "supergoal uncertainty" would be a necessary step, though it would still be well short of sufficient.
Hmm, darn. When I write, I have a tendency to see the ideas I meant to describe instead of my actual exposition; I don't like grammar-checking my writing until I've had some time to forget the details, since I read right over my errors unless I pay special attention.
I did have three LWers look over the article before I posted it, and got the general criticism that it was a bit obscure and dense but understandable and interesting. I was probably too ambitious in trying to include everything within one post, though: the length vs. clarity tradeoff.
To address your points:
Have you not yourself felt, or encountered people who hold, the opinion that our life goals may be uncertain, are something to have opinions about, and are valid targets for argument? Also, is not uncertainty about our most fundamental goals something we must consider and evaluate (explicitly or implicitly) in order to verify that an artificial intelligence is provably Friendly?
Elaborating on the second statement: when I used "naturalistically" I wished to invoke the idea that the exploration I was doing was similar to classifying animals before we had taxonomies. We look around with our senses (or imagination and inference, in this case), see what we observe, and lay no claim to systematic search or analysis. In this context I did a kind of imagination-limited shallow search without trying to systematically relate the concepts (combinatorial explosion, and I'm not yet sure how to condense and analyze supergoal uncertainty).
As to the third point, what I did in this article was allocate the name "supergoal uncertainty", roughly describe it in the first paragraph (hopefully bringing up the intuition), and then consider various definitions of "supergoal uncertainty" following from that intuition.
In retrospect, I probably erred on the clarity-versus-writing-time trade-off, and was perhaps biased by trying to get this uncomfortable writing task (I'm not a natural writer) off my plate so I could do other things.
this post seems incredibly dense and convoluted. I literally do not know what you're talking about
That was not my experience. I understood everything in the first five paragraphs without having to reflect or even read a second time except that I did have to reflect for a few minutes on the last sentence of paragraph four. Although I am still less confident that I know what Justin intended there than I am with the other sentences, I am 72% confident I know. I think he meant that even if we are not religious, society tends to pull us into moral realism even though of course moral realism is an illusion. (Time constraints prevent me from reading the rest now.)
Defining "supergoal uncertainty" would be a necessary step
Oh, he did that. And the definition was quite clear to me on first reading, but then I have done a lot of math, and a lot of math in which I attempt my own definitions.
I think he meant that even if we are not religious, society tends to pull us into moral realism even though of course moral realism is an illusion.
You are correct, though I don't go as far as calling moral realism an illusion because of unknown unknowns (though I would be very surprised to find it isn't illusory).
Poll: Do you have older siblings, or are you an only child?
karma balance