Comment author: [deleted] 11 May 2013 04:34:20AM -1 points [-]

First recommendation is to get to the bottom of what question you are actually asking. What are you actually trying to do? Do the right thing? Learn how to manipulate people? Learn how to torture? Become a pleasure delivery professional?

See disguised queries

(1) What are the necessary and sufficient properties for a thought to be pleasurable?

It feels good? It would take some pretty heavy neuroscience to say anything beyond that. Again, what are you going to do with the answer to this question? Ask that question instead.

Also note that "necessary and sufficient" is an obsolete model of concepts. See A Human's Guide to Words.

(2) What are the characteristic mathematics of a painful thought?

What does this mean? How do I calculate exactly how much pain someone will experience if I punch them? Again, ask the real question.

(3) If we wanted to create an artificial neural network-based mind (i.e., using neurons, but not slavishly patterned after a mammalian brain) that could experience bliss, what would the important design parameters be?

Um. Why would you want to do that? Is this simply a hypothetical to see if we understand the concept?

It really depends on what aspect you are interested in; you could create "pleasure" and "pain" by hacking up some kind of simple reinforcement learner, and I suppose you could shoehorn that into a neural network if you really wanted to. But why?

Note that a simple reinforcement learner "experiences" "pain" and "pleasure" in some sense, but not in the morally relevant sense. You will find that the moral aspect is much more anthropomorphic and much more complex, I think.
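To make the point concrete, here is a minimal sketch of the kind of simple reinforcement learner described above, in Python. The class name and parameters are invented for illustration; its scalar reward signal plays the role of "pleasure" and "pain" only in the thin, non-morally-relevant sense just described.

```python
import random

random.seed(0)  # reproducibility for this illustration

class SimpleBanditLearner:
    """A bare-bones reinforcement learner. Positive reward stands in for
    'pleasure', negative reward for 'pain'; nothing morally relevant here."""

    def __init__(self, n_arms, epsilon=0.1, step=0.1):
        self.values = [0.0] * n_arms   # estimated value of each action
        self.epsilon = epsilon         # exploration rate
        self.step = step               # learning rate

    def act(self):
        # Mostly pick the best-looking action, occasionally explore.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def learn(self, arm, reward):
        # Nudge the value estimate toward the observed reward signal.
        self.values[arm] += self.step * (reward - self.values[arm])

# Arm 1 delivers "pleasure" (+1), arm 0 delivers "pain" (-1).
learner = SimpleBanditLearner(2)
for _ in range(1000):
    arm = learner.act()
    learner.learn(arm, 1.0 if arm == 1 else -1.0)
```

After training, the learner reliably prefers the rewarding arm; calling this "experiencing pleasure" is exactly the anthropomorphic stretch the paragraph above warns about.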

(4) If we wanted to create an AGI whose nominal reward signal coincided with visceral happiness -- how would we do that?

I guess you could have a little "visceral happiness" meter that gets filled up in the right conditions, but this would be a profound waste of AGI capability, and probably wouldn't do what you actually wanted. What is it you actually want?

(5) If we wanted to ensure an uploaded mind could feel visceral pleasure of the same kind a non-uploaded mind can, how could we check that?

Ask them? The same way we think we know for non-uploaded minds.

(6) If we wanted to fill the universe with computronium and maximize hedons, what algorithm would we run on it?

If I wanted to turn the universe into paperclips and meaningless crap, how would I do it? Why is your question interesting? Is this simply an exercise in learning how to fill the universe with X? You could pick a less confusing X.

I feel like you might be importing a few mistaken assumptions into this whole line of questioning. I recommend that you lurk more and read some of the stuff I linked.

And if you think certain questions aren't good, could you offer some you think are?

Good question:

How would a potentially powerful optimizing process have to be constructed to be provably capable of steering towards some coherent objective(s) over the long run and through self-modifications?

My first post; please be somewhat gentle. Thanks!

Downvote preventers get downvoted.

In response to comment by [deleted] on The mystery of pain and pleasure
Comment author: Angela 29 May 2014 02:41:49AM 0 points [-]

Even if it turns out that there is no rigorously definable one-dimensional measure of valence we still need to search for physical correlates to pleasure and pain and find approximate measures to use when resolving moral dilemmas.

Regarding the response to (6): why don't you want to maximise hedons? Having a rigorous definition of what you are trying to maximise needn't mean that what you are trying to maximise is arbitrary to you; and the fact that pleasure is complex (or perhaps simple but not yet understood) does not imply that we don't want it.

Comment author: Viliam_Bur 20 September 2011 09:13:48AM 6 points [-]

Confirmed by experiment. :D

I've just left reading LW to eat two spoonfuls of olive oil. To my taste receptors it has a bad taste, but not a strong one. I certainly do not desire to eat more (and I am not afraid that this taste will ever associate with anything I would voluntarily eat), and I had to drink water afterwards, but it was not that bad, and as I write this comment the effect is over.

However, it was very pleasant to leave the kitchen after the experiment. So here is another hypothesis: this diet works because it associates negative feelings with the kitchen and with eating in general.

Comment author: Angela 28 May 2014 02:02:00AM 0 points [-]

Then why does it also work for sugar water, which does not taste repulsive?

Comment author: Angela 08 April 2014 12:57:36AM 0 points [-]

A basic true/false test: reversed stupidity is not intelligence, but rationalists tend to have fewer false beliefs. Taking the test upon entering the school would prevent the school from teaching to the test, and the test could be scored on multiple areas, one of which is a cunningly disguised synonym for rationality while the others are red herrings, so that irrationalists have no incentive to lie on the test.

Comment author: Angela 22 January 2014 01:37:56PM -1 points [-]

I used to assume that the probability that heaven and hell existed was non-zero, and I lived much of my teenage years by Pascal's Wager: partly because I was scared of what my parents would say if I stopped believing in God, partly because I had heard of miracle stories and had not yet worked out how they had happened, and partly because I could not bear the thought of life being meaningless. Then I realised that if there were a non-zero probability of my having eternal life, then the probability of my currently being in this first finite fraction of my life would be zero. Since I am currently on Earth, the probability of eternal life must therefore be zero.

Comment author: Angela 21 January 2014 03:41:13PM 3 points [-]

The hard problem of consciousness will be solved within the next decade (60%).

Comment author: Angela 20 January 2014 08:09:36AM 0 points [-]

The likes of Pythagoras were credited with performing miracles too. Although Mark, the first synoptic gospel to be written, is claimed in Christian circles to be an eyewitness account, it is likely that none of the gospels were. Paul was writing before then, but he never met Jesus directly; he only had a vision of Jesus. Also, Paul does not mention the empty tomb anywhere.

Comment author: Angela 16 January 2014 11:20:18PM *  0 points [-]

There is a paper on both IIT and causal density here:

Comment author: Angela 11 January 2014 06:16:23PM 0 points [-]

The amount of consciousness that a neural network S has is given by phi = MI(A^Hmax; B) + MI(A; B^Hmax), where {A, B} is the bipartition of S that minimises the right-hand side, A^Hmax is what A would be if all its inputs were replaced with maximum-entropy noise generators, MI(A; B) = H(A) + H(B) - H(A,B) is the mutual information between A and B, and H(A) is the entropy of A. 99.9%
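The mutual-information term MI(A; B) = H(A) + H(B) - H(A,B) in the formula above can be computed directly for discrete distributions. A small Python sketch (illustration only; it computes one MI term from a given joint distribution, not the full phi, which requires minimising over all bipartitions of the network):

```python
import math

def entropy(dist):
    """Shannon entropy H (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def mutual_information(joint):
    """MI(A;B) = H(A) + H(B) - H(A,B), with the joint distribution given
    as a 2-D list joint[a][b] of probabilities summing to 1."""
    p_a = [sum(row) for row in joint]            # marginal over A
    p_b = [sum(col) for col in zip(*joint)]      # marginal over B
    h_ab = entropy([p for row in joint for p in row])
    return entropy(p_a) + entropy(p_b) - h_ab

# Perfectly correlated bits share 1 bit of information;
# independent bits share none.
mi_correlated = mutual_information([[0.5, 0.0], [0.0, 0.5]])    # 1.0
mi_independent = mutual_information([[0.25, 0.25], [0.25, 0.25]])  # 0.0
```

Estimating these joint distributions for a real neural system, and searching over bipartitions, is where the practical difficulty of IIT-style measures lies.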

Comment author: Angela 07 January 2014 08:05:25PM *  0 points [-]

Following the reasoning behind the Doomsday Argument, this particular thought is likely to lie near the middle of the timeline of all thoughts ever experienced. This observation reduces the probability that in the future we will create AI that will experience many orders of magnitude more thoughts than those of all humans put together.

Comment author: Angela 06 January 2014 10:47:51PM 0 points [-]

If some means could be found to estimate phi for various species (a variable claimed by this paper to measure "intensity of sentience"), it would allow the relative value of the lives of different animals to be estimated, and would help resolve many moral dilemmas. The intensity of suffering caused by a particular action would be expected to be proportionate to the intensity of sentience. Mammals and birds (the groups that possess a neocortex, the part of the brain where consciousness is believed to occur) can be assumed to experience suffering when doing activities that decrease their evolutionary fitness. (Natural beauty and the like also shape pleasure and pain and are as yet poorly understood, but they are likely to be less significant in other species anyway, extrapolating from the differences in aesthetics between humans with high and low IQ.) For an AI, however, it is much harder to determine what makes it happy or whether it enjoys dying; for that we will need a simple, generalisable definition of suffering that can apply to all possible AIs, rather than our current concept, which is more of an unrigorous Wittgensteinian family resemblance.
