Somehow I get the feeling that most commenters haven't yet read the actual paper. Reading it would clear up a lot of the confusion.

I imagine that a sufficiently high-resolution model of human cognition (and so on) would factor into sets of individual equations for calculating variables of interest, similar to how Newtonian models of planetary motion do.

However, I don't see why the equations themselves, sitting on disk or in memory, should pose a problem.

When we want particular predictions, we would have to instantiate these equations somehow, either by plugging x = 3 into F(x) or by evaluating a differential equation with x = 3 as an initial condition. It would depend on the specifics of the person-model, but if we calculated only a sufficiently small subset of the equations, or refactored them into a sufficiently small set of new ones, we might be able to avoid the moral dilemmas of calculating sentient things.
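To make the "sufficiently small subset of equations" idea slightly more concrete, here is a minimal sketch, assuming a made-up model represented as a dependency graph of equations (every variable name here is hypothetical, nothing like a real person-model): a query for one variable only forces evaluation of the equations it actually depends on.

```python
# Toy illustration only: the "model" is a made-up dependency graph of equations.
# Asking for one variable evaluates just its ancestors, not the whole system.

from typing import Callable, Dict, Optional, Tuple

# variable -> (names of its inputs, function computing it from those inputs)
MODEL: Dict[str, Tuple[Tuple[str, ...], Callable[..., float]]] = {
    "x":        ((), lambda: 3.0),                  # initial condition, e.g. x = 3
    "F":        (("x",), lambda x: 2 * x + 1),      # F(x)
    "unused_1": (("F",), lambda F: F ** 2),         # never touched by the query below
    "unused_2": (("unused_1",), lambda u: u - 7),
}

def evaluate(var: str, cache: Optional[Dict[str, float]] = None) -> float:
    """Compute only the equations that `var` actually depends on."""
    cache = {} if cache is None else cache
    if var not in cache:
        inputs, fn = MODEL[var]
        cache[var] = fn(*(evaluate(dep, cache) for dep in inputs))
    return cache[var]

cache: Dict[str, float] = {}
print(evaluate("F", cache))  # 7.0
print(sorted(cache))         # ['F', 'x'] -- the "unused" equations never ran
```

Whether the forced subset is ever actually small enough is, of course, the real question.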

If, on the other hand, we couldn't do the above for whatever we're interested in calculating, then what about separating the calculation into small, safe, sequentially calculated units, where "safe" means that no unit individually models anything cognizant? If at the end we sewed the states of those units together into a final state, could that still pose moral issues? This gets into Greg Egan-esque territory.
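As a cartoon of the "small, sequentially calculated units" idea (again purely illustrative; the update rule is a meaningless stand-in, and whether real cognition decomposes this cleanly is exactly the open question), only one small chunk of state is ever instantiated at a time, and the pieces are sewn together at the end:

```python
# Cartoon of "small, sequentially calculated units": update a large state in
# chunks, holding only one chunk at a time, then sew the results together.

from typing import Iterator, List

def chunks(state: List[float], size: int) -> Iterator[List[float]]:
    for i in range(0, len(state), size):
        yield state[i:i + size]

def update_unit(unit: List[float]) -> List[float]:
    # Stand-in for one "safe" unit of the calculation.
    return [0.5 * v + 1.0 for v in unit]

def run_in_units(state: List[float], unit_size: int) -> List[float]:
    final: List[float] = []
    for unit in chunks(state, unit_size):
        final.extend(update_unit(unit))  # only this unit is "instantiated" at once
    return final

print(run_in_units([float(i) for i in range(10)], unit_size=3))
```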

It's not clear that the previous two calculation strategies are always possible. Another option, though, might be to take care to always frame questions so that the first strategy is possible. For example, instead of asking whether a person will go left or right at a fork, maybe it's enough to ask a specific question about some brain center.

And now that I've written all that, I realize that the whole point of the predicates is how to determine "sufficiently small" in "a sufficiently small subset of equations", or which units count as "safe units".

This isn't a satisfactory answer, but it seems like determining "safe calculations" would be tied to understanding the necessary conditions under which human cognition (and the like) arises.

Also, carrying it a step further, I would argue that we need not just person predicates, but predicates that can circumvent modeling any kind of morally wrong situation. I wouldn't want to be accidentally burning kittens.

Please forgive this post. There are some forgotten escaped characters, and when I went to edit it, I ended up creating a separate post instead.

This may be nitpicky, but I found an erratum in the references: [3] should, I believe, be 1993 instead of 1995.

Also, three of the links are broken for me ([4], [6] and [7]), and the working links don't currently seem to provide full-text access. So here's an updated reference list, with a link to full-text access for each entry, except the book in [3], which has an Amazon link instead:

[1] Desvousges, W., Johnson, R., Dunford, R., Boyle, K. J., Hudson, S. and Wilson, K. N. 1992. Measuring non-use damages using contingent valuation: experimental evaluation of accuracy. Research Triangle Institute Monograph 92-1.

[2] Kahneman, D. 1986. Comments on the contingent valuation method. Pp. 185-194 in Valuing environmental goods: a state of the arts assessment of the contingent valuation method, eds. R. G. Cummings, D. S. Brookshire and W. D. Schulze. Totowa, NJ: Rowman and Allanheld.

[3] McFadden, D. and Leonard, G. 1993. Issues in the contingent valuation of environmental goods: methodologies for data collection and analysis. In Contingent valuation: a critical assessment, ed. J. A. Hausman. Amsterdam: North Holland.

[4] Kahneman, D., Ritov, I. and Schkade, D. A. 1999. Economic preferences or attitude expressions? An analysis of dollar responses to public issues. Journal of Risk and Uncertainty, 19: 203-235.

[5] Carson, R. T. and Mitchell, R. C. 1995. Sequencing and nesting in contingent valuation surveys. Journal of Environmental Economics and Management, 28(2): 155-173.

[6] Baron, J. and Greene, J. 1996. Determinants of insensitivity to quantity in valuation of public goods: contribution, warm glow, budget constraints, availability, and prominence. Journal of Experimental Psychology: Applied, 2: 107-125.

[7] Fetherstonhaugh, D., Slovic, P., Johnson, S. and Friedrich, J. 1997. Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14: 283-300.

I don't know about you guys, but being wrong scares the crap out of me. Or to say it another way, I'll do whatever it takes to get it right. It's a recursive sort of doubt.

This post inspires WTF moments in my brain. Has anyone here read Greg Egan's Permutation City?

Now I find myself asking "What is going on where I feel like there is this quantity time?" instead of "What is time?"

"If you took one world and extrapolated backward, you'd get many pasts. If you take the many worlds and extrapolate backward, all but one of the resulting pasts will cancel out! Quantum mechanics is time-symmetric."

My immediate thought when reading the above: when extrapolating forward, do we get cancellation as well? Born probabilities?

I notice that I'm a bit confused, especially when reading, "programming a machine superintelligence to maximize pleasure." What would this mean?

It also seems like some arguments are going on in the comments about the definitions of "like", "pleasure", "desire", etc. I'm tempted to ask everyone to pull out the taboo game on these words here.

A helpful direction I see this article pointing toward, though, is how we personally evaluate an AI's behavior. Of course, an AI by no means has to mimic human internal workings 100%, so taking the way we DO work, how can we use that knowledge to construct an AI that interacts with us in a good way?

I don't know what "good way" means here, though. Still, that's an excellent question/point I got from the article.

You might be interested in the Allais paradox, which is an example of humans in fact demonstrating behavior that doesn't maximize any utility function. If you're aware of the Von Neumann-Morgenstern characterization of utility functions, this becomes clearer than it would from just knowing what a utility function is.
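For concreteness, here's a minimal sketch using the standard textbook version of the Allais gambles (the numbers are the usual ones from the literature, not anything specific to this thread). The commonly observed pattern of preferring 1A over 1B while also preferring 2B over 2A is inconsistent with maximizing the expectation of any utility function, because the two preferences impose mutually exclusive inequalities:

```python
# Classic Allais gambles (textbook numbers). Preferring 1A over 1B *and*
# 2B over 2A cannot both follow from expected-utility maximization, since
#   1A > 1B  <=>  0.11*u(1M) > 0.10*u(5M) + 0.01*u(0)
#   2B > 2A  <=>  0.10*u(5M) + 0.01*u(0) > 0.11*u(1M)
# are mutually exclusive. Spot-check with random increasing utilities:

import random

def expected_utility(gamble, u):
    """gamble: list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in gamble)

gamble_1a = [(1.00, 1_000_000)]
gamble_1b = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
gamble_2a = [(0.11, 1_000_000), (0.89, 0)]
gamble_2b = [(0.10, 5_000_000), (0.90, 0)]

for _ in range(1000):
    lo, mid, hi = sorted(random.random() for _ in range(3))
    u = {0: lo, 1_000_000: mid, 5_000_000: hi}.__getitem__
    prefers_1a = expected_utility(gamble_1a, u) > expected_utility(gamble_1b, u)
    prefers_2b = expected_utility(gamble_2b, u) > expected_utility(gamble_2a, u)
    assert not (prefers_1a and prefers_2b)  # never both, for any such u
```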
