Comment author: Qiaochu_Yuan 28 November 2012 11:11:18PM 2 points [-]

Suppose the AI lives in a universe where infinitely many computations can be performed in finite time...

(I'm being mildly facetious here, but in the interest of casting the "coherently-thinkable" net widely.)

Comment author: jeremysalwen 29 November 2012 06:44:06AM 1 point [-]

I don't see how this changes the possible sense-data our AI could expect. Again, what's the difference between infinitely many computations being performed in finite time and only the computations numbered up to a point too large for the AI to query being calculated?

If you can give me an example of a universe for which the closest Turing machine model will not give indistinguishable sense-data to the AI, then perhaps this conversation can progress.

Comment author: Qiaochu_Yuan 28 November 2012 06:41:11PM 3 points [-]

Suppose the AI lives in a universe with Turing oracles. Give it one.

Comment author: jeremysalwen 28 November 2012 09:39:26PM 1 point [-]

Even if the world weren't computable, any non-computable model would be useless to our AI, and the best it could do is a computable approximation.

Again, what distinguishes a "Turing oracle" from a finite oracle with a bound well above the realizable size of a computer in the universe? They are indistinguishable hypotheses. Giving a Turing-complete AI a Turing oracle doesn't make it capable of understanding anything more than Turing-complete models. The Turing-transcendent part must be an integral part of the AI for it to entertain hypotheses beyond Turing-computable models of the universe, and I have no idea what a Turing-transcendent language looks like, and even less of an idea of how to program in it.
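The indistinguishability claim above can be made concrete with a hypothetical sketch (not from the original discussion): a *finite* halting "oracle" that simply simulates a program for at most a fixed step budget. For every query whose answer resolves within the budget, it agrees with a true Turing oracle, so an agent whose own computations are bounded below the budget can never tell the two apart. The generator-based program model here is an illustrative assumption.

```python
# Hypothetical sketch: a finite halting "oracle" that simulates a program
# for at most `budget` steps. Programs are modeled as Python generators
# that yield once per computation step and return when they halt.

def finite_halting_oracle(program, budget):
    """Return True if `program` halts within `budget` steps, else False.

    For any query whose answer resolves within the budget, this agrees
    with a true Turing oracle; an observer limited to such queries
    cannot distinguish this finite oracle from the real thing.
    """
    gen = program()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return True   # halted within the budget
    return False          # "doesn't halt" -- possibly wrong above the budget

def halts_after(n):
    """A toy program that halts after exactly n steps."""
    def program():
        for _ in range(n):
            yield
    return program

def loops_forever():
    while True:
        yield

# Within the budget, the finite oracle answers exactly like a true oracle:
assert finite_halting_oracle(halts_after(10), budget=1000) is True
assert finite_halting_oracle(loops_forever, budget=1000) is False
```

The only queries on which the two oracles disagree are programs that halt after more steps than the budget, which by hypothesis the bounded agent can never check.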

In response to Causal Universes
Comment author: Eliezer_Yudkowsky 28 November 2012 06:11:44AM 4 points [-]

Meditation:

Suppose you needed to assign non-zero probability to any way things could conceivably turn out to be, given humanity's rather young and confused state - enumerate all the hypotheses a superintelligent AI should ever be able to arrive at, based on any sort of strange world it might find by observation of Time-Turners or stranger things. How would you enumerate the hypothesis space of all the coherently-thinkable worlds we could remotely maybe possibly be living in, including worlds with Stable Time Loops and even stranger features?

Comment author: jeremysalwen 28 November 2012 08:41:00AM 2 points [-]

Well, I suppose, starting with the assumption that my superintelligent AI is merely Turing-complete, I think that we can only say our AI has a "hypothesis about the world" if it has a computable model of the world. Even if the world weren't computable, any non-computable model would be useless to our AI, and the best it could do is a computable approximation. Stable Time Loops seem computable through enumeration, as you show in the post.

Now, if you claim that my assumption that the AI is computable is flawed, well then I give up. I truly have no idea how to program an AI more powerful than Turing-complete.
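The "computable through enumeration" point can be sketched as a toy program (an illustrative assumption, not code from the post): enumerate every value a time loop could send back, run the ordinary computable world-step on each, and keep only the self-consistent fixed points.

```python
# Minimal sketch of treating a Stable Time Loop as an ordinary
# computation: a consistent history is a fixed point of the world-step.

def stable_loop_outcomes(candidates, world_step):
    """Return the candidate values v with world_step(v) == v,
    i.e. the globally self-consistent histories."""
    return [v for v in candidates if world_step(v) == v]

# Toy world: a note sent back in time says some number n; after reading
# it, the world writes down (n * n) % 7 to send back.
world = lambda n: (n * n) % 7

consistent = stable_loop_outcomes(range(7), world)
print(consistent)  # [0, 1] -- the fixed points of n -> n^2 mod 7
```

Nothing non-Turing is needed: the "time travel" reduces to a brute-force search over candidate histories, exactly the kind of computable model the comment describes.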

Comment author: jeremysalwen 27 November 2012 03:54:35PM 2 points [-]

If you don't spend two months salary on a diamond ring, it doesn't mean you don't love your Significant Other. ("De Beers: It's Just A Rock.") But conversely, if you're always reluctant to spend any money on your SO, and yet seem to have no emotional problems with spending $1000 on a flat-screen TV, then yes, this does say something about your relative values.

I disagree, or at least the way it's phrased is misleading. The obvious completion of the pattern is that you care more about a flat-screen TV than about your SO. But that's not a valid comparison. What it really says is that you care more about the flat-screen TV than about anything else you could purchase for your SO for $1000. But for example, if you're poorer than your SO, you could believe that it's always a better marginal investment to invest in your own happiness rather than theirs, and this says nothing about how much you value the relationship or the person. How much you "value" a person isn't on the same scale.

Comment author: Cakoluchiam 08 November 2012 06:10:21AM 4 points [-]

I strongly suspect that a lot of the members of LessWrong have had a non-internet IQ test and will have entered their scores on the census. Those who also took the extra credit internet test and entered their scores to that as well could serve as a sample group for us to make just such an analysis.

Granted, we are likely a biased sample of the population (I suspect a median of somewhere around 125 for both tests), but data is data.

Comment author: jeremysalwen 08 November 2012 06:26:08AM *  4 points [-]

From what I could read on the iqtest page, it seemed that they didn't do any correction for self-selection bias, but rather calculated scores as if they had a representative sample. Based on this I would guess that the internet IQ test will underestimate your score (p=0.7).
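A hypothetical simulation of the mechanism being claimed (the pool mean of 125 and the norming procedure are assumptions for illustration, not facts about the actual site): if a test norms its percentiles against its own self-selected test-takers as though they were the general population, then a genuinely above-average taker gets reported as merely average.

```python
import random

# Assumed: online test-takers self-select around a higher mean, but the
# test norms its scores as if the pool were representative (mean 100, sd 15).

random.seed(0)
POOL_MEAN, POOL_SD = 125, 15   # assumed self-selected pool
pool = [random.gauss(POOL_MEAN, POOL_SD) for _ in range(100_000)]

sample_mean = sum(pool) / len(pool)
sample_sd = (sum((x - sample_mean) ** 2 for x in pool) / len(pool)) ** 0.5

def reported_iq(true_iq):
    """Score the test assigns when it treats its pool as representative."""
    z = (true_iq - sample_mean) / sample_sd
    return 100 + 15 * z

# A genuinely average member of the pool (true IQ 125) is reported as ~100,
# i.e. the uncorrected test underestimates by roughly the selection gap.
print(round(reported_iq(125)))
```

Under these assumptions the downward bias is simply the gap between the pool mean and the assumed population mean, which matches the comment's guess that the internet test underestimates.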

Comment author: Alicorn 03 November 2012 11:43:00PM 42 points [-]

I took the survey before it was cool.

Comment author: jeremysalwen 04 November 2012 04:18:15PM 3 points [-]

Luckily it will remain possible for everyone to do so for the foreseeable future.

Comment author: jeremysalwen 28 October 2012 04:46:55AM 1 point [-]

Thanks for this. Although I don't suffer from depression, the comments about meta-suffering really resonate with me. I think (this is unverified as of yet) that my life can be improved by getting rid of meta-suffering.

Comment author: phob 04 January 2011 05:53:47PM 9 points [-]

Would you pay one cent to prevent one googolplex of people from having a momentary eye irritation?

Torture can be put on a money scale as well: many, many countries use torture in war, but we don't spend huge amounts of money publicizing and shaming them (which would reduce the amount of torture in the world).

In order to maximize the benefit of spending money, you must weigh sacred against unsacred.

In response to comment by phob on Circular Altruism
Comment author: jeremysalwen 12 October 2012 01:29:30AM 2 points [-]

I certainly wouldn't pay that cent if there was an option of preventing 50 years of torture using that cent. There's nothing to say that my utility function can't take values in the surreals.

Comment author: Eliezer_Yudkowsky 20 September 2012 06:16:21PM 19 points [-]

The geese and babies aren't sentient, wifi costs the provider very little, that's actually a different Chris Brown, and I take the money I get paid lobbying for SOPA and donate it to efficient charities!

(Sorry, couldn't resist when I saw the "babies" part.)

Comment author: jeremysalwen 21 September 2012 01:44:11AM 5 points [-]

I'll make sure to keep you away from my body if I ever enter a coma...

Comment author: [deleted] 20 September 2012 08:58:39PM 0 points [-]

The noise in my simulations quickly drowns out any actual logic, and the Markov chain reaches its stationary distribution.

In response to comment by [deleted] on Less Wrong Polls in Comments
Comment author: jeremysalwen 20 September 2012 09:06:51PM 0 points [-]

So what did you guess then?
