All of SoundLogic's Comments + Replies

Step one involves figuring out the fundamental laws of physics. Step two is to input a complete description of your hardware. Step three is to construct a proof. I'm not sure how to order these in terms of difficulty.

0mavant
1-3-2 in descending order of difficulty

After a fair bit of thought, I don't. I don't think one can really categorize it as purely spur of the moment, though; it lasted quite a while. Perhaps inducing a 'let the AI out of the box' phase would be a more accurate description.

I feel like the unpacking/packing biases ought to be easier to get around than some other biases. Fermi estimates do work (to some extent). I somewhat wonder if giving log probabilities would help more; a sketch of what I mean is below.
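To make the log-space idea concrete, here is a minimal sketch (my own illustration; the piano-tuner numbers are the classic Fermi toy problem, not anything from the thread). Working in log10 turns the multiplication of rough factors into addition, so over- and under-estimates tend to cancel rather than compound:

```python
import math

# Fermi estimate done in log10 space: each uncertain factor contributes
# its log, and the factors are summed instead of multiplied.
factors = [
    ("people in Chicago",          math.log10(3e6)),
    ("pianos per person",          math.log10(1 / 100)),
    ("tunings per piano per year", math.log10(1)),
    ("tunings one tuner does/yr",  -math.log10(1000)),  # dividing, so negate
]

log_estimate = sum(value for _, value in factors)
print(f"estimate: 10^{log_estimate:.2f} = about {10 ** log_estimate:.0f} piano tuners")
```

Running this gives about 30 tuners; being off by a factor of three in one input only shifts the sum by ~0.5 in log space.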

Oh, obviously there are causal reasons why guess culture develops. If there weren't, it wouldn't occur. I agree that having a social cost to denying a request can lead to this phenomenon, as your example clearly shows. I don't think that stops it from being silly.

I feel ask and tell culture are fairly similar in comparison to guess culture. Tell culture seems to me to be just ask culture with a bit more explaining, which seems like a move in the right direction, balanced by time and energy constraints. Guess culture just seems rather silly.

7kalium
Guess culture acknowledges that there is a social cost to outright denying a request. A good example from Yvain's comment:

What I meant by this is that the gravitational influence of N particles is the sum of the gravitational influences of the individual particles, and is therefore a strict function of their individual gravitational influences. If you give me any collection of particles and tell me nothing except their gravitational fields, I can tell you the gravitational field of the system of particles. If you tell me the intelligence of each of your neurons (0), I cannot determine your intelligence. The formula below makes the contrast explicit.
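Concretely, the superposition claim is just standard Newtonian gravity (stated here for illustration, nothing beyond what the comment already assumes):

$$\vec{g}_{\text{total}}(\vec{r}) \;=\; \sum_{i=1}^{N} \vec{g}_i(\vec{r}) \;=\; -\sum_{i=1}^{N} \frac{G m_i}{\lVert \vec{r} - \vec{r}_i \rVert^{3}} \,(\vec{r} - \vec{r}_i)$$

The individual fields $\vec{g}_i$ determine the total field exactly; there is no analogous composition rule that takes per-neuron "intelligence" to whole-brain intelligence.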

I think the gatekeeper having to pay attention to the AI is very much in the spirit of the experiment. In the real world, if you built an AI in a box and then ignored it, why build it in the first place?

1[anonymous]
For the experiment to work at all, the Gatekeeper should read it, yes, but having to think up clever responses or even type full sentences all the time seems to stretch it. "I don't want to talk about it" or simply silence could be allowed as a response, as long as the Gatekeeper actually reads what the AI types.

I would be willing to consider it if you agreed to secrecy and raised it to $1000. You would still have to talk to Tuxedage, though.

0FourFire
Two can keep a secret... if one is dead.

I'm not completely sure. And I can't say much more than that without violating the rules. I would be more interested in how I feel in a week or so.

2FourFire
So, do you maintain your decision, or was it just a spur-of-the-moment lapse of judgement?

A better mind than Tuxedage could almost certainly keep up the 'feel' of a flurry of arguments even with a schedule of breaks. I myself have had people feel irritated at me in that way even when I talk to them with days in between. If I can do that accidentally, I'm certain a superintelligence could do it reliably.

Also, I'm unsure of how much an AI could gather from a single human's text input. I know that I, at least, miss a lot of information that I could in theory pick up.

An AI using timeless decision theory could easily compensate for having multiple AIs with unshared memory just by attempting to determine what the other AIs would say.

1Broolucks
At most, the number of bits contained in the text input, which is really not much, minus the number of bits non-AGI algorithms could identify and destroy (like speech patterns). The AI would also have to identify and throw out any fake information inserted into the stream (without knowing whether the majority of the information is real or fake). The exploitable information is going to be scarce and noisy even for a perfect AI. I don't think it could "easily" do it.

For all intents and purposes, you are forcing the AI to figure out a path-invariant strategy against a party that has better information than it does (that is to say, they know what's been played out, and the AI doesn't). This is considerably more difficult than the standard experiment.

Now, sure, the AI is smart, but every wrench you throw in its way forces it to allocate more resources into fooling you. This, in itself, is problematic for the AI: every resource it allocates to fooling you, it does not allocate to doing useful work for you, so its maximal efficiency is technically lower than the maximal efficiency of an AI that wouldn't try concealing anything. This is a leak of information, so in a sense, if you are ranking a group of heterogeneous AI by efficiency, it is slightly more likely that the better performing are the least deceitful. The more traps you spring, the more work the AI has to do in order to avoid them, and the greater the statistical gap will be. That doesn't tell you whether the least deceitful AI can be trusted, obviously, but it shows there are limits to what it can hide from you.

Now, all I'm really saying is this: the AI's cleverness comes at a cost, namely that it has to cover its ass for every possible experimental framework you might subject it to. Since it is in a box, it only has the resources you provide, but on your own side, you have a theoretically unlimited amount of resources. Smarts can only defeat so much brute force, and by transferring hardware resources from the AI to
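To put a rough number on "really not much," here is a back-of-envelope sketch (my own illustration; the typing rate, session length, and entropy figure are all assumptions, not from the comment):

```python
# Rough upper bound on the information a gatekeeper's text input could leak.
# All three inputs below are illustrative assumptions.
chars_per_minute = 200   # assumed: ~40 words/min * 5 chars/word
bits_per_char = 1.3      # Shannon-style estimate of English entropy per character
session_minutes = 120    # assumed: a two-hour AI-box session

total_bits = chars_per_minute * bits_per_char * session_minutes
print(f"upper bound: {total_bits:.0f} bits = about {total_bits / 8 / 1024:.1f} KiB")
```

Under those assumptions the ceiling is on the order of a few kilobytes, before subtracting whatever the non-AGI filters scrub out.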

I have a fair bit of curiosity, which is why he said that in this case it probably wouldn't make a difference.

6Adele_L
Non-curious people seem unlikely to play this game, much less pay to play it!

Tuxedage's changes were pretty much just patches to fix a few holes as far as I can tell. I don't think they really made a difference.

I couldn't imagine either. But the evidence said there was such a thing, so I paid to find out. It was worth it.

I think your reasoning is mostly sound, but there are a few exceptions (which may or may not have happened in our game) that violate your assumptions.

I'm also somewhat curious how your techniques contrast with Tuxedage's. I hope to find out one day.

1FourFire
I too hope to find out one day, preferably in the not-too-near future.

I was under the impression that a property x was emergent if it wasn't determined by the set of property states of the components. For example, gravity isn't emergent, since the gravity generated by something is the sum of the gravity of its parts. Intelligence is, because even if I know the intelligence of each of your neurons, I don't know your intelligence.
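One way to formalize that impression (my own gloss, not a standard definition): a property $x$ of a system with components $1, \ldots, N$ is emergent iff no function recovers the system's value from the components' values of the same property,

$$\text{emergent}(x) \iff \nexists f : \; x_{\text{system}} = f(x_1, \ldots, x_N).$$

Gravity fails this test, since $f = \sum$ works; intelligence passes it, since every $x_i$ is 0 while $x_{\text{system}}$ is not.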

2miosim
Observation of individual neurons doesn't indicate they have intelligence; however, does that mean the intelligence of a human brain is an emergent phenomenon? Observation of individual atoms and molecules wouldn't reveal any gravitation-like properties either, yet we don't call gravity an emergent phenomenon. Instead we argue that the gravitation-like properties of atoms and molecules are not observable. Could you consider that we may grossly underestimate the "intelligent ability" of individual neurons?