"Exercise 7. How can you discover someone's goals? Assume you either cannot ask them, or would not trust their answers."
I'd guess that the best way is to observe what they actually do and infer from that what goal they might be working towards.
That has the unfortunate consequence of automatically assuming that they're effective at reaching their goal, though. So you can't then use a goal inferred this way to estimate how good the agent is at getting to its goals - the competence was assumed going in.
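Here's a toy sketch of that circularity - everything in it (the grid world, the candidate goals, the "distance shrinks" heuristic) is made up for illustration:

```python
# Toy sketch: infer an agent's goal purely from observed behaviour,
# assuming the agent moves (roughly) optimally toward some target cell.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def infer_goal(trajectory, candidate_goals):
    """Return the candidate goal whose distance shrinks most
    consistently along the observed trajectory."""
    def consistency(goal):
        steps = zip(trajectory, trajectory[1:])
        # count moves that reduce distance to this goal
        return sum(manhattan(b, goal) < manhattan(a, goal) for a, b in steps)
    return max(candidate_goals, key=consistency)

observed = [(0, 0), (1, 0), (2, 0), (2, 1)]   # what we saw the agent do
goals = [(2, 2), (0, 3), (5, 0)]
print(infer_goal(observed, goals))  # -> (2, 2)
# Note the circularity: if the agent is *bad* at reaching its goal,
# this method will happily infer some other goal it is "good" at.
```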
And it has the unfortunate side effect of ascribing 'goals' to systems that are way too simple for that to be meaningful. You might as well say that the universe has a "goal" of maximizing its entropy. I'm not sure it's meaningful to ascribe a "goal" to a thermostat - while it's a convenient way of describing what it does ("it wants to keep the temperature constant, that's all you need to know about it"), in a community of people who talk about AI I think a system would need a bit more mental machinery before it could be said to have "goals".
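To make the thermostat point concrete, here's more or less the entire "mind" of one (toy numbers throughout):

```python
# A thermostat in its entirety: a bang-bang controller with a dead band.
# Saying it "wants" to keep the temperature at the setpoint is a handy
# shorthand, but the whole mechanism fits in a few lines, which is why
# calling this a "goal" feels like a stretch.

SETPOINT = 20.0   # target temperature, degrees C (made-up value)
HYSTERESIS = 0.5  # dead band to avoid rapid on/off switching

def thermostat_step(temperature, heater_on):
    if temperature < SETPOINT - HYSTERESIS:
        return True    # too cold: turn heater on
    if temperature > SETPOINT + HYSTERESIS:
        return False   # too warm: turn heater off
    return heater_on   # inside the dead band: leave it alone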
"Now, I am not explaining control systems merely to explain control systems. The relevance to rationality is that they funnel reality into a narrow path in configuration space by entirely arational means, and thus constitute a proof by example that this is possible."
I don't think you needed control systems to show this. Gravity itself is just as much a 'control system' - it minimizes the potential energy of the system! Heck, by that standard lots of laws of physics fit the definition - they narrow down the set of possible realities...
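A quick toy simulation of what I mean: a damped ball in a quadratic valley "funnels" any starting position to the same end state, with no goals anywhere in the mechanism (all numbers made up):

```python
# Damped motion in a valley U(x) = x**2 / 2, so the force is F = -x.
# Wherever the ball starts, it ends up at the bottom.
def settle(x, v, dt=0.01, steps=10000):
    for _ in range(steps):
        force = -x
        v += (force - 0.5 * v) * dt   # gravity-like pull plus damping
        x += v * dt
    return x

for start in (-3.0, 0.5, 7.0):
    print(round(settle(start, 0.0), 3))   # all end near 0.0
```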
" This must raise the question, how much of the neural functioning of a living organism, human or lesser, operates by similar means? "
So, I'm still not sure what you mean by 'similar means'.
We know the broad overview of how brains work - sensory neurons get triggered, they trigger other neurons, and through some web of complexity motor neurons eventually get triggered to produce outputs. The stuff in the middle is hard; some of it can be described as "memory" (patterns that somehow represent past inputs), and some can be described by various other abstractions. Control systems are probably a good way of interpreting a lot of combinations of neurons, and some examples have been brought up here. It seems unlikely that they would capture all of them - but if you stretch the analogy enough, perhaps they can.
"And how much of the functioning of an artificial organism must be designed to use these means? "
Must? I'd guess absolutely none. The way you have described them, control systems are not basic - for 'future perception' to truly determine current actions would break causality. So it's certainly possible to describe or build an artificial organism without using control systems, though given how useful you're describing them to be, that seems like it would be pretty inconvenient, making an already formidable problem pointlessly harder.
The aliens question was interesting to think about.
I realized that if I put anything other than zero for 'probability of aliens existing within our galaxy', then it seems like it would make little sense to put anything other than 100 for 'observable universe', given how many galaxies there are - any nonzero per-galaxy chance compounds to near-certainty across hundreds of billions of them! Unless our galaxy is somehow special...
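Back-of-the-envelope, with made-up numbers (the per-galaxy probability and the galaxy count are both just illustrative):

```python
# Assume each galaxy independently hosts aliens with probability p, and
# that the observable universe has very roughly 1e11 galaxies.
p = 1e-6                      # even a tiny per-galaxy probability...
n_galaxies = 1e11
p_universe = 1 - (1 - p) ** n_galaxies
print(p_universe)             # -> 1.0 (indistinguishable from certainty)
```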
And if you're interested in what groups do rather than how they do it, you're in a tiny minority. Good for you - you don't have to join a church, even a rationalist one! Nobody's making you!
But people have emotions. It's not 'rational' to ignore this. As Eliezer says, and clarifies in the next post, rationalism [is/is correlated with/causes] winning. If the religious get to have a nice community and we have to do without, then we lose.
Yes, I would like to join a community of people very much like a church, but without all the religious nonsense. I'm pretty sure I'm not alone in this.
Champaign, IL
I don't think that it's reasonable to expect secret criteria to stay secret once such a test was actually used for anything. Sure, they could be kept secret if there were a dozen people taking the test, of whom the four who passed got admitted to an exclusive club.
If there were ten thousand people taking the test, of whom a thousand passed, I'd bet there'd be at least one who accidentally leaks it on the internet, from where it would immediately become public knowledge. (And at least a dozen who would willingly give up the answers if offered money for them, as would happen if there were anything at stake in this test.) It might work if such a test were obscure enough or not widely used, but not if it were used for anything that mattered to the test-takers and was open to many.
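The arithmetic backs up the intuition. With made-up numbers:

```python
# If each of the 1000 people who passed independently leaks the criteria
# with even a 1% chance, secrecy is essentially doomed.
p_leak = 0.01
n_passers = 1000
p_any_leak = 1 - (1 - p_leak) ** n_passers
print(round(p_any_leak, 5))   # -> 0.99996, near-certain exposure
```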
I suspect that effectiveness is not necessarily the reason many people dislike PUA techniques. Personally, I don't particularly doubt that there are patterns in how women react to men (and vice versa), and that these can be used to have more sex. On the other hand, spiking people's drinks or getting them drunk can also be used for the same purpose, and that's commonly known as rape.
Sure, there are ways to hack into people's minds to get them to do what you want. The fact that they exist doesn't make them ethically acceptable.
Now, I don't know whether PUA methods are ethically acceptable or not - but the fact that "the attitude that your partner should be respected" is seen as a negative thing seems to point pretty clearly towards 'no'.