Not for somebody unfamiliar with the details of the rules of how to play. I would have guessed cricket.
In fact, thinking about EY's definition - I think it fits better (for me) because I would be able to recognise a game of baseball after only watching a single game... even if I didn't have anybody around to explain the rules to me.
But that's not the rationalist's version of the game. The rationalist's game involves seeing at a lower level of detail. Not thinking up synonyms and keywords that weren't on the card.
Yeah, but when playing actual Taboo "rational agents should WIN" (Yudkowsky, E.) and therefore favour "nine innings and three outs" over your definition (which would also cover some related-but-different games such as rounders, I think). I suspect something like "Babe Ruth" would in fact lead to a quicker win.
None of which is relevant to your actual point, which I think is a very good one. I don't think the tool is all that nonstandard; e.g., it's closely related to the positivist/verificationist idea that a statement has meaning only if it can be paraphrased in terms of directly (ha!) observable stuff.
Good point, especially since the most common words become devalued or politicized ("surge", "evil", "terror" &c.) but...
The existence of this game surprised me, when I discovered it. Why wouldn't you just say "An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions?"
So what was your score?
(Did you cut your enemy?)
Sounds interesting. We must now verify if it works for useful questions.
Could someone explain what FAI is without using the words "Friendly", or any synonyms?
Easy PK. An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration.
In one class in high school, we were supposed to make our classmates guess a word using hand gestures. I drew letters in the air.
This strategy can't be that nonstandard, as it is the strategy I've always used when a conversation gets stuck on some word. But now that I think about it, people usually aren't that interested in following my lead in this direction, so it isn't very common either.
An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration.
Then declaring the intention to create such a thing takes for granted that there are shared strong attractors.
What was that about the hidden assumptions in words, again?
Three separate comments here:
1) Eliezer_Yudkowsky: Why wouldn't you just say "An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions"?
To phrase brent's objection a little more precisely: Because people don't normally think of baseball in those terms, and you're constrained on time, so you have to say something that makes them think of baseball quickly. Tom_Crispin's idea is much more effective at that. Or were you just trying to criticize baseball fans for not see...
The game is not over! Michael Vassar said: "[FAI is ..] An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration."
For the sake of not dragging out the argument too much, let's assume I know what an optimization process and a human is.
What are "shared strong attractors"? You can't use the words "shared", "strong", "attractor" or any synonyms.
What's a "high-level reflective aspiration"? You can't use the words "high-...
I'd have to agree with PK's protest. This isn't Hasbro's version of the game; you're not trying to help someone figure out that you're talking about a "Friendly AI" without using five words written on a card.
Oh, and there's no time limit.
Eliezer seems to want us to strike out some category of words from our vocabulary, but the category is not well defined. Perhaps a meta-Taboo game is necessary to find out what the heck we are supposed to be doing without. I'm not too bothered; grunting and pointing are reasonably effective ways of communicating. Who needs words?
The hemlock example demonstrates tcpkac's point well. How do you decide to conclude that Albert and Barry expect different results from the same action? To me, it seems obvious that they should taboo the word hemlock, and notice that one correctly expects Socrates to die from a drink made from an herb in the carrot family, and the other correctly expects Socrates to be unharmed by tea made from a coniferous tree. But it's not clear why Eliezer ought to have the knowledge needed to choose to taboo the word hemlock.
Y'know, the 'Taboo game' seems like an effective way to improve the clarity of meaning for individual words - if you have enough clear and precise words to describe those particular words in the first place.
If there isn't a threshold number of agreed-upon meanings, the language doesn't have enough power for Taboo to work. You can't improve one word without already having a suite of sufficiently-good words to work with.
The game can keep a language system above that minimum threshold, but can't be used to bootstrap the system above that threshold. If you're just starting out, you need to use different methods.
Julian Morrison said: "FAI is: a search amongst potentials which will find the reality in which humans best prosper." What is "prospering best"? You can't use "prospering", "best" or any synonyms.
Let's use the Taboo method to figure out FAI.
I'll just chime in at this point to note that PK's application of the technique is exactly correct.
^^^^Thank you. However, merely putting the technique into the "toolbox" and never looking back is not enough. We must go further. This technique should be used, at which point we will either reach new insights or falsify the method. Would you care to illustrate what FAI means to you, Eliezer? (Others are also invited to do so.)
Maybe the comment section of a blog isn't even the best medium for playing taboo. I don't know. I'm brainstorming productive ways/mediums to play taboo (assuming the method itself leads to something productive).
Suppose you learn of a powerful way to steer the future into any target you choose as long as that target is specified in the language of mathematics or with the precision needed to write a computer program. What target to choose? One careful and thoughtful choice would go as follows. I do not have a high degree of confidence that I know how to choose wisely, but (at least until I become aware of the existence of nonhuman intelligent beings) I do know that if there exists wisdom enough to choose wisely, that wisdom resides among the humans. So, I will ...
Hollerith: I do not have a high degree of confidence that I know how to choose wisely, but (at least until I become aware of the existence of nonhuman intelligent beings) I do know that if there exists wisdom enough to choose wisely, that wisdom resides among the humans. So, I will choose to steer the future into a possible world in which a vast amount of rational attention is focused on the humans...
and lo the protean opaque single thing was taken out of one box and put into another
PK: Thank you. However merely putting the technique into the "toolb...
@Richard Hollerith: Skipping all the introductory stuff to the part which tries to define FAI (I think), I see two parts. Richard Hollerith said:
"This vast inquiry[of the AI] will ask not only what future the humans would create if the humans have the luxury of [a)] avoiding unfortunate circumstances that no serious sane human observer would want the humans to endure, but also [b)] what future would be created by whatever intelligent agents ("choosers") the humans would create for the purpose of creating the future if the humans had the lux...
Who do you think invented Friendly AI?
You haven't invented Friendly AI. You've created a name for a concept you can only vaguely describe and cannot define operationally.
Who do you think taught you the technique?
Isn't it just a bit presumptuous to conclude you're the first to teach the technique?
I'm not trying to under/over/middle-estimate you, only theories which you publicly write about. Sometimes I'm a real meanie with theories, shoving hot pokers into them and all sorts of other nasty things. To me theories have no rights.
I know. But come on, you don't think the thought would ever have occurred to me, "I wonder if I can define Friendly AI without saying 'Friendly'?" It's not as if I invented the phrase first and only then thought to ask myself what it meant.
Moral, right, correct, wise, are all fine words for humans to use, but y...
re PK's (b): if we're tabooïng choose, perhaps we should replace it with a description of subjective expected utility theory. Taboo utility--and I find myself clueless.
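One way to cash that out, as a minimal sketch (my gloss, not the commenter's; all names and numbers below are illustrative): "choose" becomes "take the action with the highest probability-weighted utility."

```python
# Subjective expected utility, with "choose" spelled out.
def choose(actions, outcomes, prob, utility):
    """prob(o, a): subjective P(outcome o | action a); utility(o): how good o is."""
    def expected_utility(action):
        return sum(prob(o, action) * utility(o) for o in outcomes)
    return max(actions, key=expected_utility)

# Toy usage: carry an umbrella?
def prob(outcome, action):
    p_rain = 0.3
    if action == "umbrella":
        return 1.0 if outcome == "dry" else 0.0   # the umbrella keeps you dry regardless
    return p_rain if outcome == "wet" else 1.0 - p_rain

def utility(outcome):
    return -10.0 if outcome == "wet" else 0.0

print(choose(["umbrella", "no umbrella"], ["wet", "dry"], prob, utility))  # "umbrella"
```

The word "choose" is gone, but utility is still just an opaque handle passed in from outside, which is exactly where the cluelessness bites.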
My precis of CEV is not very good. If I want to participate in the public discourse about it, I need to get better at writing descriptions of it that a backer of CEV would concede are full and fair. It is probably easier to do that to SimplifiedFAI than to do it to the CEV document, so I'll put that on my list of things to do when I have time.
Taboo utility--and I find myself clueless.
Consider the following optimization target: the future that would have come to pass if the optimization process did not come into existence -- which we will call the "naive future" -- modified in the following way.
The optimization process extrapolates the naive future until it can extrapolate no more or that future leads to the loss of Earth-originating civilization or a Republican presidential administration. In the latter case (loss of civilization or Republican win) rewind the extrapolation to the lat...
Eliezer Yudkowsky said: It has an obvious failure mode if you try to communicate something too difficult without requisite preliminaries, like calculus without algebra. Taboo isn't magic, it won't let you cross a gap of months in an hour.
Fair enough. I accept this reason for not having your explanation of FAI before me at this very moment. However I'm still in "Hmmmm...scratches chin" mode. I will need to see said explanation before I will be in "Whoa! This is really cool!" mode.
Really? That's your concept of how to steer the future of...
One of the more obvious associations of "Friendly AI" is the concept of "User Friendly", in which a process, set of instructions, or device is structured in such a way that most users will be able to get the results they want intuitively and easily. With the idea of "user friendly", we at least have real-life examples we can look at to better understand the concept.
When some people decided they wanted to identify the perfect voting method, they drew up a list of the desirable traits they wanted such a method to have in such a...
I think comment moderation is clearly desirable on this blog (to keep busy smart thoughtful people reading the comments) and I have absolutely no reason to believe that the moderators of this blog have done a bad job in any way, but it would be better if there were a way for a sufficiently-motivated participant to review the decisions of the moderators. The fact that most blogs hosting serious public discourse do not provide a way is an example of how bad mainstream blogging software is.
The details of Youtube's way for a participant to review moderation d...
The idea that rational inquiry requires clear and precise definitions is hardly a new one. And the idea that definitions of a word cannot simply reuse the word or its synonyms isn't new either - unless my elementary-school English teachers all spontaneously came up with it.
This is part of why people turn to dictionaries - sure, they only record usages, but they tend to have high-quality definitions that are difficult to match in quality without lots of effort.
We can only use this "technique" to convey concepts we already possess to people who la...
Caledonian,
they tend to have high-quality definitions that are difficult to match in quality without lots of effort.
All well and good, and useful in their way. But still just a list of synonyms and definitions. You can describe 'tree' using other English words any which way you want; you're still only accounting for a minuscule fraction of the possible minds the universe could contain. You're still not really much closer to universal conveyance of the concept. Copy the OED out in a hundred languages; decent step in the right direction. To take the next big...
Oh, yes, I forgot to mention one of the most important rules in Rationalist Taboo:
You can't Taboo math.
Stating an equation is always allowed.
But of course, you can still point to an element of a mathematical formula and ask "What does this term apply to? Answer without saying..."
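For instance (my example, not the original commenter's), "pressure waves in a material medium" can itself be stated as an equation, the acoustic wave equation:

$$\frac{\partial^2 p}{\partial t^2} = c^2 \nabla^2 p$$

and one can then point at p (the pressure deviation) or c (the speed of sound in the medium) and ask what each term applies to, without reusing the word "sound".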
...Albert says that people have "free will". Barry says that people don't have "free will". Well, that will certainly generate an apparent conflict. Most philosophers would advise Albert and Barry to try to define exactly what they mean by "free will", on which topic they will certainly be able to discourse at great length. I would advise Albert and Barry to describe what it is that they think people do, or do not have, without using the phrase "free will" at all. (If you want to try this at home, you should also
The main restriction, of course, is time in live conversation. Of course, I'm sure time to process these thoughts decreases as you have more....
Consider a hypothetical debate between two decision theorists who happen to be Taboo fans:
A: It's rational to two-box in Newcomb's problem.
B: No, one-boxing is rational.
A: Let's taboo "rational" and replace it with math instead. What I meant was that two-boxing is what CDT recommends.
B: Oh, what I meant was that one-boxing is what EDT recommends.
A: Great, it looks like we don't disagree after all!
What did these two Taboo'ers do wrong, exactly?
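A minimal sketch of where that "agreement" goes wrong, assuming the usual illustrative Newcomb payoffs ($1,000 visible, $1,000,000 predicted) and a 99%-accurate predictor; none of the numbers come from the thread itself:

```python
# Newcomb's problem, with "rational" tabooed into the two theories' actual outputs.
SMALL, BIG, ACCURACY = 1_000, 1_000_000, 0.99

def edt_value(action):
    """Evidential decision theory: condition on the action as evidence."""
    p_big = ACCURACY if action == "one-box" else 1 - ACCURACY
    bonus = SMALL if action == "two-box" else 0
    return p_big * BIG + bonus

def cdt_value(action, p_big_fixed):
    """Causal decision theory: hold the (already-made) prediction fixed."""
    bonus = SMALL if action == "two-box" else 0
    return p_big_fixed * BIG + bonus

edt_choice = max(["one-box", "two-box"], key=edt_value)
cdt_choice = max(["one-box", "two-box"], key=lambda a: cdt_value(a, p_big_fixed=0.5))

print(edt_choice)  # "one-box"  -- what B meant by "rational"
print(cdt_choice)  # "two-box"  -- what A meant by "rational" (for any fixed p_big)
```

Both tabooed statements are true, yet the two theories still pick different boxes in the same situation; the question of which box to actually take has been relabeled, not resolved.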
This is one of the nonstandard tools in my toolbox, and in my humble opinion, it works way way better than the standard one.
Yudkowsky, 2008.
...To bring out the role of pointlessness, it is worth noting that when faced with a potentially verbal dispute we often ask: what turns on this?
...
Typically, a broadly verbal dispute is one that can be resolved by attending to language and resolving metalinguistic differences over meaning. For example, these disputes can sometimes be resolved by settling the facts about the meaning of key terms in our community...
In fiction writing, this is known as Show Don't Tell. Instead of using all-encompassing, succinct abstractions to present the reader with a predigested conclusion (Character X is a jerk, Place Y is scary, Character Z is afraid), it is encouraged to show the reader evidence of X's jerkiness, Y's scariness, or Z's fear, and leave it to them to infer from said evidence what is going on. Effectively, what one is doing is tabooing judgments and subjective perceptions such as "jerky", "scary" or "afraid", and replacing them with a list of jerky actions, scary traits, and symptoms of fear.
I first read this about two years ago and it has been an invaluable tool. I'm sure it has saved countless hours of pointless arguments around the world.
When I realise that an inconsistency in how we interpret a specific word is a problem in a certain argument and apply this tool, it instantly transforms arguments which actually are about the meaning of the word to make them a lot more productive (it turns out it can be unobvious that the actual disagreement is about what a specific word means). In other cases it just helps get back on the right track in...
I think one word that needs to be taboo-ed, especially in the context of being a victim to media advertising, is the word "FREE!!!" (Exclamation marks may or may not be present).
Replacing a word with a long definition is, in a way, like programming a computer and writing code inline instead of using a subroutine.
Do it too much and your program becomes impossible to understand.
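A toy sketch of that inline-vs-subroutine point (the Event class and the baseball test below are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    tool: str

# With a named subroutine (the "word"), each call site stays short and readable.
def is_baseball(event):
    return event.kind == "group conflict" and event.tool == "long wooden cylinder"

def count_baseball(events):
    return sum(is_baseball(e) for e in events)

# "Inlining" the definition at every use site still works, but the shared concept
# is harder to spot, and every duplicated copy has to be kept in sync by hand.
def count_baseball_inlined(events):
    return sum(e.kind == "group conflict" and e.tool == "long wooden cylinder"
               for e in events)

calendar = [Event("group conflict", "long wooden cylinder"), Event("meeting", "whiteboard")]
assert count_baseball(calendar) == count_baseball_inlined(calendar) == 1
```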
If I were to say "I'll be out of work tomorrow because I'm going to an artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions", people will look at me as though I'm nuts. And not just because people don't talk like that--but because there's a reason why people don't tal...
This method of elimination can be useful for both verbal disagreements (where the real debate is only over terminology) and non-verbal disagreements (where parties fundamentally disagree about things themselves, and not just labels). Besides separating the two to clarify the real disagreement, it can also be usefully applied to one's own internal dialogue.
However, how do we know when to apply this technique? With external debates, it is easy enough to suspect when a disagreement is only verbal, or when the terms argued over have constituent parts. These might b...
1. entity that regularly makes the acts of changing the owner of object of value from the other entities to self without providing any signal according to that the given other entity could have any reason to hypothesize such change in short term time horizon of its perceptual and cognitive activity.
2. relatively common state of a natural system of currently detecting an internal insufficiency of specific sources interpreting it as the threat to its existence or proper functioning and causing it to perform an attempt to compensate for it and deflect such th...
Following the suggestion here invokes such a pronounced and immediate effect on my mental state. In the free-will example, it’s as if my mind is stunned into silence. If I cannot rephrase what I’m thinking, can I really know I’m thinking it? Or disturbingly, have I done any thinking at all?
In either case, removing these words forces the thought process to be redone. It is easy to speak in the way we've always spoken, and to think like we've always thought. This is the path of least resistance, becoming increasingly frictionless each...
I came to lesswrong because of The Noncentral Fallacy, and have been reading eagerly. I had similar thoughts, maybe from different angles, for 20 years or so, but I never managed to write them clearly and eloquently.
My take was that words have connotations, i.e. some emotional baggage that comes whenever they are uttered. E.g. "Democracy" is Good, and when arguing about changes to some policies, each side says their suggestion is more democratic, and in order to prove it they go at length to define what democracy is, and the argument turns to be about th...
POV: Definition of intelligence
“. . . in its lowest terms intelligence is present where the individual animal, or human being, is aware, however dimly, of the relevance of his behaviour to an objective. Many definitions of what is indefinable have been attempted by psychologists, of which the least unsatisfactory are 1. the capacity to meet novel situations, or to learn to do so, by new adaptive responses and 2. the ability to perform tests or tasks, involving the grasping of relationships, the degree of intelligence being proportional to the complexity, or the abstractness, or both, of the relationship.” J. Drever
Have you heard of the language Toki Pona? It forces you to taboo your words by virtue of the language only containing 120-ish words. It was invented by a linguist named Sonja Lang who was depressed and wanted a language that would force her to break her thoughts into manageable pieces. I'm fluent in it and can confirm that speaking it can get rid of certain confusions like this, but it also creates other, different confusions. [mortal, not-feathers, biped] has 3 confusions in it while [human] only has 1. Tabooing a word splits the confusion into 3 pieces. ...
In the game Taboo (by Hasbro), the objective is for a player to have their partner guess a word written on a card, without using that word or five additional words listed on the card. For example, you might have to get your partner to say "baseball" without using the words "sport", "bat", "hit", "pitch", "base" or of course "baseball".
As soon as I see a problem like that, I at once think, "An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions." It might not be the most efficient strategy to convey the word 'baseball' under the stated rules - that might be, "It's what the Yankees play" - but the general skill of blanking a word out of my mind was one I'd practiced for years, albeit with a different purpose.
Yesterday we saw how replacing terms with definitions could reveal the empirical unproductivity of the classical Aristotelian syllogism. All humans are mortal (and also, apparently, featherless bipeds); Socrates is human; therefore Socrates is mortal. When we replace the word 'human' by its apparent definition, the following underlying reasoning is revealed:

All [mortal, ~feathers, bipedal] are mortal.
Socrates is a [mortal, ~feathers, bipedal].
Therefore Socrates is mortal.
But the principle of replacing words by definitions applies much more broadly:

Albert: "A tree falling in a deserted forest makes a sound."
Barry: "A tree falling in a deserted forest does not make a sound."
Clearly, since one says "sound" and one says "not sound", we must have a contradiction, right? But suppose that they both dereference their pointers before speaking:

Albert: "A tree falling in a deserted forest matches [membership test: this event generates acoustic vibrations]."
Barry: "A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences]."
Now there is no longer an apparent collision—all they had to do was prohibit themselves from using the word sound. If "acoustic vibrations" came into dispute, we would just play Taboo again and say "pressure waves in a material medium"; if necessary we would play Taboo again on the word "wave" and replace it with the wave equation. (Play Taboo on "auditory experience" and you get "That form of sensory processing, within the human brain, which takes as input a linear time series of frequency mixes...")
But suppose, on the other hand, that Albert and Barry were to have the argument:

Albert: "Socrates matches the concept [membership test: this person will die after drinking hemlock]."
Barry: "Socrates matches the concept [membership test: this person will not die after drinking hemlock]."
Now Albert and Barry have a substantive clash of expectations; a difference in what they anticipate seeing after Socrates drinks hemlock. But they might not notice this, if they happened to use the same word "human" for their different concepts.
You get a very different picture of what people agree or disagree about, depending on whether you take a label's-eye-view (Albert says "sound" and Barry says "not sound", so they must disagree) or a test's-eye-view (Albert's membership test is acoustic vibrations, Barry's is auditory experience).
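(A toy sketch of the test's-eye view in code; the event dictionary and function names below are mine for illustration, not anything from the post.)

```python
# Each speaker's "sound" is a pointer; dereferencing it yields a membership test.
def alberts_test(event):           # Albert's meaning: generates acoustic vibrations
    return event["acoustic_vibrations"]

def barrys_test(event):            # Barry's meaning: generates an auditory experience
    return event["auditory_experience"]

tree_falls_in_deserted_forest = {
    "acoustic_vibrations": True,   # pressure waves happen either way
    "auditory_experience": False,  # nobody is around to hear them
}

# Label's-eye view: "sound" vs "not sound" looks like a contradiction.
# Test's-eye view: two different tests answering two different questions, no clash.
assert alberts_test(tree_falls_in_deserted_forest) is True
assert barrys_test(tree_falls_in_deserted_forest) is False
```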
Get together a pack of soi-disant futurists and ask them if they believe we'll have Artificial Intelligence in thirty years, and I would guess that at least half of them will say yes. If you leave it at that, they'll shake hands and congratulate themselves on their consensus. But make the term "Artificial Intelligence" taboo, and ask them to describe what they expect to see, without ever using words like "computers" or "think", and you might find quite a conflict of expectations hiding under that featureless standard word. Likewise that other term. And see also Shane Legg's compilation of 71 definitions of "intelligence".
The illusion of unity across religions can be dispelled by making the term "God" taboo, and asking them to say what it is they believe in; or making the word "faith" taboo, and asking them why they believe it. Though mostly they won't be able to answer at all, because it is mostly profession in the first place, and you cannot cognitively zoom in on an audio recording.
When you find yourself in philosophical difficulties, the first line of defense is not to define your problematic terms, but to see whether you can think without using those terms at all. Or any of their short synonyms. And be careful not to let yourself invent a new word to use instead. Describe outward observables and interior mechanisms; don't use a single handle, whatever that handle may be.
Albert says that people have "free will". Barry says that people don't have "free will". Well, that will certainly generate an apparent conflict. Most philosophers would advise Albert and Barry to try to define exactly what they mean by "free will", on which topic they will certainly be able to discourse at great length. I would advise Albert and Barry to describe what it is that they think people do, or do not have, without using the phrase "free will" at all. (If you want to try this at home, you should also avoid the words "choose", "act", "decide", "determined", "responsible", or any of their synonyms.)
This is one of the nonstandard tools in my toolbox, and in my humble opinion, it works way way better than the standard one. It also requires more effort to use; you get what you pay for.