
Comment author: DanielLC 22 February 2014 06:01:26AM 0 points [-]
  1. The act of labeling something with a word, disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg." (Words as Hidden Inferences.)

The alternative is worse. When I talk about a piano, I'm disguising the inference that an object with a certain outward appearance has a series of high tension cables running through it, each carefully set up with just the right tension so that the resonant frequency of each is 2^(1/12) times the last, with each positioned so that it can be struck with a hammer attached to each key, etc. But do you really expect me to say all that explicitly whenever I mention a piano?

Comment author: Polymeron 23 March 2015 07:48:13AM *  1 point [-]

That's why the rule says challengeable inductive inference. If in the context of the discussion this is not obvious then maybe yes, but in almost every other instance it's fine to make these shortcuts, so long as you're understood.

Comment author: wedrifid 04 February 2014 10:29:44AM 5 points [-]

It's Dark Side because it surrenders personal understanding to authority, and treats it as a default epistemological position.

Dark side or not it is quite often valid. People who do not trust their ability to filter bullshit from knowledge should not defer to whatever powerful debater attempts to influence them.

It is no error to assign a low value to p(the conclusion expressed is valid | I find the argument convincing).

Comment author: Polymeron 04 February 2014 07:55:31PM 1 point [-]

No, an argument from authority can be a useful heuristic in certain cases, but at least you'd want to take away the one or two arguments you found most compelling and check them out later. In that sense, this is borderline.

Usually, however, this tactic is employed by people who are just looking for an excuse to flee into the warm embrace of an unassailable authority, often after scores of arguments they made were easily refuted. It is a mistake to give a low value to p(my position is mistaken | 10 arguments I have made have been refuted to my satisfaction in short order).

Comment author: VAuroch 10 November 2013 09:24:09PM *  2 points [-]

Why do you consider

You're smart. You should go to college.

among these? It seems like the odd one out.

Comment author: Polymeron 04 February 2014 05:33:26AM *  4 points [-]

I've had forms of this said to me; it basically means "I'm losing the debate because you personally are smart, not because I'm wrong. Whichever authority I listen to in order to reinforce my existing beliefs would surely crush all your arguments. So stop assailing me with logic..."

It's Dark Side because it surrenders personal understanding to authority, and treats it as a default epistemological position.

Comment author: PhilGoetz 06 September 2013 04:31:30AM *  3 points [-]

Regardless, the AI thinks in math. If you tell it to interpret your phonemes, rather than coding your meaning into its brain yourself, that doesn't mean you'll get an informal representation. You'll just get a formal one that's reconstructed by the AI itself.

It is misleading to say that an interpreted language is formal because the C compiler is formal. Existence proof: Human language. I presume you think the hardware that runs the human mind has a formal specification. That hardware runs the interpreter of human language. You could argue that English therefore is formal, and indeed it is, in exactly the sense that biology is formal because of physics: technically true, but misleading.

This will boil down to a semantic argument about what "formal" means. Now, I don't think that human minds--or computer programs--are "formal". A formal process is not Turing-complete. Formalization means modeling a process so that you can predict or place bounds on its results without actually simulating it. That's what we mean by formal in practice. Formal systems are systems in which you can construct proofs. Turing-complete systems are ones in which some things cannot be proven. If somebody talks about "formal methods" of programming, they don't mean programming in a language that has a formal definition. They mean programming in a way that lets you provably verify certain things about the program without running it. The halting problem implies that a programming language which lets you verify even that a program will terminate can no longer be Turing-complete.

Eliezer's approach to FAI is inherently formal in this sense, because he wants to be able to prove that an AI will or will not do certain things. That means he can't avail himself of the full computational complexity of whatever language he's programming in.

But I'm digressing from the more-important distinction, which is one of degree and of connotation. The words "formal system" always go along with computational systems that are extremely brittle, and that usually collapse completely with the introduction of a single mistake, such as a resolution theorem prover that can prove any falsehood if given one false belief. You may be able to argue your way around the semantics of "formal" to say this is not necessarily the case, but as a general principle, when designing a representational or computational system, fault-tolerance and robustness to noise are at odds with the simplicity of design and small number of interactions that make proving things easy and useful.
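The brittleness mentioned above (a resolution prover that can prove any falsehood if given one false belief) is just ex falso quodlibet in action. A toy sketch, not a real prover, with clauses as frozensets of signed integer literals:

```python
# Toy illustration of resolution brittleness: one contradictory pair of
# clauses yields the empty clause, from which any goal follows.

def resolve(c1, c2):
    """Return all binary resolvents of two clauses."""
    out = []
    for lit in c1:
        if (-lit) in c2:
            # Cancel the complementary pair, merge the rest.
            out.append((c1 - {lit}) | (c2 - {-lit}))
    return out

# Knowledge base containing one false belief: P and not-P (literal 1).
kb = [frozenset({1}), frozenset({-1})]
resolvents = resolve(kb[0], kb[1])
# resolvents contains the empty clause: an outright contradiction,
# so refutation-style proof now "succeeds" for any query.
```

One wrong axiom poisons everything downstream, which is exactly the fault-tolerance trade-off described above.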

Comment author: Polymeron 06 September 2013 04:41:54AM 1 point [-]

Wouldn't this only be correct if similar hardware ran the software the same way? Human thinking is highly associative and variable, and since language is shared amongst many humans, it doesn't, as such, have a fixed formal representation.

Comment author: AndHisHorse 12 August 2013 01:50:00AM 0 points [-]

Not necessarily. Theoretically, one could have very specific knowledge of Chinese, possibly acquired from very limited but deep experience. Imagine one person who has spoken Chinese only at the harbor, and has complete and total mastery of the maritime vocabulary of Chinese but would lack all but the simplest verbs relevant to the conversations happening just a mile further inland. Conceivably, a series of experts in a very localized domain could separately contribute their understanding, perhaps governed by a person who understands (in English) every conceivable key to the GLUT, but does not understand the values which must be placed in it.

Then, imagine someone whose entire knowledge of Chinese is the translation of the phrase: "Does my reply make sense in the context of this conversation?" This person takes an arbitrary amount of time, randomly combining phonemes and carrying out every conceivable conversation with an unlimited supply of Chinese speakers. (This is substantially more realistic if there are many people working in a field with fewer potential combinations than language). Through perhaps the least efficient trial and error possible, they learn to carry on a conversation by rote, keeping only those conversational threads which, through pure chance, make sense throughout the entire dialogue.

In neither of these human experts do we find a real understanding of Chinese. It could be said that the understandings of the domain experts combine to form one great understanding, but the inefficient trial-and-error GLUT manufacturers certainly do not have any understanding, merely memory.

Comment author: Polymeron 15 August 2013 04:28:33PM 0 points [-]

I agree on the basic point, but my deeper point was that somewhere down the line you'll find the intelligence(s) that created a high-fidelity converter for an arbitrary amount of information from one format to another. Searle is free to claim that the system does not understand Chinese, but its very function could only have been imparted by parties who collectively speak Chinese very well, making the room at the very least a medium of communication utilizing this understanding.

And this is before we mention the entirely plausible claim that the room-person system as a whole understands Chinese, even though neither of its two parts does. Any system you'll take apart to sufficient degrees will stop displaying the properties of the whole, so having us peer inside an electronic brain asking "but where does the intelligence/understanding reside?" misses the point entirely.

Comment author: TheOtherDave 20 May 2013 07:36:24PM 2 points [-]

at which point Searle declares that the stranger's half of the entire conversation up to that point has been nothing but the meaningless blatherings of a mindless machine, devoid entirely of any true understanding.

It is perhaps worth noting that Searle explicitly posits in that essay that the system is functioning as a Giant Lookup Table.

If faced with an actual GLUT Chinese Room... well, honestly, I'm more inclined to believe that I'm being spoofed than trust the evidence of my senses.

But leaving that aside, if faced with something I somehow am convinced is a GLUT Chinese Room, I have to rethink my whole notion of how complicated conversation actually is, and yeah, I would probably conclude that the entire conversation up to that point has been devoid entirely of any true understanding. (I would also have to rethink my grounds for believing that humans have true understanding.)

I don't expect that to happen, though.

Comment author: Polymeron 11 August 2013 08:35:34AM 1 point [-]

Wouldn't such a GLUT by necessity require someone possessing immensely fine understanding of Chinese and English both, though? You could then say that the person+GLUT system as a whole understands Chinese, as it combines both the person's symbol-manipulation capabilities and the actual understanding represented by the GLUT.

You might still not possess understanding of Chinese, but that does not mean a meaningful conversation has not taken place.

Comment author: Baruta07 01 August 2013 01:16:03AM *  7 points [-]

My name is Alexander Baruta. People call me confident, knowledgeable, and confident. The truth behind those statements is that I'm inherently none of those. I hate stepping outside my comfort zone; as some of my friends would say, "I hate it with a fiery burning passion to rival the sun". As a consequence I read a ton of books. I have also only had one good ELA teacher: my summer school teacher for ELA 30-1 (that's grade 12 English for those of you outside Canada). I'm in summer school not because I failed the course but because I want to get ahead. I'm going into grade 12 with 3 core 30-level subjects completed (although this is offset by the 2 additional science courses I want to take).

I spent most of my life in a Christian environment, and during that time I was one of those who thought humans could do no evil. Cue me being bullied. While nothing major, it was enough to set me thinking that what I'd been taught was wrong. I spent many years (grades 6-9) trying to cope with my lack of faith, and as a result decided that the Bible was wrong. I don't know when I was introduced to LW; I think I found it simultaneously through TVTropes (warning: may ruin your life), HPMOR, and Google. Since then I've been shocked at the attitude towards education in Alberta: for instance, Bayes' Theorem was on the Gr 11 curriculum six years ago and has since been removed, along with the entirety of probability theory, to be replaced with what I like to call 1000 ways to manipulate boring graphs. I attend a self-directed school.

One reason for the length of my explanation is that I want to expand my comfort zone. It is one of my major goals because I am an introvert; if any of you set any store by the Myers-Briggs test, I am an INTJ. As a result of my introversion it is rather difficult for me to make any close friends (although it is atrocious practice, I suspect that I am an ambivert: someone possessing both introverted and extroverted personality traits. When I am in a comfortable setting I am the life of the party; other times I simply find the quietest corner and read). I am attempting to overcome my more extreme traits by taking up show-choir (not like Glee at all, I swear) and by being more open with myself and others. Due to pure chance I am going to become the holder of Canadian-American dual citizenship and as a consequence able to attend a university in the States. Due to even more fortunate circumstances I am having at least a percentage of my tuition paid for by one of my relatives.

Some of my more socially unusual traits are things that are practically open secrets to my acquaintances. (Right now the mantra is: I need to do this.) I am a member of the Furry Fandom and a Transhumanist (rather ironic, really), as well as a wannabe philosopher (Nietzsche, Wittgenstein, as well as some of the earlier ones such as Aristotle, not to be confused with Aristophanes). I thoroughly enjoy formal logic as well as psychology and neurology. I fear being judged, but I also welcome that judgement because I can use criticism to help me see beyond my tiny Classical perspective ingrained by my upbringing.

In terms of literature I enjoy mainly Sci-Fi/Fantasy and science (although I do enjoy a little romance on the side, iff it is well written, and thanks to my wonderful ELA teacher I am learning to enjoy tragedy as well as comedy). My favorite authors include Brandon Sanderson, Neil Gaiman, Isaac Asimov, Terry Pratchett, Iain M. Banks, Shakespeare (yes, Shakespeare), G.K. Chesterton, and Patrick Rothfuss, as well as some specialized authors of Furry Fiction (Will A. Sanborn, Simon Barber, Phil Geusz [pronounced like Seuss was originally pronounced]). In some capacity I also study what rationalists consider to be the dark arts, as I participate (and do rather well) in a debate club (8th overall in the beginner category). In my defense, I need the practice of arguing with someone else in a reasonably capable way because I tend to have trouble expressing myself on a day-to-day basis. (Although the scoring system is completely ridiculous: it marks people between 66-86 percent and does not seem capable of realizing that getting a 66 is the exact same thing as a 0...) Again, sorry for the wall of text; it's a bad habit of mine to ramble. I just needed to finally tell someone these things.

~Actually, consider this as my Lurker Status=Revoked post. I did one intro when I'd just joined and have been commenting on various things, including mixing up Aristotle and Aristophanes to amusing results.

Comment author: Polymeron 11 August 2013 07:34:58AM 0 points [-]

Interestingly, my first reaction to this post was that a great deal of it reminds me of myself, especially near that age. I wonder if this is the result of ingrained bias? If I'm not mistaken, when you give people a horoscope or other personality description, about 90% of them will agree that it appears to refer to them, compared to the 8.33% we'd expect it to actually apply to. Then there's selection bias inherent to people writing on LW (wannabe philosophers and formal logic enthusiasts posting here? A shocker!). And yet...

I'm interested to know, did you have any particular goal in mind posting this, or just making yourself generally known? If you need help or advice on any subject, be specific about it and I will be happy to assist (as will many others I'm sure).

Comment author: CCC 17 April 2013 09:51:02AM *  3 points [-]

Not to mention the efficiency of running a DB search on that...

Actually, with proper design, that can be made very quick and easy. You don't need to store the positions; you just need to store the states (win:black, win:white, draw - two bits per state).

The trick is, you store each win/loss state in a memory address equal to the 34-byte (or however long) binary number that describes the position in question. Checking a given state is then simply a memory retrieval from a known address.
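The direct-addressing trick above can be sketched concretely. This is a toy illustration at miniature scale (the table size and state codes are my assumptions, not part of the comment): four 2-bit entries are packed per byte, and the position's integer encoding serves directly as the index.

```python
# Pack one 2-bit game-theoretic verdict per chess position into a flat
# byte array, indexed by the integer encoding of the position itself.
# State codes are illustrative assumptions.

WIN_WHITE, WIN_BLACK, DRAW = 0, 1, 2

def make_table(num_positions):
    # Four 2-bit entries fit in each byte.
    return bytearray((num_positions + 3) // 4)

def set_state(table, pos_index, state):
    byte, slot = divmod(pos_index, 4)
    shift = slot * 2
    table[byte] = (table[byte] & ~(0b11 << shift)) | (state << shift)

def get_state(table, pos_index):
    byte, slot = divmod(pos_index, 4)
    return (table[byte] >> (slot * 2)) & 0b11

# Toy-scale usage: real chess would need astronomically more entries.
table = make_table(1000)
set_state(table, 42, WIN_BLACK)
```

Lookup is then a single array access plus a shift, exactly the "memory retrieval from a known address" described above.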

Comment author: Polymeron 23 April 2013 06:55:27PM 1 point [-]

I suspect that with memory on the order of 10^70 bytes, that might involve additional complications; but you're correct, normally this cancels out the complexity problem.

Comment author: Kawoomba 17 April 2013 08:58:29AM 2 points [-]

First I found a clever way to minimize the amount of bits necessary to describe a board position. I think I hit 34 bytes per position or so, and I guess further optimization was possible.

Indeed, using a very straightforward Huffman encoding (1 bit for an empty cell, 3 bits for pawns) you can get it down to 24 bytes for the board alone. Was an interesting puzzle.

Looking up "prior art" on the subject, you also need 2 bytes for things like "may castle", and other more obscure rules.

There are further optimizations you can do, but they are mostly for the average case, not the worst case.

Comment author: Polymeron 23 April 2013 06:50:01PM 2 points [-]

I didn't consider using 3 bits for pawns! Thanks for that :) I did account for such variables as may castle and whose turn it is.

Comment author: Strange7 22 March 2013 06:52:38AM 0 points [-]

I would think it would be possible to cut the space of possible chess positions down quite a bit by only retaining those which can result from moves the AI would make, and legal moves an opponent could make in response. That is, when it becomes clear that a position is unwinnable, backtrack, and don't keep full notes on why it's unwinnable.
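The backtrack-and-keep-only-the-verdict idea above is essentially memoized game-tree search. A minimal sketch on a deliberately trivial game (a Nim-like take-away game, my stand-in for chess): only the win/lose verdict per position is cached, never the line of play that justifies it.

```python
# Memoized backward induction on a toy game: from each position, recurse
# over legal moves, and remember only whether the position is a win for
# the player to move -- not why.

from functools import lru_cache

@lru_cache(maxsize=None)
def is_win(stones):
    # Take 1-3 stones per turn; taking the last stone wins.
    # A position is winning iff some move leaves the opponent losing.
    return any(not is_win(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

For this game the cache stays tiny; the commenters' point is that for chess the space of reachable positions explodes far beyond any such table, even with pruning.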

Comment author: Polymeron 17 April 2013 07:26:44AM 1 point [-]

This is more or less what computers do today to win chess matches, but the space of possibilities explodes too fast; even the strongest computers can't really keep track of more than I think 13 or 14 moves ahead, even given a long time to think.

Merely storing all the positions that are unwinnable - regardless of why they are so - would require more matter than we have in the solar system. Not to mention the efficiency of running a DB search on that...
