Comment author: Tom_McCabe2 31 July 2008 10:27:00PM 8 points

"Eventually, the good guys capture an evil alien ship, and go exploring inside it. The captain of the good guys finds the alien bridge, and on the bridge is a lever. "Ah," says the captain, "this must be the lever that makes the ship dematerialize!" So he pries up the control lever and carries it back to his ship, after which his ship can also dematerialize."

This type of thing is known to happen in real life, when technology gaps are so large that people have no idea what generates the magic. See http://en.wikipedia.org/wiki/Cargo_cult.

In response to The Meaning of Right
Comment author: Tom_McCabe2 29 July 2008 05:59:33AM 6 points

"You will find yourself saying, "If I wanted to kill someone - even if I thought it was right to kill someone - that wouldn't make it right." Why? Because what is right is a huge computational property- an abstract computation - not tied to the state of anyone's brain, including your own brain."

Coherent Extrapolated Volition (or any roughly similar system) protects against this failure for any specific human, but not in general. E.g., suppose that you use various lawmaking processes to approximate Right(x), and then one person independently decides that Right(Murder) > 0. You can detect the mismatch between that person's actions and Right(x) by checking against the approximation (the legal code) and finding that murder is wrong. In the limit of the approximation, you can detect even mismatches that people at the time wouldn't notice (e.g., slavery). CEV also protects against specific kinds of group failures: convince everybody that the Christian God exists and that the Bible is literally accurate, and CEV will correct for it by replacing the false belief "God is real" with the true belief "God is imaginary", and then extrapolating the consequences.

However, CEV can't protect against changes to features of human cognitive architecture that are consistent under reflection, factual correction, etc. Suppose that, tomorrow, you used magical powers to rewrite large portions of everyone's brain. You would expect people to then take actions with lower values of Right(x) than they previously did. But now there's no way to determine the value of anything under Right(x) as we currently understand it. You can't use previous records (these have all been changed, by act of magic), and you can't use human intuition (as it too has been changed). So while the external Right(x) still exists somewhere out in thingspace, it's a moot point, as nobody can access it. This wouldn't work for, say, arithmetic, as people would rapidly discover that assuming 2 + 2 = 5 in engineering calculations makes bridges fall down.

In response to The Meaning of Right
Comment author: Tom_McCabe2 29 July 2008 05:09:32AM 1 point

Wow, there's a lot of ground to cover. For everyone who hasn't read Eliezer's previous writings, he talks about something very similar in Creating Friendly Artificial Intelligence, all the way back in 2001 (link = http://www.singinst.org/upload/CFAI/design/structure/external.html). With reference to Andy Wood's comment:

"What claim could any person or group have to landing closer to the one-place function?"

Next obvious question: For purposes of Friendly AI, and for correcting mistaken intuitions, how do we approximate the rightness function? How do we determine whether A(x) or B(x) is a closer approximation to Right(x)?

Next obvious answer: The rightness function can be computed by computing humanity's Coherent Extrapolated Volition, written about by Eliezer in 2004 (http://www.singinst.org/upload/CEV.html). The closer a given algorithm comes to humanity's CEV, the closer it should come to Right(x).

Note: I did *not* think of CFAI when I read Eliezer's previous post, although I did think of CEV as a candidate for morality's content. CFAI refers to the supergoals of agents in general, while all the previous posts referred to a tangle of stuff surrounding classic philosophical ideas of morality, so I didn't connect the dots.

In response to Touching the Old
Comment author: Tom_McCabe2 20 July 2008 09:27:59AM 0 points

"I don't think I've ever touched anything that has endured in the world for longer than that church tower."

Nitpick: This probably holds true for things of human construction, but there are obviously rocks, bits of dirt, etc. that have endured for far longer than a thousand years.

Comment author: Tom_McCabe2 14 June 2008 04:13:49PM 1 point

"What concrete state of the world - which quarks in which positions - corresponds to "There are three apples on the table, and there could be four apples on the table"? Having trouble answering that? Next, say how that world-state is different from "There are three apples on the table, and there couldn't be four apples on the table.""

For the former: An ordinary kitchen table with three apples on it. For the latter: An ordinary kitchen table with three apples on it, wired to a pressure-sensitive detonator that will set off 10 kg of C4 if any more weight is added onto the table.

"But "I could have a heart attack at any time" and "I could have a heart attack any time I wanted to" are nonetheless not exactly the same usage of could, though they are confusingly similar."

They both refer to possible consequences if the initial states were changed while still obeying a set of constraints. The first refers to a change in initial external states ("there's a clot in the artery"/"there's not a clot in the artery"), while the second refers to a change in initial internal states ("my mind activates the induce-heart-attack nerve signal"/"my mind doesn't activate the induce-heart-attack nerve signal"). Note that "could" only makes sense if the initial conditions are limited to a pre-defined subset. In the second version of the apple-table example above, you would say that the statement "there could be four apples on the table" is false, but only on the assumption that the range of initial states the "could" quantifies over doesn't include states in which the detonator is disabled. For the heart-attack example, you have to exclude initial states in which the Mad Scientist Doctor (tm) snuck in during the night and wired up a deliberation-based heart-attack-inducer.
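To make the constraint idea concrete, here's a toy sketch (my own illustration, not anything from the post): "could" modeled as existential quantification over a pre-defined set of allowed initial states, so the truth of a "could" statement depends on which states you let into that set. The world model and state names are, of course, made up for the example.

```python
def could(outcome, initial_states, evolve):
    """True if at least one allowed initial state evolves into the outcome."""
    return any(evolve(s) == outcome for s in initial_states)

# Toy apple-table world: a state is (apples_on_table, detonator_armed).
def evolve(state):
    apples, armed = state
    if armed and apples > 3:
        return "explosion"
    return apples

# Constrain initial states: detonator always armed, apple count may vary.
states_with_detonator = [(n, True) for n in range(5)]
print(could(4, states_with_detonator, evolve))  # four apples never survive

# Widen the constraint to include disarmed-detonator states:
states_any = [(n, armed) for n in range(5) for armed in (True, False)]
print(could(4, states_any, evolve))  # now "four apples" is reachable
```

Same world, same dynamics; only the admissible subset of initial states changed, and the "could" flipped from false to true.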

Comment author: Tom_McCabe2 13 June 2008 11:48:43PM 0 points

"But if we assume that Lenin made his decisions after the fashion of an ordinary human brain, and not by virtue of some alien mechanism seizing and overriding his decisions, then Lenin would still be exactly as much of a jerk as before."

I must admit that I still don't really understand this. It seems to violate what we usually mean by moral responsibility.

"When, in a highly sophisticated form of helpfulness, I project that you would-want lemonade if you knew everything I knew about the contents of the refrigerator, I do not thereby create a copy of Michael Vassar who screams that it is trapped inside my head."

This is, I think, because humans are a tiny subset of all possible computers, and not because there's a qualitative difference between predicting and creating. It is, for instance, possible to look at a variety of factorial algorithms, and rearrange them to predictably compute triangular numbers. This, of course, doesn't mean that you can look at an arbitrary algorithm and determine whether it computes triangular numbers. I conjecture that, in the general case, it's impossible to predict the output of an arbitrary Turing machine at any point along its computation without doing a calculation at least as long as the calculations the original Turing machine does. Hence, predicting the output of a mind-in-general would require at least as much computing power as running the mind-in-general.
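As a concrete version of the factorial-to-triangular point (my own sketch, not from the post): the rearrangement is mechanical, because the two algorithms share the same recursion shape, with '*' swapped for '+' and the base case adjusted.

```python
def factorial(n):
    # Product of 1..n.
    return 1 if n == 0 else n * factorial(n - 1)

def triangular(n):
    # Sum of 1..n: identical recursion shape, '*' replaced by '+',
    # base case 1 replaced by 0 (the additive identity).
    return 0 if n == 0 else n + triangular(n - 1)

print(factorial(5))   # 120
print(triangular(5))  # 15
```

That this particular rewrite works tells you nothing about recognizing, for an arbitrary algorithm handed to you cold, whether it computes triangular numbers; that's the general-case problem.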

Incidentally, I think that there's a selection bias at work here due to our limited technology. Since we don't yet know how to copy or create a human, all of the predictions about humans that we come up with are, by necessity, easier than creating a human. However, for most predictions on most minds, the reverse should be true. Taking Michael Vassar and creating an electronic copy (uploading), or creating a human from scratch with a set of prespecified characteristics, are both technologically feasible with tools we know how to build. Creating a quantum simulation of Michael Vassar or a generic human to predict their behavior would be utterly beyond the processing power of any classical computer.

Comment author: Tom_McCabe2 05 June 2008 03:51:21AM 3 points

"One of the things that always comes up in my mind regarding this is the concept of space relative to these other worlds. Does it make sense to say that they're "ontop of us" and out of phase so we can't see them, or do they propagate "sideways", or is it nonsensical to even talk about it?"

It's nonsensical. The space that we see is just an artifact of a lower level of reality. See http://www.acceleratingfuture.com/tom/?p=124.

"And you should always take joy in discovery, as long as you personally don't know a thing."

I generally give independent, replicated discoveries the same "joy status" (if that makes sense) as first-time-in-this-branch discoveries. However, you should take a hit when you're just rereading someone else's work, which isn't as challenging, or as fun.

In response to Class Project
Comment author: Tom_McCabe2 31 May 2008 11:52:37AM 0 points

I really, really hope that you aren't going to try to publish a theory of quantum gravity, for practical reasons; even if it's more elegant than every other theory yet proposed, the lack of experimental evidence and your lack of credentials will make you seem like a crackpot.

Comment author: Tom_McCabe2 24 May 2008 03:37:42AM -1 points

First of all, to Eliezer: Great post, but I think you'll need a few more examples of how stupid chimps are compared to VIs and how stupid Einsteins are compared to Jupiter Brains to convince most of the audience.

"Maybe he felt that the difference between Einstein and a village idiot was larger than between a village idiot and a chimp. Chimps can be pretty clever."

We see chimps as clever because we have very low expectations of animal intelligence. If a chimp were clever in human terms, it would be able to compete with humans in at least some areas, which is clearly absurd. How well would an adult chimp do if he were teleported into a five-year-old human's body and thrown into kindergarten?

"But I don't buy the idea of intelligence as a scalar value."

Intelligence is obviously not a scalar, but there does seem to be a scalar component of intelligence, at least when dealing with humans. It has long been established that intelligence tests strongly correlate with each other, forming a single scalar known as Spearman's g (http://en.wikipedia.org/wiki/General_intelligence_factor), which correlates with income, education, etc.
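Here's a quick simulation of the statistical pattern behind g, the so-called "positive manifold" (the data and parameters are hypothetical, purely for illustration): if every test score is a shared general factor plus test-specific noise, then all pairwise correlations between tests come out positive.

```python
import random

random.seed(0)

def correlation(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each simulated person has a general ability g; each of three tests
# measures g plus independent test-specific noise.
people = [random.gauss(0, 1) for _ in range(1000)]
tests = [[g + random.gauss(0, 1) for g in people] for _ in range(3)]

for i in range(3):
    for j in range(i + 1, 3):
        print(i, j, round(correlation(tests[i], tests[j]), 2))
```

With equal variance for the shared factor and the noise, each pairwise correlation lands near 0.5; the single factor that best explains this correlation matrix is the simulation's stand-in for g.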

"2) you're handwaving away deep problems of knowledge and data processing by attributing magical thought powers to your AI."

Yes. If you have a way to solve those problems, and it's formal and comprehensive enough to be published in a reputable journal, I will pay you $1,000. Other people on OB will probably pay you much more. Until then, we do the best we can.

"as opposed to simply stating that it could obviously do those things because it's a superintelligence."

See the previous post at http://lesswrong.com/lw/qk/that_alien_message/ for what simple overclocking can do.

"We haven't even established how to measure most aspects of cognitive function - one of the few things we know about how our brains work is that we don't possess tools to measure most of the things it does."

Er, yes, we do, actually. See http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_know/.

"Some people can do it without much effort at all, and not all of them are autistic, so you can't just say that they've repurposed part of their brain for arithmetic."

Since when is autism necessary for brain repurposing? Autism specifically refers to difficulty in social interaction and communication. Savantism is actually an excellent example of what we could do with the brain if it worked efficiently.

"By the way, when the best introduction to a supposedly academic field is works of science fiction, it sets off alarm bells in my head. I know that some of the best ideas come from sci-fi and yada, yada, but just throwing that out there."

Sci-fi is useful for introducing the reader to the idea that there are possibilities for civilization other than 20th-century Earth. It's not meant to be technical material.

"But I'm skeptical that this uniformity extends to system II. The system II abilities of the best rationalists of today may depend significantly on their having learned a set of reasoning skills developed by their culture over a long period of time."

That's precisely the point; the biological difference between humans is not that great, so the huge differences we see in human accomplishment must be due in large part to other factors.

"The simplest best theory we have for precisely predicting an arbitrary 12 grams of carbons behaviour over time requires avogadros of data for the different degrees of freedom of the start state, the electron energy states etc."

No, it doesn't; the Standard Model only has eighteen adjustable parameters (physical constants) that must be found through experiment.

"The minor tweaks in brain design allowed enormous improvements in cognitive performance, and I think that the intelligence scale should reflect the performance differences rather than the anatomical ones."

The difference between humans and chimps is fairly small anatomically; we share 95-98% of our DNA and most of our brain architecture. The huge difference between a civilization inhabited entirely by village idiots and a civilization of chimps is obvious.

"Eliezer, I think this whole frame of analysis has an element of ego-stroking/sour grapes (stroking your ego and perhaps the ego of your reading audience that defines brainy as being Einstein-like, and that defines social success as being inversely correlated, because y'all are more Einstein-like than you're socially successful)."

Social success will gradually become more irrelevant as society develops further, because social success is a zero-sum game; it doesn't produce anything of value. Dogs, orangutans, and chimps all have complex social structures. Dogs, orangutans, and chimps would all currently be extinct if we didn't have domesticated animals and environmentalists.

"The empiricism based seduction community indicates a braininess advantage in being able "to play well with the other kids"."

If you define braininess as social success, social success is obviously going to correlate with braininess. The ability to find an optimal mate is not why people are successful. Monks, who were the closest thing to scholars during the medieval period, explicitly *renounced* the quest for a mate, and they didn't do too badly by the standards of their time period.

"I've resisted this thread, but I'm more interested in James Simon and the google founders as an example as the high end of braininess than the Albert Einsteins of today."

If you're referring to this James Simon (http://en.wikipedia.org/wiki/James_Simon), he is obviously less accomplished than Newton, Einstein, etc., by any reasonable metric. Larry Page and Sergey Brin are rich primarily because they were more interested in being rich than in publishing papers. They sure as heck didn't become rich because they knew how to win a high school popularity contest; Bill Gates, the most famous of the dot-com billionaires, is widely reputed to be autistic.

In response to Einstein's Speed
Comment author: Tom_McCabe2 21 May 2008 11:13:03PM 0 points

"Celeriac, the distinction is that Tom McCabe seemed to me to be suggesting that the search space was small to begin with - rather than realizing the work it took to cut the search space itself down."

The search space within differential geometry was fairly small by Einstein's day. It was a great deal of work to narrow the search space, but most of it had been done by others (conservation of energy, various mathematical theorems, etc., were all known in 1910). The primary difficulties were in realizing that space *could* be described by differential geometry, and then in deriving GR from known postulates. Neither of these involves a large search space; the former follows quickly once you realize that your assumptions are inconsistent with Minkowski space, and there's only one possible derivation of GR if you do the math correctly. I don't know why the first one is hard, but Einstein showed twice that physicists are very reluctant to question background assumptions (linear time for SR, Euclidean space for GR), so we know it must be. The second one is hard because the human brain does not come equipped with a differential geometry lobe; it took me several hours to fully understand the derivation of the Schwarzschild solution from its postulates, even though the math is simple by GR standards and there is only one possible answer (see http://en.wikipedia.org/wiki/Deriving_the_Schwarzschild_solution).
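For reference, the endpoint of that derivation is a standard result (stated here from memory, not from the original comment): the unique spherically symmetric vacuum solution, the Schwarzschild metric in Schwarzschild coordinates $(t, r, \theta, \varphi)$ for a non-rotating, uncharged mass $M$:

```latex
ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\, dt^2
     + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2
     + r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right)
```

Its uniqueness (Birkhoff's theorem) is what makes "there is only one possible answer" literally true for this derivation.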
