All of Tyrrell_McAllister2's Comments + Replies

michael vassar:

[Nerds] perceive abstractions handed to them explicitly by other people more easily than patterns that show up around them. Oddly, this seems to me to be a respect in which nerds are more feminine rather than being more masculine as they are in many other ways.
Would you elaborate on this? What is the generally-feminine behavior of which the first sentence describes an instance?

My first inclination would be to think that your first sentence describes something stereotypically masculine. It's an example of wanting things to come in pre-struc... (read more)

George Weinberg:

Does it occur to anyone else that the fable is not a warning against doing favors in general but of siding with "outsiders" against "insiders"?
Wow; now that you mention it, that is a blatant recurring theme in the story. I now can't help but think that that is a major part, if not the whole, of the message. Each victim betrays an in-group to perform a kindness for a stranger. It's pretty easy to see why storytellers would want to remind listeners that their first duty is to the tribe. Whatever pity they might feel for a stranger, they must never let that pity lead them to betray the interests of their tribe.

Can't believe I missed that :).

Some here seem to think it significant that the good-doers in the story are not naive fools over whom the audience can feel superior. It is argued that that sense of superiority explains stories like the Frog and the Scorpion in the West. The inference seems to be that since this sense of superiority is lacking in this African tale, the intent could only have been to inform the audience that this is how the world works.

However, I don't think that the "superiority" explanation can be so quickly dismissed. To me, this story works because the aud... (read more)

Paul Crowley:

One trivial example of signalling here is the way everyone still uses the Computer Modern font. This is a terrible font, and it's trivial to improve the readability of your paper by using, say, Times New Roman instead, but Computer Modern says that you're a serious academic in a formal field.
I don't think that these people are signaling. Computer Modern is the default font for LaTeX. Learning how to change a default setting in LaTeX is always non-trivial.

You might argue that people are signaling by using LaTeX instead of Word or whatever, but switching from LaTeX to some other writing system is also not a trivial matter.

0sketerpot
In case anybody was wondering, just add this to your LaTeX file:

    \usepackage{times}

It's actually pretty trivial. I always do this, though it took me a while to find out about it. The papers I see online have usually done this too.
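For anyone trying this, a minimal sketch assuming a standard LaTeX distribution (mathptmx is the usual modern replacement for the older times package; either has the effect sketerpot describes):

    \documentclass{article}
    \usepackage{mathptmx} % Times for text and math; \usepackage{times} is the older equivalent
    \begin{document}
    This paragraph now renders in Times rather than Computer Modern.
    \end{document}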

Eliezer, the link in your reply to nazgulnarsil points to this very post. I'm assuming that you intended to link to that recent post of yours on SJG, but I'll leave it to you to find it :).

I think that you make good points about how fiction can be part of a valid moral argument, perhaps even an indispensable part for those who haven't had some morally-relevant experience first-hand.

But I'm having a hard time seeing how your last story helped you in this way. Although I enjoyed the story very much, I don't think that your didactic purposes are well-served by it.

My first concern is that your story will actually serve as a counter-argument for rationality to many readers. Since I'm one of those who disagreed with the characters' choice to des... (read more)

Psy-Kosh: Yeah, I meant to have an "as Psy-Kosh has pointed out" line in there somewhere, but it was accidentally deleted while editing.

ad:

How many humans are there not on Huygens?

I'm pretty sure that it wouldn't matter to me. I generally find on reflection that, with respect to my values, doing bad act A to two people is less than twice as bad as doing A to one person. Moreover, I suspect that, in many cases, the badness of doing A to n people converges to a finite value as n goes to infinity. Thus, it is possible that doing some other act ... (read more)
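One toy way to make the convergence claim concrete (an illustration only, not something the comment above commits to): suppose the marginal badness of each additional victim decays geometrically, with first-victim badness b and ratio 0 < r < 1. Then

    Bad(n) = b + br + ... + br^{n-1} = b(1 - r^n)/(1 - r),

so Bad(2) = b(1 + r) < 2 Bad(1), and Bad(n) converges to the finite value b/(1 - r) as n goes to infinity, matching both features described above.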

If the Super-Happies were going to turn us into orgasmium, I could see blowing up Huygens. Nor would it necessarily take such an extreme case to convince me to take that extreme measure. But this . . . ?

"Our own two species," the Lady 3rd said, "which desire this change of the Babyeaters, will compensate them by adopting Babyeater values, making our own civilization of greater utility in their sight: we will both change to spawn additional infants, and eat most of them at almost the last stage before they become sentient." ... &quo
... (read more)

Wei Dai: Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.

I don't know the proper rational-choice-theory terminology, but wouldn't modeling this program just be a matter of describing the "space" of choices correctly? That is, rather than making the space of choices {A, B, C}, make it the set containing

(1) = taking A when offered A and B, (2) ... (read more)
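A small sketch of that redescription (my own illustration; the encoding and names are invented for the example): the cyclic chooser has no utility function over {A, B, C}, since that would require u(A) > u(B) > u(C) > u(A), but a utility function over (menu, option) pairs reproduces it exactly.

    # Wei Dai's cyclic chooser: picks A from {A,B}, B from {B,C}, C from {C,A}.
    def choose(menu):
        return {frozenset("AB"): "A",
                frozenset("BC"): "B",
                frozenset("CA"): "C"}[frozenset(menu)]

    # Redescribed outcome space: an outcome is a (menu, option) pair,
    # as in "(1) = taking A when offered A and B" above.
    utility = {
        (frozenset("AB"), "A"): 1, (frozenset("AB"), "B"): 0,
        (frozenset("BC"), "B"): 1, (frozenset("BC"), "C"): 0,
        (frozenset("CA"), "C"): 1, (frozenset("CA"), "A"): 0,
    }

    def choose_by_utility(menu):
        m = frozenset(menu)
        return max(m, key=lambda option: utility[(m, option)])

    # The utility-maximizer reproduces the "irrational" program's behavior.
    for menu in ("AB", "BC", "CA"):
        assert choose(menu) == choose_by_utility(menu)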

It's good. Not baby-eatin' good, but good enough ;).

Daniel Dennett's standard response to the question "What's the secret of happiness?" is "The secret of happiness is to find something more important than you are and dedicate your life to it."

I think that this avoids Eliezer's criticism that "you can't deliberately pursue 'a purpose that takes you outside yourself', in order to take yourself outside yourself. That's still all about you." Something can be more important than you and yet include you. Depending on your values, the future of the human race itself could serve as ... (read more)

But has that been disproved? I don't really know. But I would imagine that Moravec could always append, ". . . provided that we found the right 10 trillion calculations." Or am I missing the point?

Here's a Daniel Dennett essay that seems appropriate:

THANK GOODNESS!

Maybe it was the categorical nature of "no danger whatsoever" that led to the comparisons to religion. Given the difficulty of predicting anyone's psychological development, and given that you yourself say that you've seen multiple lapses before, what rational reason could you have for such complete confidence? Of course, it's true that there are things besides religion that cause people to make predictions with probability 1 (which, you must concede, is a plausible reading of "no danger whatsoever"). But, in human affairs, with our present state of knowledge, can such predictions ever be entirely reasonable?
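There is also a standard formal point behind this worry (ordinary Bayes, spelled out here for concreteness): an assignment of probability 1 can never be revised by any evidence. If P(H) = 1, then for any observation E with P(E) > 0,

    P(H | E) = P(E | H) P(H) / P(E) = P(E | H) / P(E | H) = 1,

because P(E) = P(E | H) P(H) + P(E | not-H) P(not-H) collapses to P(E | H). So a literal "no danger whatsoever" leaves no room for the already-observed lapses to count as evidence at all.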

anon and Chris Hibbert, I definitely didn't mean to say that Robin is claiming to be working with as much certainty as Fermi could claim. I didn't mean to be making any claim about the strength or content of Robin's argument at all, other than that he's assigning low probability to something to which Eliezer assigns high probability.

Like I said, the analogy with the Fermi story isn't very good. My point was just that a critique of Fermi should have addressed his calculations, pointing out where exactly he went wrong (if such a point could be found). Eli... (read more)

I've been following along and enjoying the exchange so far, but it doesn't seem to be getting past the "talking past each other" phase.

For example, the Fermi story works as an example of a cycle as a source of discontinuity. But I don't see how it establishes anything that Robin would have disputed. I guess that Eliezer would say that Robin has been inattentive to its lessons. But he should then point out where exactly Robin's reasoning fails to take those lessons into account. Right now, he just seems to be pointing to an example of cycles a... (read more)

Tim Tyler,

I don't yet see why exactly Eliezer is dwelling on the origin of replicators.

Check with the title: if you are considering the possibility of a world takeover, it obviously pays to examine the previous historical genetic takeovers.

Right. I get the surface analogy. But it seems to break down when I look at its deeper structure.

0timtyler
You don't think we are looking at a memetic takeover? What other outcome is plausible?

Oops; I should have noted that I added emphasis to those quotes of Eliezer. Sorry.

I don't yet see why exactly Eliezer is dwelling on the origin of replicators. As Robin said, it would have been very surprising if he had disagreed with any of it.

I guess that Eliezer's main points were these: (1) The origin of life was an event where things changed abruptly in a way that wouldn't have been predicted by extrapolating from the previous 9 billion years. Moreover, (2) pretty much the entire mass of the universe, minus a small tidal pool, was basically irrelevant to how this abrupt change played out and continues to play out. That is, t... (read more)

gaffa: A heavy obstacle for me is that I have a hard time thinking in terms of math, numbers and logic. I can understand concepts on the superficial level and kind of intuitively "feel" their meaning in the back of my mind, but I have a hard time bringing the concepts into the front of my mind and visualizing them in detail using mathematical reasoning. I tend to end up in a sort of "I know that you can calculate X with this information, and knowing this is good enough for me"-state, but I'd like to be in the state where I am using the i
... (read more)
Eliezer Yudkowsky: In other words, none of this is for mature superintelligent Friendly AIs, who can work out on their own how to safeguard themselves.

Right, I understood that this "injunction" business is only supposed to cover the period before the AI has attained maturity.

If I've understood your past posts, an FAI is mature only if, whenever we wouldn't want it to perform an action that it's contemplating, it (1) can figure that out and (2) will therefore not perform the action. (Lots of your prior posts, for example, dealt with unpacking wha... (read more)

Maybe I'm not being clear about how this would work in an AI! The ethical injunction isn't self-protecting, it's supported within the structural framework of the underlying system. You might even find ethical injunctions starting to emerge without programmer intervention, in some cases, depending on how well the AI understood its own situation. But the kind of injunctions I have in mind wouldn't be reflective - they wouldn't modify the utility function, or kick in at the reflective level to ensure their own propagation. That sounds really scary, to me
... (read more)

No one else has brought this up, so maybe I'm just dense, but I'm having trouble distinguishing the "point" from the "counterpoint" at this part of the post:

Eliezer makes a "point":

So I suggest (tentatively) that humans naturally underestimate the odds of getting caught. We don't foresee all the possible chains of causality, all the entangled facts that can bring evidence against us. Those ancestors who lacked a sense of ethical caution stole the silverware when they expected that no one would catch them or punish them; an
... (read more)

Who or what is the Omega cited for the quote "Many assumptions that we have long been comfortable with are lined up like dominoes." ?

Benja, I have never studied Solomonoff induction formally. God help me, but I've only read about it on the Internet. It definitely was what I was thinking of as a candidate for evaluating theories given evidence. But since I don't really know it in a rigorous way, it might not be suitable for what I wanted in that hand-wavy part of my argument.

However, I don't think I made quite so bad a mistake as highly-ranking the "we will observe some experimental result" theory. At least I didn't make that mistake in my own mind ;). What I actually wrot... (read more)
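For reference, the criterion being gestured at assigns a string of observations x the Solomonoff prior

    M(x) = sum over programs p whose output on U extends x of 2^(-|p|),

where U is a universal prefix machine and |p| is the program's length; theories are then compared by how much of this prior mass they retain after conditioning on the evidence. (Stated here only to pin down the hand-wavy appeal in the argument.)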

Hi, Anna. I definitely agree with you that two equally-good theories could agree on the results of experiments 1-20 and then disagree about the results of experiment 21. But I don't think that they could both be best-possible theories, at least not if you fix a "good" criterion for evaluating theories with respect to given data.

What I was thinking when I claimed that in my original comment was the following:

Suppose that theory T1 says "result 21 will be X" and theory T2 says "result 21 will be Y".

Then I claim that there is another... (read more)

"One small nitpick: It could be more explicit that in Assumption 2, B1 and B2 range over actual observation, whereas in Assumption 1, B ranges over all possible observations. :)"

Actually, I implicitly was thinking of the "B" variables as ranging over actual observations (past, present, and future) in both assumptions. But you're right: I definitely should have made that explicit.

I wrote in my last comment that "T2 is more likely to be flawed than is T1, because T2 only had to post-dict the second batch. This is trivial to formalize using Bayes's theorem. Roughly speaking, it would have been harder for T1 to have been constructed in a flawed way and still have gotten its predictions for the second batch right."

Benja Fallenstein asked for a formalization of this claim. So here goes :).

Define a method to be a map that takes in a batch of evidence and returns a theory. We have two assumptions:

ASSUMPTION 1: The theory produced b... (read more)
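To sketch the Bayes step itself (a minimal formalization under the assumptions above, with F = "the method that produced the theory was flawed" and C = "the theory correctly predicted the second batch"): if flawed methods are less likely to yield correct predictions, i.e. P(C | F) < P(C | not-F), then

    P(F | C) = P(C | F) P(F) / [P(C | F) P(F) + P(C | not-F) P(not-F)] < P(F).

T1 earns this downward update on F because it predicted the second batch in advance; T2 merely post-dicted it, so its probability of flawedness stays at the prior.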

Here's my answer, prior to reading any of the comments here, or on Friedman's blog, or Friedman's own commentary immediately following his statement of the puzzle. So, it may have already been given and/or shot down.

We should believe the first theory. My argument is this. I'll call the first theory T1 and the second theory T2. I'll also assume that both theories made their predictions with certainty. That is, T1 and T2 gave 100% probability to all the predictions that the story attributed to them.

First, it should be noted that the two theories should ... (read more)

You write that "Philosophy doesn't resolve things, it compiles positions and arguments". I think that philosophy should be credited with providing something somewhat more positive than this: it provides common vocabularies for arguments. This is no mean feat, as I think you would grant, but it falls far short of resolving arguments, which is what you need.

As you've observed, modal logics amount to arranging a bunch of black boxes in very precisely stipulated configurations, while giving no indication as to the actual contents of the black boxes. How... (read more)

Eliezer, would the following be an accurate synopsis of what you call morality?

Each of us has an action-evaluating program. This should be thought of as a Turing machine encoded in the hardware of our brains. It is a determinate computational dynamic in our minds that evaluates the actions of agents in scenarios. By a scenario, I mean a mental model of a hypothetical or real situation. Now, a scenario that models agents can also model their action-evaluating programs. An evaluation of an action in a scenario is a moral evaluation if, and only if, the ... (read more)

Eliezer, you write, "Most goods don't depend justificationally on your state of mind, even though that very judgment is implemented computationally by your state of mind. A personal preference depends justificationally on your state of mind."

Could you elaborate on this distinction? (IIRC, most of what you've written explicitly on the difference between preference and morality was in your dialogues, and you've warned against attributing any views in those dialogues to you.)

In particular, in what sense do "personal preferences depend justific... (read more)

I have to agree with komponisto and some others: this post attacks a straw-man version of logical positivism. As komponisto alluded to, you are ignoring the logical in logical positivism. The logical positivists believed that meaningful statements had to be either verifiable or they had to be logical constructs built up out of verifiable constituents. They held that if A is a meaningful (because verifiable) assertion that something happened, and B is likewise, then A & B is meaningful by virtue of being logically analyzable in terms of the meaning... (read more)

0Ronny Fernandez
I agree that EY's attacking a certain straw-man of positivism, and that EY is ultimately a logical positivist with respect to how he showed the meaningfulness of the Boltzmann cake hypotheses. But, assuming EY submits to a computational complexity prior, his position is distinct, in that there could be two hypotheses which we fundamentally cannot tell the difference between, e.g., Copenhagen and MWI, and yet we have good reason to believe one over the other, even though there will never be any test that justifies belief in one over another. (If you think you can test MWI vs. Copenhagen, just replace them with "the universe spawns humans with 10^^^^^10 more quanta in it" vs. "it doesn't"; we clearly can't test these, since there aren't enough quanta in the universe.)
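The prior Ronny is appealing to can be stated compactly (standard form, not his wording): with P(H) proportional to 2^(-K(H)), where K(H) is the description length of H, two hypotheses that assign identical probabilities to every possible observation keep a fixed posterior ratio

    P(H1 | E) / P(H2 | E) = P(H1) / P(H2) = 2^(K(H2) - K(H1)),

since the likelihood terms cancel. The simpler hypothesis stays favored no matter what evidence arrives, which is exactly the sense in which one can have reason to believe MWI over Copenhagen without any possible discriminating test.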

michael vassar, I'm familiar with that book. I haven't read it, but I listened to an hour-long interview with the author here: http://bloggingheads.tv/diavlogs/261

I think that the author made many good points there, and I take his theses seriously. However, I don't think that secrecy is usually the best solution to the problems he points out. I favor structuring the institutions of power so that "cooler heads prevail", rather than trying to keep "warmer heads" ignorant. Decision makers should ultimately be answerable to the people,... (read more)

A government report that is going to be displayed to government officials and the public at large, written by people beholden to public opinion and the whims of government, will be written with those audiences in mind.

Car mechanics and dentists are often paid both to tell us what problems need fixing and to fix them. That's a moral hazard that always exists when the expert who is asked to determine whether a procedure is advisable is the same as the expert who will be paid to perform the procedure.

There are several ways to address this problem. Is it so clear that having the expert determine the advisability in secret is the best way, much less a required way, in this case?

What makes you think they don't?

I acknowledge that they probably do so with some nonzero number of projects. But I take Eliezer to be advocating that it happen with all projects that carry existential risk. And that's not happening; otherwise Eliezer wouldn't have had the example of the RHIC to use in this post. Now, perhaps, notwithstanding the RHIC, the government already is classifying nearly all basic science research that carries an existential threat, but I doubt it. Do you argue that the government is doing that? Certainly, if it's already happ... (read more)

I can say that I'm not joking. Evidently I need to be shown the light.

I wrote, "Wouldn't it just be easier to convince the public to accept a certain amount of risk, to accept debates about trade-offs?"

Zubon replied:

How?

Keeping secrets is a known technology. Overcoming widespread biases is the reason we are here. If you have a way to sway the public on these issues, please, share.

"Keeping secrets" is a vague description of Eliezer's proposal. "Keeping secrets" might be known technology, but so is "convincing the public to accept risks." (E.g., they accept automobile fatality rates.) ... (read more)

Eliezer,

You point to a problem: "You can't admit a single particle of uncertain danger if you want your science's funding to survive. These days you are not allowed to end by saying, 'There remains the distinct possibility...' Because there is no debate you can have about tradeoffs between scientific progress and risk. If you get to the point where you're having a debate about tradeoffs, you've lost the debate. That's how the world stands, nowadays."

As a solution, you propose that "where human-caused uncertain existential dange... (read more)