All of MinibearRex's Comments + Replies

He wasn't certain what he expected to find, which, in his experience, was generally a good enough reason to investigate something.

Harry Potter and the Confirmed Critical, Chapter 6

3Paulovsk
Can you give a link to this story? It is surprisingly difficult to find.

I've got to start listening to those quiet, nagging doubts.

Calvin

This phrase was explicitly in my mind back when I was generalizing the "notice confusion" skill.

BTW, the post says that spoilers from the original canon don't need to be in rot13.

Their hearts stop beating, and they stop needing to breathe during the turning process.

0gjm
The same would be true of a real-world medical procedure that replaces the heart and lungs with super-reliable mechanical equivalents. (There are "heart and lung machines" but I believe they're cumbersome and greatly inferior to the natural organs they substitute for. I'm envisaging something much better than that.) Would you consider someone "undead" merely for having been through such a procedure?

I plan to keep doing reruns through "Final Words", which will be posted two days from now. After the reruns are done, I have no particular plans to keep going. I plan to create a post to prompt discussion about future plans, but I don't plan to personally do another rerun.

To try to be happy is to try to build a machine with no other specification than that it shall run noiselessly. -Robert Oppenheimer, 1929

I don't think EY actually suggests that people are doing those calculations. He's saying that we're just executing an adaptation that functioned well in groups of a hundred or so, but doesn't work nearly as well anymore.

The trouble is that there is nothing in epistemic rationality that corresponds to "motivations" or "goals" or anything like that. Epistemic rationality can tell you that pushing a button will lead to puppies not being tortured, and not pushing it will lead to puppies being tortured, but unless you have an additional system that incorporates desires for puppies to not be tortured, as well as a system for achieving those desires, that's all you can do with epistemic rationality.

0torekp
That's entirely compatible with my point.

I think you're confusing Pascal's Wager with Pascal's Mugging. The problem with Pascal's Mugging is that the payoffs are really high. The problem with Pascal's Wager is that it fails to consider any hypotheses other than "there is the christian god" and "there is no god".

3Elithrion
I am not. The problem with Pascal's Wager is sort of that it fails to consider other hypotheses, but not in the conventional sense that most arguments use. Invoking an atheist god, as is often done, really does not counterbalance the Christian god, because the existence of Christianity gives a few bits of evidence in favour of it being true, just like being mugged gives a few bits of probability in favour of the mugger telling the truth. So, using conventional gods and heavens and hells like that won't balance to them cancelling out, and you will end up having to believe one of these gods. On the other hand, the actual problem is that you can keep invoking new gods with fancier and more amazing heavens and hells, so that what you really end up believing is super-ultra-expanded-time-bliss-heaven and then you do whatever you think is required to go there. Which is isomorphic to Pascal's (self-)Mugging. (I should try practising explaining things in fewer words...)

I'm not really sure that counts as faith. Faith usually implies something like "believing something without concern for evidence". And in fact, the evidence I have fairly strongly indicates that when I step into an airplane, I'm not going to die.

0[anonymous]
Which of the seven models of faith do you think "believing something without concern for evidence" would fall under?
5MugaSofer
As I recall, CS Lewis once defined it as "believing something based on the evidence/logic in the face of irrational doubt" (paraphrased). I've always preferred that meaning myself, as it retains the positive connotations. Presumably what you describe would be "blind faith".
2Qiaochu_Yuan
The collection of cynicism ordinals is so big that it doesn't form a set, so it is not itself an ordinal, so... not yet.

Probably because very few people propose playing solitaire and Settlers of Catan forever as their version of a Utopia. Spending eternity playing games on the holodeck, however, is frequently mentioned.

By the way, there may be some interruptions to posting sequence reruns over the course of the next week. Unfortunately, I'm going to be traveling and working on an odd schedule that may not let me reliably spend some time daily posting these things. I'll try to get to it as much as possible, but I apologize in advance if I miss a few days.

I tend to use the word fun.

We finish with high confidence in the script's authenticity

If you're already familiar with this particular leaked 2009 live-action script, please write down your current best guess as to how likely it is to be authentic.

Unless someone already tried to come up with an explicit probability, this ordering will bias the results. Ask people for their guesses before you tell them what you have already written on the subject.

0gwern
Well, no one familiar with the script before reading this essay seems to have reported anything. That was a bit sloppy on my part, though.

Your competition story qualifies you for an upvote, for munchkinry.

It's a pretty good idea for a sentence, too.

I will note that this seems as though it ought to be a problem that we can gather data on. We don't have to theorize if we can find a good sampling of cases in which a minister said they would resign, and then look at when they actually resigned.

Additionally, this post is mostly about a particular question involving anticipating political change, but the post title sounds like a more abstract issue in probability theory (how we should react if we learn that we will believe something at some later point).

And with this post, we have reached the last post in the 2008 Hanson-Yudkowsky AI Foom Debate. Starting tomorrow, we return to the regularly scheduled sequence reruns, and start moving into the Fun Theory Sequence.

I would advise putting a little bit more effort into formatting. Some of the font jumps are somewhat jarring, and prevent your post from having as much of an impact as you might hope.

7sbenthall
thanks. I'm new to this editor. will fix.
5Vaniver
Similarly, a number of words are incorrect (view->few, I think) and the footnote ends in the middle of a sentence.

The wikipedia page on the blind spot contains a good description, as well as a diagram of vertebrate eyes alongside the eye of an octopus, which does not have the same feature.

0Fadeway
Thanks. That test was fun.

I'm sorry you had to go through this. I've been to three Catholic funerals over the past two years, and found all of them to be particularly painful. I actually refused requests to perform readings, and thought about doing a eulogy like this. I didn't, and I'm impressed that you had the courage to do so.

3MileyCyrus
You showed more courage than I did. I wish I had come out as an atheist before my grandmother's funeral.

In discussions about a month or so ago, people expressed interest in running posts by Hanson, as well as a few others (Carl Shulman and James Miller), as part of the AI FOOM Debate. This is the 12th post in that debate by someone other than Yudkowsky. There are, after today, 18 more posts in the debate left, of which 9 are by Hanson. After that, we will return to the usual practice of just rerunning Yudkowsky's sequences.

0MichaelAnissimov
OK, I figured it was something like that. Thanks.

Every now and then, the Wiki seems to decide that my IP address is spamming the Wiki, and autoblock it. Sometimes it goes away in a day or so, and sometimes it doesn't. In the event that it doesn't, making a new username seems to resolve the issue, for some reason. I'm currently on account number 4, named "Wellthisisaninconvenience". Which is different from my previous account, "Thisisinconvenient".

Perhaps there is nothing in Nature more pleasing than the study of the human mind, even in its imperfections or depravities; for, although it may be more pleasing to a good mind to contemplate and investigate the application of its powers to good purposes, yet as depravity is an operation of the same mind, it becomes at least equally necessary to investigate, that we may be able to prevent it.

-John Hunter

Don't think, try the experiment.

-John Hunter

4b1shop
In the context of probability theory: Don't prove, try the Monte Carlo.
4soreff
Whether that is good advice or not depends on the evidence already in hand, and the difficulty of the experiment. Will ice survive heating to a million kelvin at standard pressure?

I think nigerweiss is asserting that "The experiment requires that you continue" activates System 1 but not System 2.

8Vaniver
The claim made by the OP is "if people believe in what they're doing, they will hurt people;" the claim made by nigerweiss is "if people use system 1 thinking, they will hurt people." To differentiate between them, we need a statement intended to make people use system 1 thinking without relying on them believing what they are doing. It's not clear to me that nigerweiss's division is more precise than the OP's division, or has significant predictive accuracy. I would have expected "you have no other choice" to evoke 'keep your head down, do what you're told, they must know what they're doing'; that is, the system 1 thinking that nigerweiss claims would lead people to push the button, when it led to fewer people pushing the button. Why is it a status attack that awakens system 2 (huh?), except because we know what we need to predict?

Prior probabilities seem to me to be the key idea. Essentially, young earth creationists want P(evidence|hypothesis) = ~1. The problem is that to do this, you have to make P(hypothesis) very small. Essentially, they're overfitting the data. P(no god) and P(deceitful god) may have identical likelihood functions, but the second one is a conjunction of a lot of statements (god exists, god created the world, god created the world 4000 years ago, god wants people to believe he created the world 4000 years ago, god wants people to believe he created the world 4000 years ago despite evidence to the contrary, etc). Each of these statements further decreases the prior probability that goes into the Bayesian update.
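To make the conjunction arithmetic concrete, here is a minimal Python sketch; the 0.5-per-claim priors are made-up illustrative numbers, not anything taken from the comment:

```python
# Two hypotheses with identical likelihoods P(evidence | H) = 1.
# One is a single claim; the other is a conjunction of several claims.
p_no_god = 0.5                   # illustrative prior, not a real estimate
claim_priors = [0.5] * 5         # "god exists", "created the world", "4000 years ago", ...
p_deceitful_god = 1.0
for p in claim_priors:
    p_deceitful_god *= p         # each added claim multiplies the prior down

# With equal likelihoods, the posterior odds simply follow the prior odds,
# so the conjunctive hypothesis starts out (and stays) 16:1 behind.
print(p_no_god, p_deceitful_god, p_no_god / p_deceitful_god)  # 0.5 0.03125 16.0
```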

I thought the explanations were just poorly written. But given that Luke and others seem to have reviewed it positively, I'd guess that it is substantially better than others.

Why does the table indicate that we haven't observed pandemics the same way we've observed wars, famines, and earth impactors?

0Stuart_Armstrong
We can observe past pandemics and past meteor impacts. But we can also observe current and future meteors, predict their trajectories, and see if they're going to be a threat. We can't really do this with pandemics. I.e., with meteors, we can use the past events and the present observations to predict the future; for pandemics, we can only use past events (to a large extent).

For what it's worth, I haven't found any of the Cambridge Introduction to Philosophy series to be particularly good. The general sense I have is that they're better used as a reference if you can't remember exactly how the professor explained something, than as a source to actually try to learn the topic independently. That being said, I haven't read the Decision theory one, so take this with a grain of salt.

1[anonymous]
Huh. It was recommended by Luke in the Best Textbooks post, and it seemed to have positive reviews. Maybe it's comparatively better than the series in general? What didn't you like about them?

I think any message of this sort is likely to lead to some unpleasantness. "Hey, I just downvoted a whole bunch of your old posts, but it's ok because I actually did think that all of those posts were bad." Downvote things that deserve to get downvoted, but don't make a scene out of it that's just going to poison the discussion.

Are you planning to do anything like the ritual sequence again this year?

2Raemon
Yes, but most likely not so extensive.

This post is by James Miller, who posted about a year ago that he was writing a book. It's apparently out now, and seems to have received some endorsements from some recognizable figures. If there's anyone here who's read it, how worthwhile of a read would it be for someone already familiar with the idea of the singularity?

1Kaj_Sotala
I'd recommend it. It's not exactly Earth-shattering, but it made a number of interesting points which I hadn't encountered before, such as pointing out that the mere possibility of a Singularity could by itself be an existential risk if people took it seriously enough. For example, if a major nuclear power thought - correctly or incorrectly - that a hostile country was about to build an AI capable of undergoing a hard takeoff, they could use nuclear weapons against the other country to prevent the completion of the AI, even at the risk of also causing World War III in the process. I also thought the discussion of sexbots was interesting, among other things.
0Vaniver
Hanson's review.

So if I were talking about the effect of e.g. sex as a meta-level innovation, then I would expect e.g. an increase in the total biochemical and morphological complexity that could be maintained - the lifting of a previous upper bound, followed by an accretion of information. And I might expect a change in the velocity of new adaptations replacing old adaptations.

But to get from there, to something that shows up in the fossil record - that's not a trivial step.

I recall reading, somewhere or other, about an ev-bio controversy that ensued when one party spo

... (read more)

As the problems get more difficult, or require more optimization, the AI has more optimization power available. That might or might not be enough to compensate for the increase in difficulty.

1RolfAndreassen
Yes, that's what I said! The point is that, if we are to see accelerating intelligence, the increase in optimising power must more than compensate for the increase in difficulty at every level, or at least on average.

What do people think of this idea? I'm personally interested in reading all of the debate, and I think I will, no matter what I wind up posting, so nobody else needs to feel lonely if they want to see all of it.

I think so, but truth be told I've actually never read through all of it myself. All of the bits of it I've seen seem to indicate that they hold similar positions in those debates to their positions in the original argument.

From the next post in the sequences:

There does exist a rare class of occasions where we want a source of "true" randomness, such as a quantum measurement device. For example, you are playing rock-paper-scissors against an opponent who is smarter than you are, and who knows exactly how you will be making your choices. In this condition it is wise to choose randomly, because any method your opponent can predict will do worse-than-average.
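A minimal Python sketch of that point (the `play` helper and the perfect-predictor opponent are my own illustration, not part of the quoted post): the opponent simulates your decision procedure and plays the counter-move, so a deterministic strategy loses every round, while genuinely random play can't be exploited.

```python
import random

MOVES = ["rock", "paper", "scissors"]
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # move -> what beats it

def score(mine, theirs):
    """+1 if I win, 0 for a tie, -1 if I lose."""
    if mine == theirs:
        return 0
    return 1 if COUNTER[theirs] == mine else -1

def play(strategy, rounds=100_000):
    """The opponent runs its own copy of `strategy` to predict me, then plays the counter."""
    total = 0
    for i in range(rounds):
        prediction = strategy(i)          # opponent simulates my decision procedure
        opponents_move = COUNTER[prediction]
        my_move = strategy(i)             # my actual choice
        total += score(my_move, opponents_move)
    return total / rounds

cycler = lambda i: MOVES[i % 3]              # predictable: the simulation matches my real move
randomizer = lambda i: random.choice(MOVES)  # unpredictable: simulation and real move are independent

print(play(cycler))      # -1.0: the predictor counters every single move
print(play(randomizer))  # ~0.0: random play ties on average, whatever the opponent does
```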

Because there's a simpler hypothesis (gravity) that not only explains the behavior of water, but also the behavior of other objects, motions of the planets, etc. There is still some tiny amount of probability allocated to the optimization hypothesis, but it loses out to the sheer simplicity and explanatory power of competing hypotheses.

0Maelin
I don't think I'm being clear. I don't understand what it means for something to be vs not-be an optimisation process. What features or properties distinguish an optimisation process from a not-optimisation process?

Optimization is a hypothesis. It's a complex hypothesis. You get evidence in favor of the hypothesis that water is an optimization process when you see it avoiding local minima and steering itself to the lowest possible place on earth.

0Maelin
But who says the water has to optimise for "lowest possible place"? Maybe it's just optimising for "occupying local minima". Out of all the possible arrangements of the water molecules in the entire universe that the water might move towards if you fill a bucket from the ocean and then tip it out again, it sure seems to gravitate towards a select few, pun intended. How can we define optimisation in a way that doesn't let us just say "it's optimising to end up like that" about any process with an end state?

Similarly, OP measures the system's ability to achieve its very top goals, not how hard these goals are. A system that wants to compose a brilliant sonnet has more OP than exactly the same system that wants to compose a brilliant sonnet while embodied in the Andromeda galaxy. Even though the second is plausibly more dangerous. So OP is a very imperfect measure of how powerful a system is.

I'm confused. A system that has to compose a brilliant sonnet and make sure that it exists in the Andromeda galaxy has to hit a smaller target of possible worlds than a... (read more)

I happen to agree with the quote; I just don't think it's particularly a quote about rationality. Just because a quote is correct doesn't mean that it's a quote about how to go about acquiring correct beliefs, or (in general) accomplish your goals. The fact that HIV is a retrovirus that employs an enzyme called reverse transcriptase to copy its genetic code into the host cell is useful information for a biologist or a biochemist, because it helps them to accomplish their goals. But it is rather unhelpful for someone looking for a way to accomplish goals in general.

Libertarian quote, or rationality quote?

-1wedrifid
A libertarian would assert that it is both. (Most others would probably agree with the claim, or at least with the implied instrumental rationality-related message.)

A recent conversation with Michael Vassar touched on - or to be more accurate, he patiently explained to me - the psychology of at least three (3) different types of people known to him, who are evil and think of themselves as "evil". In ascending order of frequency:

...

The second type was a whole 'nother story, so I'm skipping it for now.

Does anyone know what the second type was?

I tried to solve it on my own, but haven't been able to so far. I haven't been able to figure out what sort of function someone who knows that I'm using UDT will use to predict my actions, and how my own decisions affect that. If someone knows that I'm using UDT, and I think that they think that I will cooperate with anyone who knows I'm using UDT, then I should break my word. But if they know that...

In general, I'm rather suspicious of the "trust yourself" argument. The Lake Wobegon effect would seem to indicate that humans don't do it well.

0Manfred
If you're so smart, why ain't you a rock? :P And yeah, at some level you have to be checking for whether or not you are proving what the UDT agent will do - if you prove it you're safe, and if you don't you're not. The trouble is that checking for the proof can contain all the steps of the proof, in which case you might get things wrong because your search wasn't checking itself! So one way is to check for the proof in a way that doesn't correlate with the specific proof. "Did I check any proofs? No? Better not trust anyone."

If you try to add to that category people who know that, but think that they are smart enough, then it gets tricky. How do I know whether I actually am smart enough, or whether I just think I'm smart enough?

0Manfred
Hm, not sure. Obviously on the object level you can just prove what the UDT agent will do. But not being able to do that is presumably why you're uncertain in the first place. Still, I think people should usually just trust themselves. "I don't think I'm a rock, and a rock doesn't think it's a rock, but that doesn't mean I might be a rock."

I agree with Decius. Do you have a wiki account, so you can post your own edit under your own name?

1Decius
I don't have a wiki account, and I don't feel the need to retain credit. I might clean it up a bit if I was putting it out for summary rather than for discussion: 'Don't lie or use other black arts to convince others to believe what you believe, because...' I'm not sure if using this line to convince others that lies are immoral is hypocrisy, meta-hypocrisy, or both. The statement itself is purely rationalist pragmatism.