All of Grognor's Comments + Replies

Grognor30

I suggest a new rule: the source of the quote should be at least three months old. It's too easy to get excited about the latest blog post that made the rounds on Facebook.

Grognor570

It is because a mirror has no commitment to any image that it can clearly and accurately reflect any image before it. The mind of a warrior is like a mirror in that it has no commitment to any outcome and is free to let form and purpose result on the spot, according to the situation.

—Yagyū Munenori, The Life-Giving Sword

Grognor10

You may find it felicitous to link directly to the tweet.

0[anonymous]
You responded to the wrong post or gave the wrong link. I do see your point, fixed both quotes.
Grognor270

This reminds me of how I felt when I learned that a third of the passengers of the Hindenburg survived. Went something like this, if I recall:

Apparently if you drop people out of the sky in a ball of fire, that's not enough to kill all of them, or even 90% of them.

RobinZ160

Actually, according to Wikipedia, only 35 out of the 97 people aboard were killed. Not enough to kill even 50% of them.

Grognor-10

I have become 30% confident that my comments here are a net harm, which is too much to bear and so I am discontinuing my comments here unless someone cares to convince me otherwise.

Edit: Good-bye.

[This comment is no longer endorsed by its author]

What the hell? Did you catch Konkvistador disease or something? What is up with high-quality contributors deciding to up and leave?

I, for one, think that your posts are valuable to the community.

(I'm assuming you mean that you suspect your comments cause harm to others; obviously, if you think you're spending too much time procrastinating on LessWrong, then leaving is fine.)

Grognor00

Which is not the same thing as expecting a project to take much less time than it actually will.

Edit: I reveal my ignorance. Mea culpa.

[This comment is no longer endorsed by its author]
7novalis
I am using the more generalized definition. Wikipedia says:
Grognor130

Parts of this I think are brilliant, other parts I think are absolute nonsense. Not sure how I want to vote on this.

there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

This strikes me as probably true but unproven.

My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology.

You are anthropomorphizing the universe.

2Mitchell_Porter
Phenomenology is the study of appearances. The only part of the universe that it is directly concerned with is "you experiencing existence". That part of the universe is anthropomorphic by definition.
0David_Allen
It seems possible for an AI to engage in a process of search within the ontological Hilbert space. It may not be efficient, but a random search should make all parts of any particular space accessible, and a random search across a Hilbert space of ontological spaces should make other types of ontological spaces accessible, and a random search across a Hilbert space containing Hilbert spaces of ontological spaces should... and on up the meta-chain. It isn't clear why such a system wouldn't have access to any ontology that is accessible by the human mind.
Grognor20

That isn't the planning fallacy.

1novalis
Intel kept throwing money at the project for years, indicating that they must have been planning on the basis of these predictions.
Grognor20

This is a better explanation than I could have given for my intuition that physicalism (i.e. "the universe is made out of physics") is a category error.

Grognor60

Whether or not a non-self-modifying planning Oracle is the best solution in the end, it's not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov.

—Reply to Holden on Tool AI

Grognor00

Nonsense. The problem he posed has always been around, and the solution is just to avoid repeating the same state twice, because that results in a draw.

Grognor40

I like this. I was going to say something like,

"Suppose , what does that say about your solutions designed for real life?" and screw you I hate when people do this and think it is clever. Utility monster is another example of this sort of nonsense.

but you said the same thing, and less rudely, so upvoted.

Grognor00

Perhaps it was imprudent, but I assumed that someone trying to promote rationality would herself be rational enough to overcome this parochialism bias.

Grognor20

"Any and all cost" would subsume low probabilities if it were true (which, of course, it is not).

0Shmi
I don't see how. To me cost is what you pay, not what you get. If the poll said "an unknown but probably extremely small chance of cybernetic immortality", then it could be comparable to cryonics.
Grognor90

The poll results are intriguing. 35% would want cybernetic immortality at any and all cost! And yet I don't see 35% of people who can afford it signed up for cryonics.

-2selylindi
Hey, I'd be more than willing to sign up for a couple hundred years of indentured servitude if that's what it took to pay for several thousand subsequent years. But I can't afford cryonics, and the fact that no one is willing to take up that offer of indentured servitude as payment for cryonics is very strong evidence against cryonics having a noteworthy probability of success.

35% of people who read the article and took the quiz right after reading. Admittedly, the numbers are larger than I naively expected to get past each stage of filtering. I was thinking maybe 300 people taking the poll total, and it's over 30k now, with over 10k answering "at any cost". (I would love to see the log files for that page... referring URLs, IP addresses, etc.)

Shmi270

Because cryonics is not even close to "certain immortality now".

0orthonormal
Meta-anthropics is fun!
Grognor00

A failure or so, in itself, would not matter, if it did not incur a loss of self-esteem and of self-confidence. But just as nothing succeeds like success, so nothing fails like failure. Most people who are ruined are ruined by attempting too much. Therefore, in setting out on the immense enterprise of living fully and comfortably within the narrow limits of twenty-four hours a day, let us avoid at any cost the risk of an early failure. I will not agree that, in this business at any rate, a glorious failure is better than a petty success. I am all for the

... (read more)
Grognor40

I personally found the research in Influence rather lacking and thought Cialdini speculated too much. But chapter 3 of the book is dead on.

Grognor160

Do people think superrationality, TDT, and UDT are supposed to be useable by humans?

I had always assumed that these things were created as sort of abstract ideals, things you could program an AI to use (I find it no coincidence that all three of these concepts come from AI researchers/theorists to some degree) or something you could compare humans to, but not something that humans can actually use in real life.

But having read the original superrationality essays, I realize that Hofstadter makes no mention of using this in an AI framework and instead thinks... (read more)

9A1987dM
Some ways humans act resemble TDT much more than they resemble CDT: some behaviours, such as voting in an election with a negligible probability of being decided by one vote, or refusing small offers in the Ultimatum game, make no sense unless you take into account the fact that similar people thinking about similar issues in similar ways will reach similar conclusions. Also, the one-sentence summary of TDT strongly reminds me of both the Golden Rule and the categorical imperative. (I've heard that Good and Real by Gary Drescher discusses this kind of stuff in detail, though I haven't read the book itself.) (Of course, TDT itself, as described now, can't be applied to anything because of problems with counterfactuals over logically impossible worlds such as the five-and-ten problem; but it's the general idea behind it that I'm talking about.)
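To make the correlated-decision point concrete, here is a minimal sketch in Python with invented numbers. It is a cartoon of superrational-style reasoning about voting, not an implementation of TDT itself (which, as noted above, is not yet applicable as described):

```python
# Toy comparison (all numbers invented) of CDT-style and
# superrational/TDT-style reasoning about whether to vote.

cost_of_voting = 1.0        # utility cost of going to the polls
value_if_side_wins = 1e6    # utility to you if your side wins
p_one_vote_decisive = 1e-8  # chance your single vote flips the result
p_bloc_decisive = 0.05      # chance the whole bloc of similar voters
                            # flips the result if they all vote

# CDT-style: count only the causal effect of your one marginal vote.
cdt_gain = p_one_vote_decisive * value_if_side_wins - cost_of_voting
print(f"CDT net gain:       {cdt_gain:+.2f}")  # about -0.99 -> stay home

# Superrational-style: similar people reasoning similarly will reach the
# same conclusion, so compare "everyone like me votes" against
# "no one like me votes".
sr_gain = p_bloc_decisive * value_if_side_wins - cost_of_voting
print(f"Superrational gain: {sr_gain:+.2f}")   # about +49999 -> vote
```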
3Randaly
Some people think so; they are wrong. (Examples: 1, 2, 3, 4, 5, 6, 7. Most of these take overly broad, vague definitions of a person's "platonic algorithm"; #5 is forgetting that natural selection acts on the level of genes, not people.) Eliezer: "This is primarily a theory for AIs dealing with other AIs." Unfortunately, it's difficult to write papers or fiction publicizing TDT that solely address AIs, especially when the description of TDT needs to be in a piece of Harry Potter fanfiction. On a slightly more interesting side note, if TDT were applicable in real life, people would likely be computation hazards, since a simulation of another person accurate enough to count as implementing the same, simulated platonic algorithm as the one they actually use would also quite possibly be complex enough to be a person.
0Larks
Why do you think we would need to get everyone to use UDT for it to be useful to you? It's not like UDT can't deal with non-UDT agents.
-3drethelin
TDT is not even that good at cooperating with yourself, if you're not in the right mindset. The notion that "if you fail at this, you will fail at this forever" is very dangerous to depressed people, and TDT doesn't say anything useful (or at least nothing useful has been said to me on the topic) about entities that change over time, i.e. humans. I can't timelessly decide to bench-press 200 pounds whenever I go to the gym if I'm physically incapable of it.
7Vladimir_Nesov
It's perhaps more useful to see these as (frameworks for) normative theories, describing which decisions are better than their alternatives in certain situations, analogously to how laws of physics say which events are going to happen given certain conditions. It's impossible in practice to calculate the actions of a person based on physical laws, even though said actions follow from physical laws, because we lack both the data and the computational capabilities necessary to perform the computation. Similarly, it's impossible in practice to find recommendations for actions of a person based on fundamental decision theory, because we lack both the problem statement (detailed descriptions of the person, the environment, and the goals) and computational capabilities (even if these theories were sufficiently developed to be usable). In both cases, the problem is not that these theories are "impossible to implement in humans"; and certain approximations of their conclusions can be found.
Grognor140

An expert on political ruthlessness, not an expert at political ruthlessness!

Grognor10

My current guess is that having the knows-the-solution property puts them in a different reference class. But if even a tiny fraction deletes this knowledge...

Grognor10

The part you highlight about shminux's comment is correct, but this part:

this would define "looks attractive to a certain subset of humans"

is wrong; attractiveness is psychological reactions to things, not the things themselves. Theoretically you could alter the things and still produce the attractiveness response; not to mention the empirical observation that for any given thing, you can find humans attracted to it. Since that part of the comment is wrong but the rest of it is correct, I can't vote on it; the forces cancel out. But anyway I ... (read more)

Grognor50

For a while, I assumed that I would never understand UDT. I kept getting confused trying to understand why an agent wouldn't want or need to act on all available information and stuff. I also assumed that this intuition must simply be wrong because Vladimir Goddamned Nesov and Wei Motherfucking Dai created it or whatever and they are both straight upgrades from Grognor.

Yesterday, I saw an exchange involving Mitchell Porter, Vladimir Nesov, and Dmytry_messaging. The latter of these insisted that one-boxing in transparent Newcomb's (when the box is empty) wa... (read more)

Grognor110

Both of the studies linked to at the top of this post, on which the entire post is based, have been discredited. Even if they were true, I think it was a stretch to go from those to postulating a generalized verbal overshadowing bias.

With the benefit of hindsight I can say that this post was probably a mistake, which leaves me a bit dumbfounded at its karma score of 61 and endorsement by Newsome. When I scrolled down to the bottom I saw that I had already downvoted it, which made me even more confused.

-6Will_Newsome
1wedrifid
Where? Was this after the time that the post was written?
Grognor30

I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything.

I disagree with this claim. If you are capable of understanding concepts like the Generalized Anti-Zombie Principle, you are more than capable of recognizing that there is no god and that that hypothesis wouldn't even be noticeable for a bounded intelligence unless a bunch of other people had already privileged it thanks to anthropomorphism.

Also, please don't call what we do here, "rationalism". Call it "rationality".

Grognor40

On reflection, I agree with you and will be downvoting all of Clippy's comments and those of all other abusive sockpuppets I'm aware of.

Grognor110

I really wish you would have put a disclaimer on these posts the likes of:

One of the assumptions The Art of Strategy makes is that rational agents use causal decision theory. This is not actually true, but I'll be using their incorrect use of "rationality" in order to make you uncomfortable.

Anyway,

Nick successfully meta-games the game by transforming it from the Prisoner's Dilemma (where defection is rational) [...]

this is the problem with writing out your whole sequence before submitting even the first post. You make the later posts insu... (read more)

0wedrifid
Is there a malformed link in there where the ")" appears?
Grognor10

I initially had the parent upvoted, but I retracted it on learning that the grandparent comment is speaking from experience, and since I have the same experience, it's difficult not to believe.

Grognor40

Could someone please explain to me exactly, precisely, what a utility function is? I have seen it called a perfectly well-defined mathematical object as well as not-vague, but as far as I can tell, no one has ever explained what one is, ever.

The words "positive affine transformation" have been used, but they fly over my head. So the For Dummies version, please.

6VincentYu
Given an agent with some set X of choices, a utility function u maps from the set X to the real numbers R. The mapping is such that the agent prefers x1 to x2 if and only if u(x1) > u(x2). This completes the definition of an ordinal utility function.

A cardinal utility function satisfies additional conditions which allow easy consideration of probabilities. One way to state these conditions is that probabilities defined on X are required to be linear over u. This means that we can now consider probabilistic mixes of choices from X (with probabilities summing to 1). For example, one valid mix would be 0.25 probability of x1 with 0.75 probability of x2, and a second valid mix would be 0.8 probability of x3 with 0.2 probability of x4. A cardinal utility function must satisfy the condition that the agent prefers the first mix to the second mix if and only if 0.25u(x1) + 0.75u(x2) > 0.8u(x3) + 0.2u(x4).

Cardinal utility functions can also be formalized in other ways. E.g., another way to put it is that the relative differences between utilities must be meaningful. For instance, if u(x1) - u(x2) > u(x3) - u(x4), then the agent prefers x1 to x2 more than it prefers x3 to x4. (This property need not hold for ordinal utility functions.)

Other notes:

* In my experience, ordinal utility functions are normally found in economics, whereas cardinal utility functions are found in game theory (where they are essential for any discussion of mixed strategies). Most, if not all, discussions on LW use cardinal utility functions.
* The VNM theorem is an incredibly important result on cardinal utility functions. Basically, it shows that any agent satisfying a few basic axioms of 'rationality' has a cardinal utility function. (However, we know that humans don't satisfy these axioms. To model human behavior, one should instead use the descriptive prospect theory.)
* Beware of erroneous straw characterizations of utility functions (recent example). Remember the VNM theorem—very fru
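To make the definition concrete, here is a minimal sketch in Python of the cardinal case described above; the outcomes x1 through x4 and their utility values are invented for illustration:

```python
# Hypothetical outcomes and utilities, invented for illustration.
u = {"x1": 10.0, "x2": 4.0, "x3": 7.0, "x4": 1.0}

def expected_utility(lottery, u):
    """Expected utility of a probabilistic mix of outcomes.

    lottery: dict mapping outcome -> probability (probabilities sum to 1).
    """
    return sum(p * u[x] for x, p in lottery.items())

mix_1 = {"x1": 0.25, "x2": 0.75}  # 0.25 probability of x1, 0.75 of x2
mix_2 = {"x3": 0.80, "x4": 0.20}  # 0.80 probability of x3, 0.20 of x4

# The agent prefers mix_1 to mix_2 iff its expected utility is higher:
# 0.25*10 + 0.75*4 = 5.5  versus  0.8*7 + 0.2*1 = 5.8, so mix_2 wins.
print(expected_utility(mix_1, u), expected_utility(mix_2, u))

# "Positive affine transformation": a*u + b with a > 0 represents the
# same preferences, since it rescales every expected utility identically.
u_rescaled = {x: 3.0 * v + 2.0 for x, v in u.items()}
assert (expected_utility(mix_1, u) > expected_utility(mix_2, u)) == \
       (expected_utility(mix_1, u_rescaled) > expected_utility(mix_2, u_rescaled))
```

The final assertion is the "positive affine transformation" point from the question: rescaling all utilities by a positive factor and shifting them by a constant leaves every preference between lotteries unchanged.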
1Shmi
Wiktionary seems to have a decent definition. It boils down to listing all possible outcomes and ordering them according to your preferences. The words "affine transformation" reflect the fact that all possible ways to assign numbers to outcomes which result in the same ordering are equivalent.
0[anonymous]
Isn't this a Wikipedia article on them?
Grognor60
  • The first half of the first sentence of your comment is incomprehensible. "If you throw enough money into the hole that you spend less" is somewhere between gibberish and severely sleep-deprived mumblings.

  • A -2 score means almost nothing. Two people downvoted your comment, so what?

  • Calling the community "pathetic" in response to downvotes is hypocrisy.

  • In all likelihood, the downvoters did not even recognize your "Keynesian slant" and were downvoting because signaling. Oftentimes people will downvote a -1 comment just by t

... (read more)
-2homunq
Point-by-point response:

* Thank you. Edited to clarify the meaning.
* -2 is certainly at least two standard deviations down for my comments (excluding the "whine" followup, which I was fully aware was going to be downvoted and posted anyway), so I considered it a clear enough signal to be worthy of comment. [Edit: on reconsidering, I realize that I meant below 2.5th percentile, not really 2 standard deviations down. Variance is large, so 2 SD down would have to be almost as far down as one of my best comments is up.]
* Hypocrisy? I don't think that word means what you think it means.
* Fair enough.
* Good points about generalizing and about the meaning of "groupthink". I meant "epistemic closure", not "groupthink", and that was sloppy.
* "Anything other": I still think it was a reasonable hypothesis (though your use of the word "conspiracy" is an unfair caricature; I was fully aware that it was just two people, and never said otherwise); but if I read you as saying that I should have thought "anything other" as well as this hypothesis, then your point is telling. I did fail to consider any specific alternative hypotheses (mentioning them only generally at the end), and thus deserved the -4 I got for my whine.
Grognor50

I believe this term is used solely to countersignal and has no more technical meaning than "guy I don't like who defends females".

0[anonymous]
Yep.
Grognor20

Hello, friend, and welcome to Less Wrong.

I do think you should start a discussion post, as this seems clearly important to you.

My advice to you at the moment is to brush up on Less Wrong's own atheism sequence. If you find that insufficient, then I suggest reading some of Paul Almond's (and I quote):

great atheology

If you find that insufficient, then it is time for the big guy, Richard Dawkins:

If you are somehow still unsatisfied after all this, lukeprog's new website should direct you to some other resources, ... (read more)

Grognor20

It occurred to me that although I agree with Statement #5 - "People are morally responsible for the opportunity costs of their actions," I do not think it is a claim actually being made by the optimal philanthropy zeitgeist. I think the actual claim is "Your actions have opportunity costs and you should probably think about that," which should be uncontroversial.

Grognor00

The "rationality" link in the "Bonuses" section has become broken.

Grognor20

After the addition I just made, my list contains 58 items, and yours contains 47. If you feel like updating. (Also, I removed one, because William Eden deleted one of his accounts.)

Grognor50

Comments section makes for interesting reading.

No it doesn't; it is the standard cloud of thought-failure.

7D_Alex
Well, perhaps I should have been more specific about what I found interesting. Remember, this is a mainstream news rag:

* there are 103 comments as I write this. This is an exceptionally high number; few "The Age" articles attract such volume.
* over half the comments are reasonably well thought out and relevant... not only a high number but an astonishingly high proportion, considering this is a mainstream publication and considering the usual standard of responses to The Age articles.
* of the "well thought out and relevant" comments, about 60% by my count support the idea of a technological singularity... again, a far higher proportion than I would have expected.

The conclusion I drew from reading (well... skimming, mostly) the comments is that the idea of a forthcoming technological singularity has far more penetration and support than I previously thought.
Grognor30

Do you see that as a good thing?

1Raw_Power
Seeing improvements in ways that are immediately tangible is very encouraging and motivating.
2Nick_Tarleton
I see it as a true thing, and thus something to cooperate with. Normatively, I see it as instrumentally bad, but related to something I want to protect.
0[anonymous]
No, as a true thing.
Grognor40

So, let's take this hypothetical (harrumph) youth. They see irrationality around them, obvious and immense, they see the waste and the pain it causes. They'd like to do something about it. How would you advise them to go about it?

Donate to CFAR. There's no good reason to demand a local increase in rationality.

[...]should we try to distance ourselves from atheism and anti-religiousness as such? Is this baggage too inconvenient, or is it too much a part of what we stand for?

We don't stand for atheism; we stand by atheism, prepared to walk away at any ... (read more)

0blogospheroid
I think that in the long run, donating to SENS might be a better idea, right? Nothing would dilute religion more than the prospect of a very long life on earth. Looking at Westerners and East Asians with a little grudging envy because they are rich and happy is probably an order of magnitude less bad than the realisation that they are going to be like that forever, while you die, your sons die, and their sons die.
3Nick_Tarleton
People with a desire to improve things generally have a very strong desire to spend some of that effort contributing to and seeing local improvements.
Grognor60

The title of this post tempted me to make another article called "Eliezer apparently right about just about everything else" but I already tried that and it was a bad idea.

8handoflixue
Have you actually catalogued a comprehensive list of Eliezer's predictions, and which ones have been shown correct, wrong, or inconclusive?
6JoshuaZ
Is he right when he says to beware cached thoughts and where he says to beware when confronting new evidence simply repeating one's pre-existing evidence and arguments rather than actually updating?
Grognor00

My sense is that this assertion can be empirically falsified for all levels of abstraction below "Do what is right."

Indeed, this is one of many reasons why I am starting to think "go meta" is really, really good advice.

Edit: Clarification, what I mean is that I think virtue ethics, deontology, utilitarianism, and the less popular ethical theories agree way more than their proponents think they do. At this point this is still a guess.

0TimS
I don't follow. Discussing theories of morality is already quite meta from the object level moral decisions we face in our daily lives. Going another level of meta is unlikely to illuminate - it certainly doesn't seem likely to be helpful in doing the impossible.
Grognor-20

[...]and related-to-rationality enough to deserve its own thread.

I've gotten to thinking that morality and rationality are very, very isomorphic. The former seems to require the latter, and in my experience the latter gives rise to the former. So they may not even be completely distinguishable. We've got lots of commonalities between the two, noting that both are very difficult for humans due to our haphazard makeup, and both have imaginary Ideal versions (respectively: God, and the agent who only has true beliefs and optimal decisions and infinite comp... (read more)

0TimS
My sense is that this assertion can be empirically falsified for all levels of abstraction below "Do what is right." But in a particular society or sub-culture, more specific assertions can be uncontroversial - in an unhelpful in solving any problems kind of way. That was what I took away from Applause lights.
Grognor-10

Irrationality game comment

The correct way to handle Pascal's Mugging and other utilitarian mathematical difficulties is to use a bounded utility function. I'm very metauncertain about this; my actual probability could be anywhere from 10% to 90%. But I guess that my probability is 70% or so.
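For concreteness, here is a minimal sketch in Python of how a bounded utility function defuses the mugging; the tanh squashing and all the numbers are arbitrary choices for illustration, not a canonical proposal:

```python
import math

BOUND = 1e6  # arbitrary cap on utility, in "utils"

def bounded_utility(raw_value):
    """Squash an unbounded raw value into the interval (-BOUND, BOUND)."""
    return BOUND * math.tanh(raw_value / BOUND)

p_mugger_honest = 1e-20         # your probability the mugger delivers
promised_payoff = 3.0 ** 100.0  # astronomically large promised reward

# With an unbounded utility function, the huge payoff dominates and the
# expected value says to pay the mugger:
print(p_mugger_honest * promised_payoff)                   # ~5e27

# With a bounded utility function, the payoff saturates near BOUND, so
# the expected utility of paying stays negligible:
print(p_mugger_honest * bounded_utility(promised_payoff))  # ~1e-14
```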

Grognor10

Could have hyperlinked it to the article.

Grognor60

On the People page, the picture next to Richard Carrier's name is the same as the picture next to Richard Boyd's.

and that's because science.

That made me smile, [edit] especially because I had just recently read the Language Log article Because NOUN.

The rest of the video made me kind of uncomfortable, though, because it felt like (and I guess sort of was) an advertisement, and you keep saying "worldview naturalism" where anyone else would have said "the naturalistic worldview" or just "naturalism".

(And this is just a person... (read more)

1Nectanebo
Seconded, that felt unnatural and kinda irked me.
Grognor10

Signing up was a waste of time. This course is for people who don't know anything at all.

Grognor00

Much to everyone else's chagrin.

Grognor00

Nope, I'm talking about the humans in question's subjective "nows", not their futures. Although a person who isn't particularly rational, has never heard of rationality, and wouldn't feel particularly motivated to become more rational if you mentioned it to him, has a pretty irrational-looking future; and in such a case there's no choice to make, no will, only a default path.
