All of Yasser_Elassal's Comments + Replies

While the venue was a good choice for several reasons (especially for group bonding), one of its downsides was that it was somewhat loud. Conversation was still possible for most of us, but there was a hearing-impaired LessWronger in attendance who was unfortunately unable to participate in any group conversations. And while it's not always possible to accommodate everyone, it seems that a quieter venue for future meetings would not only benefit him, but facilitate communication for everyone and increase the maximum conversational group size.

2darius
Even with quiet I'm pretty marginal at interacting with a group, I'm afraid, but it's a nice suggestion. (What can work for me as the person in question is one-on-one chat in a quiet environment. So why did I come? Just for the sake of maybe getting surprised -- and I did have a nice chat with mindviews on the way.)

The bathtub was supposed to illustrate the collective property notion, not the status-quo notion.

Well that clears things up then. I realize you never included the word "further", but I had to insert it in order to use your bathtub example to interpret the status quo notion in any meaningful way.

Assuming that had been your intent, the implied reductio was very much part of my point. I didn't think you would want the factory to continue dumping waste, which is why I thought your argument about "status quo" was flawed.

But since you've c... (read more)

2Alicorn
For all practical purposes, I agree completely.

You're either ignoring "absent human action" or taking it to mean something wildly different from what I had in mind.

I took it to mean "absent further human action", which I thought was the only coherent way to interpret your post. (If that's not what you meant, then please forgive the rant.)

If what you really meant was "absent human action at all" (i.e. just nature), then in your original example about koi, the "natural" status quo would not have been no-koi-in-bathtub, but instead no-bathtub-at-all.

So the only w... (read more)

0[anonymous]
Get rid of the roommate. Shower with the koi.
2Alicorn
Of course. However, since I think that nature probably belongs to all humans now and in the future, I couldn't use a nature example without begging the question and having it be giant and cumbersome. The bathtub was supposed to illustrate the collective property notion, not the status-quo notion. You're inserting the word "further". I never included or meant to include the word or notion of "further". Among other things, that would lead to the conclusion that once a factory is already set up to dump waste into a river (for instance), since it'd take further human action to undo that setup, it should be left in place unless everyone agrees to change it. But that's not the answer I want - I think it matters that it took human action to set it up that way to begin with.

I privilege the status quo

I wholeheartedly disagree with this mentality, and I think it's one of the major hindrances to the righting of social injustice. When people feel like they're entitled to "the way things are", it's difficult for them to notice when the status quo is unfair in a way that benefits them at the expense of others.

In your example about the koi fish in the bathtub, the no-koi-containing state of affairs doesn't win out because it's the status quo, but because the disutility of not being able to shower (where there was a reas... (read more)

1Alicorn
You're either ignoring "absent human action" or taking it to mean something wildly different from what I had in mind. Buying a slave is a human action. I used the word "status quo" because we were talking about "nature" - a thing that usually includes in its definition that humans haven't messed with it all that much. I'd have chosen a different term (or more likely, made one up - I don't think there is a good one already for the general case) if the topic had not been nature. If I moved into an apartment only to discover that the only bathtub was home to koi, I think much of my irkedness would stem from having been subject to misleading advertising. Misleading advertising is certainly a human action.

My response was to Christian's implication that a rationality program isn't necessarily buggy for outputting irrational behaviors because it must account for human emotions. My point was that human emotions are part of the human rationality program (whether we can edit our source code or not) and that if they cause an otherwise bug-free rationality program to output irrational behaviors, then the emotions themselves are the bugs.

In your response, you asked about emotions that produce behaviors advantageous to the agent's goals, which is rational behavior, ... (read more)

I suppose you're saying that when a useful heuristic (allowing real-time approximate solutions to computationally hard problems) leads to biases in edge cases, it shouldn't be considered a bug because the trade-off is necessary for survival in a fast-paced world.

I might disagree, but then we'd just be bickering about which labels to use within the analogy, which hardly seems useful. I suppose that instead of using the word "bug" for such situations, we could say that an imprecise algorithm is necessary because of a "hardware limitation"... (read more)

An emotion that doesn't correlate with reality is itself a bug. Sure, it may not be easy to fix (or even possible without brain-hacking), but it's a bug in the human source code nonetheless.

To extend the analogy, it's like a bug in the operating system. If that low-level bug causes a higher-level program to malfunction, you can still blame "buggy code" even if the higher-level program itself is bug-free.

3Nick_Tarleton
Even if it's advantageous to the agent's goals (not evolutionary fitness)? Emotions don't have XML tags that say "this should map to reality in the following way".
0epistememe
A male having a higher opinion of himself (pride) than he realistically deserves may prove evolutionarily advantageous. If this disconnect from reality improves reproductive fitness then it can't be considered a bug.
4Christian_Szegedy
If you design a system with optimal resource usage for certain operating conditions, then you do not consider a failure outside those operating conditions a bug. You can always make the system more and more reliable at the expense of higher resource usage, but even in human-engineered systems, over-design is considered a mistake. I don't want to argue that the brain is an optimal trade-off in this sense, only that it is extremely hard to tell the genuine bugs from the fixes with strange side effects. Maybe the question itself is meaningless. I am rather surprised by the fact that although the human brain evolved not as an abstract theorem prover but as the controller of a procreation machine, it still performs remarkably well in quite a few logical and rational domains.

To use your analogy. Any person who doesn't provide the expected output is often deemed crazy... It doesn't mean that there is a bug in the person, perhaps sometimes it's a bug in reality.

In the context of my analogy, it's nonsense to say that reality can have bugs.

I suppose you meant that sometimes the majority of people can share the same bug, which causes them to "deem" that someone who lacks the bug (and outputs accordingly) is crazy.

But there's still an actual territory that each program either does or does not map properly, regardless of... (read more)

0jastreich
I suppose what I was referring to is a spec bug; the bug is in expecting the wrong (socially accepted) output, not an actual "the universe hiccuped and needs to be rebooted." The reason for the spec bug might not be a shared bug, but programs operating on different inputs. For instance, Tesla... Anyone who knew Tesla described him as an odd man, and a little crazy. At the same time, he purposefully filled his input buffer with the latest research on electricity and purposefully processed that data differently than his peers in the field. He didn't spend much time accumulating input on proper social behavior, or on how others would judge him on the streets. It is seen as a crazy thing to do, to pick up wounded pigeons on the street, take them home, and nurse them back to health, because the spec of the time (the norms of society) said it was odd to do. An old friend of mine who I haven't seen in years is an artist. He's a creative-minded person who thinks that rationality would tie his hands too much. That said, when I was younger it surprised me the types of puzzles he was able to solve, because he'd try the thing that seemed irrational.

Stupidity is the lack of mental horsepower. A stupid person has a weak or inefficient "cognitive CPU".

Craziness is when the output of the "program" doesn't correlate reliably with reality due to bugs in the "source code". A crazy person has a flawed "cognitive algorithm".

It seems that in humans, source code can be revised to a certain degree, but processing power is difficult (though not impossible) to upgrade.

So calling someone crazy (for the time being) is certainly different from calling someone stupid.

6ABranco
Excellent distinction, Yasser. I would add one more case: Wrongness is when the output of the "program" doesn't correlate reliably with reality. But this could happen not only because the algorithm is flawed (wrong because crazy), but also because of insufficient or incorrect input. I think this is an important distinction, because a person can be smart (non-stupid) and rational (non-irrational = non-crazy) but still wrong nevertheless — and those around would call him "crazy" or "stupid" undeservedly. Example: CEOs taking calculated risks but being fired because the company, guided by them, flipped the coin and got heads instead of the desired tails. Stakeholders expected them to be omniscient. Those CEOs who get it right will be perceived as omniscient gurus. Hindsight bias will make them write books on how to be successful; survivorship bias will lure people into buying them. Not being crazy makes your output less wrong, but it doesn't guarantee it to be right, either. If I didn't get it wrong in my analysis above (puns intended), would it be fair to say that this community, having the mission of fixing the biases in our algorithms, should be even more appropriately called Less Crazy instead?
1Christian_Szegedy
I am not sure this type of "craziness" itself is always a bug. Irrational beliefs and behaviors often have perfectly rational explanations that make sense from a mental-health point of view: humans are more emotional than logical creatures. Internally coping with (often unconscious) emotional problems can be a higher-priority personal task than correlating with reality in every possible respect.
-2jastreich
To use your analogy. Any person who doesn't provide the expected output is often deemed crazy... It doesn't mean that there is a bug in the person, perhaps sometimes it's a bug in reality. I've talked to a number of people who most would call crazy (none of them went to the madhouse -- at least that I know of). When you begin to look at things from their perspective, you sometimes find that they see patterns others are missing, but they lack the social graces or any way to relate those patterns to others, so the insight is lost. On the other hand, I think that we are all "crazy" and "stupid" in our own ways. I think there are really extreme cases of both.
3Eliezer Yudkowsky
Also yup.

Your utility estimates at any given time should already take into account all of the data available to you at that time, including your previous estimates.

In other words, if you decide you don't want to go to a movie you've already purchased a ticket for, that decision has already been influenced by the knowledge that you did want to go to the movie at some point, so there's no reason to slide your estimate again.
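The decision rule being described can be sketched in a few lines (a hypothetical illustration, not from the original thread; the function name and the numbers are made up):

```python
# Hypothetical sketch of sunk-cost-free decision-making: once the ticket
# is bought, its price is spent whether or not you attend, so it cancels
# out of the comparison and should not enter the decision.

def should_attend(enjoyment_of_movie: float, value_of_alternative: float) -> bool:
    """Decide based only on utilities still attainable now.

    The ticket price is deliberately absent from the inputs: it is
    already sunk, so it affects both options equally.
    """
    return enjoyment_of_movie > value_of_alternative

# You no longer want to go: the movie is now worth 2 utils to you,
# staying home is worth 5. Whatever you paid for the ticket is irrelevant.
print(should_attend(2, 5))  # False: skip the movie
```

The sunk-cost fallacy, in this framing, would amount to adding the ticket price back onto `enjoyment_of_movie` after it has already been spent.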

0DanielLC
It should have been, but you get pretty much the same result from the sunk cost fallacy, so people do that instead. We didn't evolve to be rational. We evolved to succeed.

Stop equating skills with intelligence.

0cousin_it
If I replace "intelligence" with "skills", the point still stands.

I live in San Clemente, but I'd be willing to drive anywhere in Orange County for an occasional meetup.

I chose to believe in the existence of God - deliberately and consciously. This decision, however, has absolutely zero effect on the actual existence of God.

If you know your belief isn't correlated to reality, how can you still believe it?

To be fair, he didn't say that the actual existence of God has absolutely zero effect on his decision to believe in the existence of God.

His acknowledgement that the map has no effect on the territory is actually a step in the right direction, even though he has many more steps to go.

1mamert
My thoughts exactly. Seeing that statement, I must absolutely AGREE with the second part, and only politely point out that he should rephrase the first part, working "probability" and "working hypothesis" into it.

A banal one is that misinforming takes effort and not informing saves effort.

That's an important distinction. In both scenarios, the Carpenter suffers the same disutility, but the utility for Walrus is higher for "secret" than for "lies" if his utility function values saving effort. Perhaps that's the reason we don't feel morally obligated to walk the streets all day yelling correct information at people even though many of them are uninformed.

However, this rationalization breaks down in a scenario where it takes more effort to keep ... (read more)

3MasterGrape
I'm glad I took the time to read all the way to the bottom because this is exactly what I wanted to point out. If the Carpenter must act to misinform, then the Carpenter is busy. If the Carpenter can withhold information effortlessly, then he is free to do something else. The opportunity cost of lying might account for some of the pro-omission rationalization. Then again, we're not surrounded by so many opportunities in the real world, are we? But we might consider that this is specifically a conversation about revealing painful truths. Like in a world where the walrus really would be hurt if he found out pigs had wings. Let's say a person can only tell so many painful truths before they lose the ability to affect an individual (or any individual in the same network). If you go around explaining to all your friends how each of them is less than perfect, they might end your friendship (or at least ignore you). So the Carpenter might realize he wants to save his painful truth for a more important truth where the revelation will be important enough to him to offset the change in their relationship.

My hypothesis is that she simply meant, "It makes me happy to pretend that people are nicer than they really are."

I don't understand your objection to anonymous review on the basis of accountability. Doesn't "anonymous review" in this context just mean that the reviewers don't know the authors and affiliations of the papers they're reviewing? In that case, what is there to be accountable for? The reviewers themselves aren't any more anonymous in "anonymous review" than in standard review, are they?

1billswift
Maybe I was wrong about that, but I understood it to also mean that the reviewer was unknown to the author, even after the review. I have heard several stories (can't remember the sources; possibly only urban-scientific legends) of reviewers giving poor reviews of work that could have pre-empted things they were currently working on. And similar self-serving tactics.
4Kaj_Sotala
In this context, yes, that's the only thing it means.

For simplicity, Occam's razor is often cited as "choose the simplest hypothesis" even when it's more appropriate to employ its original formulation: the principle that one should favor the explanation requiring the fewest assumptions.

I agree that less_schlong shouldn't be citing Occam's razor as some fundamental law of the universe, but I do think it's obvious that all things being equal, we should attempt to minimize speculative assumptions.