Jiro20

There are setups where each agent is using a nonphysically large but finite amount of compute.

In a situation where you are asking a question about an ideal reasoner, making the agents finite means you are no longer asking about an ideal reasoner. If you put an ideal reasoner in a Newcomb problem, he may very well think "I'll simulate Omega and act according to what I find" (or, more likely, run some more complicated algorithm that indirectly amounts to that). If the agent can't do this, he may not be able to solve the problem. Of course, real humans can't, but this may just mean that real humans, because they are finite, are unable to solve some problems.

JiroΩ160

I get the impression that "has the agent's source code" is some Yudkowskyism which people use without thinking.

Every time someone says that, I always wonder "are you claiming that the agent that reads the source code is able to solve the Halting Problem?"
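
For concreteness, here is a toy Python sketch of the standard diagonalization behind that worry. The names (`predictor`, `contrarian_agent`) and the stub predictor are my own illustration, not anyone's actual proposal: any claimed function from an agent's source code to its action can be defeated by an agent that reads its own source and does the opposite of whatever is predicted.

```python
# A toy version (my own illustration) of the diagonalization behind the
# Halting Problem worry: given ANY claimed function from an agent's source
# code to its action, we can write an agent that reads its own source, asks
# the predictor, and then does the opposite.

import inspect

def predictor(agent_source: str) -> str:
    # Stand-in for a claimed perfect predictor. A real one would have to
    # analyze arbitrary code; this stub just returns a fixed guess so the
    # example runs.
    return "one-box"

def contrarian_agent() -> str:
    # Read my own source, ask the predictor what I'll do, and do the opposite.
    my_source = inspect.getsource(contrarian_agent)
    prediction = predictor(my_source)
    return "two-box" if prediction == "one-box" else "one-box"

if __name__ == "__main__":
    source = inspect.getsource(contrarian_agent)
    print("predictor says:", predictor(source))    # one-box
    print("agent does:    ", contrarian_agent())   # two-box: the predictor was wrong
```

No change to `predictor` fixes this; whatever it outputs, the agent does the other thing. A merely finite or approximate predictor escapes the contradiction, but then we're back to non-ideal reasoners.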

Jiro42

You cannot spend anything on yourself, you must give all your money away, and live in a tiny cottage eating bland, cheap foods.

But effective altruism doesn’t say that. If it did, that would be pretty weird, as no EAs do that.

My objection to EA on these grounds is not that EA says that; it's that the principles of EA imply it. While it's true that EA doesn't say that, it seems to be making unprincipled exceptions in not saying it.

No one—and I seriously mean no one—is suggesting that you should become a mafia boss so that you can donate the money.

This also falls under "EA may not say that, but it doesn't seem to have a principled reason why it doesn't imply that". It's entirely possible that you destroy less utility by doing Mafia-type things than you gain by donating what you get from your Mafia activities.

Here it seems like you should save the child.

But now imagine that rather than the child being near, they’re far away.

My answer to this is that in any drowning child situation that is realistic enough that our intuitions apply, the fact that the child is near places limits on how much we have to spend, and the fact that there are limits to helping people near us is one of the reasons why helping people near us is a reasonable policy.

And it doesn’t seem like the fact that they’re our countrymen is itself relevant, because then it would be more important to help a person after they’ve immigrated here than before they have.

If you believe there should be limits on immigration, and that there's no need to help people who have immigrated in a way bypassing those limits, this is not a problem. This seems to be circular reasoning: EA ideas of helping everyone equally are used to justify open borders in the first place, and then EA says stuff like this which tries to bootstrap open borders into a justification for EA.

Jiro62

One way to see how good different charities are is to imagine that after you died, you had to live the life of every creature on earth.

This implies that we should eradicate as many such species as we can, because creatures that don't exist don't count for this.

Jiro40

My own heresy is that I don't have a true rejection. Many of my beliefs rest on an accumulation of evidence, and there's no single item which would disprove my position. And talking about a "true rejection" is really an attempt to create a gotcha that forces someone to change their position, without allowing for things such as accumulation of evidence or the possibility that they misphrased the rejection.

I also think rationalists shouldn't bet, but that probably deserves its own post.

Jiro1612

This is the core reason why it is so difficult for ordinary people to pay their bills or raise families, despite earnings that would make them rich elsewhere or elsewhen. These productive actions are severely restricted, because if you are going to be productive then you have to do so ‘correctly’ and obey all sorts of rules and requirements.

There are plenty of good things that aren't restricted and bad things that are. But elites are human and aren't going to get it right every time, and you'll mostly notice the cases where they got it wrong.

Jiro20

No, because I have no way to improve my ability to see loopholes and flaws, so there's always going to be residual uncertainty that can't be reduced. Risk aversion does the rest.

Jiro61

Sorry, risk aversion.

Also, the usual situation of "if I think the main proposition is unlikely, bad outcomes will be dominated by cases where I miss loopholes in the bet or otherwise lose the bet for reasons unrelated to the truth of the proposition".
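
To make that concrete with purely illustrative numbers (nothing below is anyone's actual estimate), here is a quick decomposition of where the downside risk of such a bet comes from:

```python
# Purely illustrative numbers (my own, not anyone's actual estimates):
# a decomposition of where the downside risk of the bet comes from when I
# think the proposition itself is unlikely but I might have missed a loophole.

p_proposition = 0.02  # chance the proposition is actually true
p_loophole    = 0.10  # chance I lose anyway: missed loophole, bad wording, etc.

p_lose_fairly   = p_proposition
p_lose_loophole = (1 - p_proposition) * p_loophole

print(f"P(lose because the proposition was true): {p_lose_fairly:.3f}")    # 0.020
print(f"P(lose for reasons unrelated to it):      {p_lose_loophole:.3f}")  # 0.098
print(f"P(lose overall):                          {p_lose_fairly + p_lose_loophole:.3f}")  # 0.118
```

Even with these made-up numbers, most of the probability of losing comes from the loophole term rather than from the proposition being true, and risk aversion then does the rest.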

Jiro50
  • 1 comment / hour
  • i.e. slow down a little; you may be getting into a heated exchange.

I just got screwed over by this. It strikes me as insane to be rate-limited in order to "slow down" and avoid a "heated exchange" when the limit is applied by checking the user's karma on their last 20 comments from any time period. Twenty comments from me go back months; I can't get much slower. And 3 downvoters and -1 karma is an awfully small threshold, and easy to hit just out of bad luck.
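
For illustration, a rough sketch of the check as I understand it; the names, thresholds, and structure below are my own guesses at what's described above, not the site's actual code:

```python
# A rough sketch (names and thresholds are my guesses at the rule described
# above, not the site's actual code) of why a slow commenter can't escape
# the limit: the check looks at the last 20 comments by COUNT, not by date.

from dataclasses import dataclass
from typing import List

@dataclass
class Comment:
    karma: int       # net karma on this comment
    downvoters: int  # number of distinct users who downvoted it

def is_rate_limited(recent: List[Comment]) -> bool:
    # 'recent' is the user's comments, newest first, however old they are.
    last_20 = recent[:20]
    net_karma = sum(c.karma for c in last_20)
    downvoters = sum(c.downvoters for c in last_20)
    return net_karma <= -1 and downvoters >= 3

# Example: 20 comments spread over months, three of them downvoted once each,
# is already enough to trip the 1-comment-per-hour limit.
history = [Comment(karma=0, downvoters=0)] * 17 + [Comment(karma=-1, downvoters=1)] * 3
print(is_rate_limited(history))  # True
```

The key point is that `recent[:20]` selects by count, not by date, so for an infrequent commenter those 20 comments can span months and the limit never ages out.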

Also, if you post from greaterwrong and you hit the limit, your comment and the effort spent writing it are simply lost.

Jiro20

But some of the writing in those old sacred texts is actually really good.

The Halo Effect (pun not intended) is very strong here.

For some reason, I seldom see people who say things like this also say that the writing in the Koran is actually really good. And it's not because of some difference in how good the writing in the Koran is compared to the Bible.
