Sequences

Re-reading Rationality From AI To Zombies
Reflections on Premium Poker Tools

Comments

Kudos for writing this post. I know it's promotional/self-interested, but I think that's fine. It's also pro-social. Having the rule/norm to encourage this type of post seems unlikely to be abused in a net-negative sort of way (assuming some reasonable restrictions are in place).

What are your goals? Money? Impact? Meaning? To what extent?

I think it'd also be helpful to elaborate on your skillset. Front end? Back end? Game design? Mobile apps? Design? Product? Data science?

I'll provide a dissenting perspective here. I actually came away from reading this feeling like Metz's position is maybe fine.

Everybody saw it. This is an influential person. That means he's worth writing about. And so once that's the case, then you withhold facts if there is a really good reason to withhold facts. If someone is in a war zone, if someone is really in danger, we take this seriously.

It sounds like he's saying that the Times' policy is that you only withhold facts if there's a "really" good reason to do so. I'm not sure what type of magnitude "really" implies, but I could see the amount of harm at play here falling well below it. If so, then Metz is in a position where his employer has a clear policy and doing his job involves following that policy.

As a separate question, we can ask whether "only withhold facts in warzone-type scenarios" is a good policy. I lean moderately strongly away from thinking it's a good policy. It seems to me that you can apply some judgement and be more selective than that.

However, I have a hard time moving from "moderately strongly" to "very strongly". To make that move, I'd need to know more about the pros and cons at play here, and I just don't have that good an understanding. Maybe it's a "customer support reads off a script" type of situation: if you let the employee use their judgement, most of the time it'll probably be fine, but once in a while they'll do something dumb enough that it's not worth letting them use it. Or maybe journalists wouldn't be dumb if they were able to use judgement here, but would use that power to do bad things.

I dunno. Just thinking out loud.

Circling back around, suppose hypothetically that the Times does have an "only withhold facts in a warzone-type scenario" policy, that we know this is a bad and overall pretty harmful policy, and that Metz understands and agrees with all of this. What should Metz do in this hypothetical situation?

I feel unclear here. On the one hand, it's icky to be a part of something unethical and harmful like that, and if it were me I wouldn't want to live my life like that, so I'd want to quit my job and do something else. But on the other hand, there are various personal reasons why quitting your job might be tough. It's also possible that he should take a loss here with the doxing so that he is in a position to do some sort of altruistic thing.

Probably not. He's probably in the wrong in this hypothetical situation if he goes along with the bad policy. I'm just not totally sure.

I strongly suspect that spending time building features for rate limited users is not valuable enough to be worthwhile. I suspect this mainly because:

  1. There aren't a lot of rate limited users who would benefit from it.
  2. The value that the rate limited users receive is marginal.
  3. It's unclear whether doing things that benefit users who have been rate limited is a good thing.
  4. I don't see any second-order effects that would make it worthwhile, such as non-rate-limited people seeing these features and being more inclined to be involved in the community because of them.
  5. There are lots of other very valuable things the team could be working on.

Hm, good points.

I didn't mean to propose the difficulty frame as the answer to what complexity is really about. Although I'm realizing now that I kinda wrote it in a way that implied that.

I think what I'm going for is that "theorizing about theorizers" seems to be pointing at something more akin to difficulty than at truly caring about whether the collection of parts theorizes. But I expect that if you poke at the difficulty frame you'll come across issues (like you have begun to see).

I actually never really understood More Dakka until listening to the song!

I spent a bit of time reading the first few chapters of Complexity: A Guided Tour. The author (also at the Santa Fe Institute) claimed that, basically, everyone has their own definition of what "complexity" is, the definitions aren't even all that similar, and the field of complexity science struggles because of this.

However, she also noted that it's nothing to be (too?) ashamed of: other fields have been in similar positions and have come out ok, and we shouldn't rush to "pick a definition and move on".

We have to theorize about theorizers and that makes all the difference.

That doesn't really seem to me to hit the nail on the head.

I get the idea of how in physics, if billiard balls could think and decide what to do, it'd be much tougher to predict what would happen. You'd have to think about what they would think.

With humans, on the other hand, that's exactly the situation we're in: if one human does something to another, then to predict what the second human will do we need to think about what the second human is thinking. Which can be difficult.

Let's abstract this out. Instead of billiard balls and humans we have parts. Well, really we have collections of parts: a billiard ball isn't one part, it consists of many atoms, which are themselves parts. So the question is what one collection of parts will do after it is influenced by some other collection of parts.

If the system of parts can think and act, that makes it difficult to predict what it will do, but thinking isn't the only thing that can make prediction difficult. It sounds to me like difficulty is the essence here, not necessarily thinking.

For example, in physics suppose you have one fluid that comes into contact with another fluid. It can be difficult to predict whether things like eddies or vortices will form. And this happens despite the fact that there is no "theorizing about theorizers".

Another example: it is often actually quite easy to predict what a human will do even though that involves theorizing about a theorizer. For example, if Employer stopped paying John Doe his salary, I'd have an easy time predicting that John Doe would quit.

The subtext here seems to be that such references are required. I disagree that they should be.

References are frequently helpful but also often a pain to dig up, so there are tradeoffs at play. For this post, I think it was fine to omit them. I don't think the references would add much value for most readers, and I suspect Romeo wouldn't have found it worthwhile to post if he had to dig up all of the references before being able to post.

Ah yeah, that makes sense. I guess utility isn't really the right term to use here.

Yeah, I echo this.

I've gone back and forth with myself about this sort of stuff. Are humans altruistic? Good? Evil?

On the one hand, yes, I think lc is right about how in some situations people exhibit just an extraordinary amount of altruism and sympathy: they'll, I dunno, jump into a lake at a risk to their own life to save a drowning stranger, or risk their lives running into a burning building to save strangers (lots of volunteers did this during 9/11). But on the other hand, there are other situations where people do the opposite.

I think the explanation is what Dagon is saying about how mutable and context-dependent people are. In some situations people will act extremely altruistically. In others they'll act extremely selfishly.

The way that I like to think about this is in terms of "moral weight". How many utilons to John Doe would it take for you to give up one utilon of your own? Like, would you trade 1 utilon of your own so that John Doe can get 100,000 utilons? 1,000? 100? 10? Answering these questions, you can come up with "moral weights" to assign to different types of people. But I think that people don't really assign a moral weight and then act consistently. In some situations they'll act as if their answer to my previous question is 100,000, and in other situations they'll act like it's 0.00001.
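
To make that a bit more concrete, here's a minimal sketch of the trade-off (in Python; the names `moral_weight` and `accepts_trade` are just illustrative, not anything standard): a weight of w means you treat w of John Doe's utilons as worth one of your own, so your answer to the "how many utilons to John Doe" question is roughly 1/w.

```python
def accepts_trade(moral_weight: float, own_cost: float, other_gain: float) -> bool:
    """Accept giving up `own_cost` of your own utilons so the other person
    gains `other_gain` utilons, if the weighted gain exceeds your cost."""
    return moral_weight * other_gain > own_cost


# The trades from above: give up 1 utilon so John Doe gains 100,000 / 1,000 / 100 / 10.
# With a moral weight of 1/100, you'd only accept when John Doe gains more than 100.
for other_gain in (100_000, 1_000, 100, 10):
    print(other_gain, accepts_trade(moral_weight=1 / 100, own_cost=1, other_gain=other_gain))
```

My point is then just that people don't seem to carry around one stable moral_weight: the weight you'd infer from someone's behavior in one situation can be wildly different from the one you'd infer in another.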
