TsviBT · 3d · 40

> This practice doesn't mean excusing bad behavior. You can still hold others accountable while taking responsibility for your own reactions.

Well, what if there's a good piece of code (if you'll allow the crudity) in your head, and someone else's bad behavior is aimed at hacking/exploiting that piece of code? The harm done is partly due to that piece of code and the role it plays in your reaction to their bad behavior. But the implication is that they should stop their bad behavior, not that you should get rid of the good code. I believe you'll respond: "Ah, but you see, there are more than two options. You can change yourself in ways other than just deleting the code. You could recognize how the code is actually partly good and partly bad, and refactor it; you could add other code to respond skillfully to their bad behavior; and you could add other code to help them correct their behavior." Which I totally agree with, but at this point, what's being communicated by "taking self-blame" other than, at best, "reprogram yourself in Good/skillful ways" or, more realistically, "acquiesce to abuse"?

TsviBT · 22d · Ω562

IDK if this is a crux for whether I think this is very relevant from my perspective, but:

The training procedure you propose doesn't seem to actually incentivize indifference. First, a toy model where I agree it does incentivize indifference:

> On the first time step, the agent gets a choice: choose a number k from 1 to N. If the agent says k, then it has nothing at all to do for the first k steps, after which some game G starts. (Each play of G is i.i.d., not related to k.)
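
A minimal sketch of that payoff structure (play_G and the reward scale are made-up stand-ins), just to make the indifference over k concrete:

```python
import random

def play_G() -> float:
    # Stand-in for the game G: each play is i.i.d. and
    # does not depend on the chosen k.
    return random.gauss(0.0, 1.0)

def episode_return(k: int) -> float:
    # k idle steps paying nothing, then one run of G.
    idle = 0.0 * k
    return idle + play_G()

# Monte Carlo check: expected return is the same for every k,
# so reward gives the agent no reason to prefer any particular k.
for k in (1, 5, 10):
    mean = sum(episode_return(k) for _ in range(100_000)) / 100_000
    print(k, round(mean, 3))
```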

So this agent is indeed incentivized to pick k uniformly at random from 1 to N. Now consider:

> The agent is in a rich world. There are many complex multi-step tasks that incentivize the agent to learn problem-solving. Each episode, at time N, the agent gets to choose: end now, or play 10 more steps.

Does this incentivize random choice at time N? No. It incentivizes the agent to randomly choose End or Continue at the very beginning of the episode, and then to carefully plan and execute behavior that achieves the most reward assuming a run of length N or N+10, respectively.
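
A sketch of the kind of policy this selects for (best_plan and the other names are hypothetical stand-ins): all the randomness is spent in a single coin flip at the start, and everything afterward, including the button press at time N, is deterministic plan execution:

```python
import random

def best_plan(horizon: int) -> list[str]:
    # Stand-in for "carefully plan for a run of exactly `horizon` steps";
    # in the real setting this is the reward-maximizing plan.
    return [f"step {t} of the optimal {horizon}-step plan" for t in range(horizon)]

def episode_policy(N: int) -> list[str]:
    # End-vs-Continue is resolved up front by one coin flip...
    horizon = N if random.random() < 0.5 else N + 10
    # ...and the agent then deterministically executes the plan for that
    # horizon. At time N it presses End or Continue as the plan dictates;
    # a random press there would wreck whichever plan it committed to.
    return best_plan(horizon)
```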

Wait, but isn't this success? Didn't we make the agent have no trajectory length preference?

No. Suppose:

> Same as before, but now there's a little guy standing by the End/Continue button. Sometimes he likes to press the button at random.

Do we kill the guy? Yes, we certainly do: he will mess up our careful plans.
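
To spell out the incentive in the same toy terms (the numbers are made up): any chance of a stray button press lowers the expected value of whichever plan the agent committed to, so removing the guy pays:

```python
def expected_return(p_press: float,
                    planned_value: float = 10.0,
                    disrupted_value: float = 2.0) -> float:
    # With probability p_press the little guy hits the button and the
    # committed plan is cut short (or dragged past its horizon).
    return (1 - p_press) * planned_value + p_press * disrupted_value

print(expected_return(0.3))  # guy at the button:  7.6
print(expected_return(0.0))  # guy removed:       10.0
```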

TsviBT · 1mo · 71

> Bad restaurants are more likely to have open tables than good restaurants.

That seems to depend on it being difficult to scale the specific skill that went into putting together the experience at the good restaurant. Things that are more scalable, like small consumer products, can be selected to be especially good trades (the bad ones don't get popular and inexpensive).

TsviBT · 1mo · 20

Bruh. Banana Laffy Taffy is the best. Happy to trade away non-banana to receive banana, 1:1.

TsviBT · 3mo · 3-3

The point of the essay is to describe the context that would make one want a hyperphone, so that

  1. one can be motivated by the possibility of a hyperphone, and

  2. one can get hold of the criteria that would guide the development of a good hyperphone.

The phrase "the ability to branch in conversations" doesn't do either of those.
