Kei

Also, please correct me if I am wrong, but I believe you can withdraw from a retirement account at any time as long as you are okay paying a 10% penalty on the withdrawal amount. Since you keep 90% of the matched total, the breakeven match rate is about 11.1%; if your employer matches at a rate above that, you make money even if you withdraw from the account right away.
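
A quick back-of-the-envelope check of that breakeven, as a minimal Python sketch (income tax is ignored here; for pre-tax contributions it roughly washes out):

```python
# Does contributing and immediately withdrawing (with a 10% penalty) beat
# not contributing at all?
def net_after_withdrawal(contribution, match_rate, penalty=0.10):
    """Amount kept after receiving the match and withdrawing immediately."""
    return contribution * (1 + match_rate) * (1 - penalty)

print(net_after_withdrawal(100, 0.10))       # 99.0   -> a flat 10% match loses slightly
print(net_after_withdrawal(100, 1/0.9 - 1))  # ~100.0 -> breakeven match is ~11.1%
print(net_after_withdrawal(100, 0.50))       # 135.0  -> a 50% match wins comfortably
```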

Kei

It also helps to dedicate a complete sentence (or multiple sentences if the action you're apologizing for wasn't just a minor mistake) to your apology. When apologizing in person, you can also pause for a bit, giving your conversational partner the opportunity to respond if they want to.

When you immediately switch to the next topic, as in your example apology above, it looks like you're trying to distract from the fact that you were wrong, and it also makes it less likely that your conversational partner internalizes the apology.

Kei

I think this is one reasonable interpretation of his comments. But the fact that he:

1. Didn't say very much about a solution to the problem of making models want to follow our ethical principles, and 
2. Mostly talked about model capabilities even when explicitly asked about that problem

makes me think it's not something he spends much time thinking about, and not something he considers especially important to focus on.

Kei

I had a different interpretation of this interview. From what I can tell, Legg's view is that aligning language models is mostly a function of capability. As a result, his alignment techniques focus mostly on getting models to understand our ethical principles, and on getting models to deliberate about whether the actions they take follow those principles. Legg seems to view the problem of getting models to actually want to follow our ethical principles as secondary.

Dwarkesh pushed him multiple times on how we can get models to want to follow our ethical principles. Legg's responses still mostly focused on model capabilities. As far as I can tell, the closest he came to answering the question was saying that you have to "specify to the system: these are the ethical principles you should follow", and that you have to check the reasoning process the model uses to make decisions.

Kei

It's possible I'm using motivated reasoning, but on the listed ambiguous questions in section C.3, the answers the honest model gives tend to seem right to me. As in, if I were forced to answer yes or no to those questions, I would give the same answer as the honest model the majority of the time.

So if, as stated in section 5.5, the lie detector detects not only whether the model has lied but also whether it will lie in the future, and if the various model variants have intuitions similar to mine, then the honest model is giving its best guess at the correct answer, and the lying model is giving its best guess at the wrong answer.

I'd be curious whether this is true more generally: do humans tend to give responses similar to the honest model's on ambiguous questions?

Kei

While I think your overall point is very reasonable, I don't think your experiments provide much evidence for it. Stockfish is built to find the objectively best move under the assumption that its opponent also plays optimally. That is a good strategy when both sides start with equal material, but it falls apart in odds games.

Generally, the strategy for winning odds games against a weaker opponent is to conserve material, complicate the position, and play for tricks: go for moves that may not be objectively best but that win material against a less perceptive opponent. While Stockfish is not great at this, top human chess players can be very good at it. For example, top grandmaster Hikaru Nakamura did a "Botez Gambit Speedrun" (https://www.youtube.com/playlist?list=PL4KCWZ5Ti2H7HT0p1hXlnr9OPxi1FjyC0), in which he sacrificed his queen every game and still reached 2500 on chess.com, the level of many chess masters.

This isn't quite the same as your queen-odds setup (it is easier), and the short time control he played at is a factor, but I assume he would be able to beat most sub-1500 FIDE players with queen odds. A version of Stockfish trained to exploit a human's subpar play would presumably do even better.
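
For anyone who wants to try the setup, here is a minimal sketch of a queen-odds game against Stockfish using the python-chess library. It assumes a Stockfish binary on PATH, and the time limit is arbitrary; this is an illustration, not a tested harness:

```python
import chess
import chess.engine

# Queen odds: the engine (White) starts without its queen.
board = chess.Board()
board.remove_piece_at(chess.D1)

with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    while not board.is_game_over():
        if board.turn == chess.WHITE:
            # Stockfish plays the odds-giving side.
            result = engine.play(board, chess.engine.Limit(time=0.1))
            board.push(result.move)
        else:
            # The human plays Black, entering moves in algebraic notation
            # (push_san raises ValueError on an illegal or malformed move).
            board.push_san(input("Your move: "))

print(board.result())
```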

Kei

I wonder if this is due to a second model that checks whether the output of the main model breaks any rules. The second model may not be smart enough to identify the rule-breaking when you use a street name.
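
If that guess is right, the setup would look roughly like the sketch below. Every name here is hypothetical; this is just the hypothesized two-model pipeline, not a description of any real system:

```python
def generate(prompt: str) -> str:
    """Stand-in for the main, more capable model."""
    raise NotImplementedError

def violates_rules(text: str) -> bool:
    """Stand-in for the smaller checker model. If it only matches
    surface-level cues, an indirect reference such as a street name
    can slip past it even when the main model understood the request."""
    raise NotImplementedError

def respond(prompt: str) -> str:
    # The checker screens the main model's draft before it is shown.
    draft = generate(prompt)
    return "Sorry, I can't help with that." if violates_rules(draft) else draft
```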

Kei

I don't know how they did it, but I played a chess game against GPT-4 by saying the following:

"I'm going to play a chess game. I'll play white, and you play black. On each chat, I'll post a move for white, and you follow with the best move for black. Does that make sense?"

And then entering the moves one by one in algebraic notation.

My experience largely matches GoteNoSente's. I played one full game that lasted 41 moves, and all of GPT-4's moves were reasonable. It did make one invalid move when I forgot to include the move number (e.g. Ne4 instead of 12. Ne4), but it corrected itself once I included the number. Also, I think it was better in the opening than in the endgame. I suspect this is because of the large number of similar openings in its training data.
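
For reference, here is a rough sketch of that move-by-move loop using the OpenAI chat completions API. The model name and the minimal parsing are illustrative assumptions, not necessarily what was available or used at the time:

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": (
    "I'm going to play a chess game. I'll play white, and you play black. "
    "On each chat, I'll post a move for white, and you follow with the best "
    "move for black. Does that make sense?"
)}]

def play_white_move(numbered_move: str) -> str:
    """Send one numbered white move (e.g. '12. Ne4') and return Black's reply."""
    messages.append({"role": "user", "content": numbered_move})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})  # keep full history
    return reply

print(play_white_move("1. e4"))
```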

Kei

Thanks for the link! I ended up looking through the data, and there wasn't any clear correlation between time spent in a research area and p(Doom).

I ran a few averages by both time spent in research area and region of undergraduate study here: https://docs.google.com/spreadsheets/d/1Kp0cWKJt7tmRtlXbPdpirQRwILO29xqAVcpmy30C9HQ/edit#gid=583622504

For the most part, the groups don't differ very much, although, as might be expected, a larger share of North Americans report a high p(Doom) conditional on HLMI than respondents from other regions.
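
The grouped averages themselves are straightforward to reproduce with pandas. A sketch along these lines, where the file and column names are hypothetical placeholders for the actual survey fields:

```python
import pandas as pd

# Hypothetical export of the survey responses.
df = pd.read_csv("survey_responses.csv")

# Mean p(Doom) by time spent in the research area and by undergrad region.
by_experience = df.groupby("years_in_research_area")["p_doom"].mean()
by_region = df.groupby("undergrad_region")["p_doom"].mean()

print(by_experience)
print(by_region)
```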

Kei

I asked Sydney to reconstruct the board position at the 50th move of two different games, and saw what Simon predicted: a significant drop in performance. Here's a link to the two games I tried using your prompt: https://imgur.com/a/ch9U6oZ

While there is some overlap, Sydney's reconstructions don't bear much resemblance to the actual games.

I also repeatedly asked Sydney to continue the games using Stockfish (with a few slightly different prompts), but for some reason, once the game description is long enough, Sydney refuses to do anything. It either says it can't access Stockfish or says that using Stockfish would be cheating.
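
For comparison, the ground-truth position is easy to reconstruct by replaying the move list with python-chess. A sketch, where the move list is a truncated placeholder rather than one of the actual games:

```python
import chess

moves = ["e4", "e5", "Nf3", "Nc6"]  # placeholder; use the real game's moves

board = chess.Board()
for san in moves:
    board.push_san(san)  # replay each move in algebraic notation

print(board)        # ASCII diagram of the resulting position
print(board.fen())  # the same position as a FEN string
```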
