
Comments

pseud

I agree there's nothing about consciousness specifically, but it's quite different from the hidden prompt used for GPT-4 Turbo in ways that are relevant: Claude is told to act like a person, while GPT is told that it's a large language model. But I do now agree that there's more to it than that (i.e., RLHF).

pseud

It's possibly just a matter of how it's prompted (the hidden system prompt). I've seen similar responses from GPT-4-based chatbots.
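
For illustration, here's a minimal sketch of what I mean, using the OpenAI Python SDK. The persona strings are invented (not the actual hidden prompts); the point is just that the same underlying model can respond very differently depending on the system message:

```python
# Minimal sketch: same model, two different (invented) system prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(system_prompt: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# A GPT-style framing vs. a Claude-style framing (both hypothetical):
print(ask("You are a large language model trained by OpenAI.", "Are you conscious?"))
print(ask("Respond as a thoughtful person would.", "Are you conscious?"))
```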

pseud

The cited markets often don't support the associated claim. 

pseud

"This question will resolve in the negative to the dollar amount awarded"

This is a clear, unambiguous statement.

If we can't agree even on that, we have little hope of reaching any kind of satisfying conclusion here.

Further, if you're going to accuse me of making things up (which I think is, in this case, a violation of the sensible frontpage commenting guideline "If you disagree, try getting curious about what your partner is thinking"), then I doubt it's worth continuing this conversation.

pseud

"Metaculus questions have a good track record of being resolved in a fair manner."

Do they? My experience has been the opposite. E.g. admins resolved "[Short Fuse] How much money will be awarded to Johnny Depp in his defamation suit against his ex-wife Amber Heard?" in an absurd manner* and refused to correct it when I followed up on it.

*They resolved it to something other than the amount awarded to Depp, despite that amount being the answer to the question and the correct resolution according to the resolution criteria.

pseud

My comment wasn't well written; I shouldn't have used the word "complaining" in reference to what Said was doing. To clarify:

As I see it, there are two separate claims:

  1. That the complaints prove that Said has misbehaved (at least a little bit)
  2. That the complaints increase the probability that Said has misbehaved 

Said was just asking questions - but baked into his questions is the idea that the complaints are significant, and that significance seems tied to claim 1.

Jefftk seems to be speaking about claim 2, so his comment doesn't seem like a direct response to Said's, although the point is still a relevant one.
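
To make the distinction concrete, here is a minimal Bayes-rule sketch; all the numbers are invented purely for illustration. Claim 2 only says the complaints should shift the probability upward, which is much weaker than claim 1's assertion that they settle the question:

```python
# Illustrative Bayes update: complaints as evidence, not proof.
# All probabilities below are made-up numbers for the sake of the example.
prior = 0.10                   # P(misbehaved) before seeing any complaints
p_complaints_given_yes = 0.80  # P(complaints | misbehaved)
p_complaints_given_no = 0.30   # P(complaints | did not misbehave)

# Bayes' rule: P(misbehaved | complaints)
posterior = (p_complaints_given_yes * prior) / (
    p_complaints_given_yes * prior + p_complaints_given_no * (1 - prior)
)
print(f"{posterior:.2f}")  # ~0.23: above the 0.10 prior, but nowhere near certainty
```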

pseud

It didn't seem like Said was complaining about the reports being seen as evidence that it is worth figuring out whether things could be better. Rather, he was complaining about them being used as evidence that things could be better.

pseud

It's probably worth noting that Yudkowsky did not really make the argument for AI risk in his article. He says that AI will literally kill everyone on Earth, and he gives an example of how it might do so, but he doesn't present a compelling argument for why it would.[0] He does not even mention orthogonality or instrumental convergence. I find it hard to blame the various internet figures who were unconvinced about AI risk upon reading the article.

[0] He does quote “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

pseud

I'd prefer my comments to be judged simply by their content rather than have people's interpretation coloured by some badge. Presumably, the change is part of trying to avoid death-by-pacifism during an influx of users post-ChatGPT. I don't disagree with the motivation behind the change; I just dislike the change itself. I don't like being a second-class citizen. It's unfun. Karma is fun; "this user is below an arbitrary karma threshold" badges are not.

A badge placed on all new users for a set time would be fair. A badge placed on users with more than a certain amount of karma could be fun. The current badge seems unfun - but perhaps I'm alone in thinking this.
