All of ShowMeTheProbability's Comments + Replies

I often share the feeling you have; I believe it's best characterised as 'fear/terror/panic' of the unknown.

Some undefined stuff is going to happen which may be scary, but there's no reason to think it will specifically be death rather than something else.

-1andrew sauer
Well, given that death is one of the least bad options here, that is hardly reassuring...

Great post. I loved the comprehensive breakdown and feel much more up to date.

Thanks!

The Sam Bankman-Fried mention reads differently now that his massive fraud with FTX is public; might it be worth a comment/revision?

I can't help but see Sam disagreeing with a message as a positive for the message (I know it's a fallacy, but the feeling's still there).

3habryka
Hmm, I feel like the revision would have to be in Scott's comment. I was just responding to the names that Scott mentioned, and I think everything I am saying here is still accurate.

From my perspective, you nailed the emotional vibe dead on. It's what I would've needed to hear (if I'd had the mental resources to process the warning properly before having a breakdown).

3Valentine
Good to know. Thanks for saying.

Thank you for writing this, Valentine. It is an important message and I am really glad someone is saying it.
I first got engaged with the community when I was in vulnerable life circumstances, and suffered major clinical distress fixated on many of the ideas I encountered here.

To be clear I am not saying rationalist culture was the cause of my distress, it was not. I am sharing my subjective experience that when you are silently screaming in internal agony, some of the ideas in this community can serve as a catalyst for a psychotic breakdown.

Assertion: An Ideal Agent never pays people to lie to them.

 

What if an agent has built a lie detector and wants to test it out? I expect that's a circumstance where you want someone to lie to you consistently and on demand.

What's the core real-world situation you are trying to address here?

4gwern
I can think instantly of at least two useful cases where a fully rational intelligent person, fully informed of the situation and premeditating it, would nevertheless still want to pay people to lie to them; and not in any tendentious meaning of 'lie' ("you pay artists to lie to you!"), but full outright deception in causing you to believe false facts about them, which you will then always believe*: pentesting and security testing where they deceive you into thinking they're authorized personnel etc, and 'randomized response technique' survey techniques on dangerous questions where a fraction of respondents are directed to eg flip a coin & lie to you in their response so you have false beliefs about each subject but can form a truthful aggregate.

* the pen testers might tell you their real names in the debrief, but don't have to and might not bother since it doesn't matter and you have bigger fish to fry; the survey-takers obviously never will. In neither case do you necessarily ever find out the truth, nor do you need to in order to benefit from the lies.
4eva_
Not sure what's unclear here? I mean that you'd generally prefer not to have incentive structures where you need true information from other people and they can benefit at your loss by giving you false information. Paying someone to lie to you means creating an incentive for them to actually deceive you, not merely giving them money to speak falsehoods.
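A minimal sketch of the randomized-response example gwern mentions above, assuming a simple Warner-style design in which each respondent answers truthfully with some known probability and lies otherwise (all function names and parameters here are illustrative, not taken from the thread). The point is that every individual response may be a lie, yet the surveyor can still recover the aggregate rate:

```python
import random

def randomized_response_survey(true_answers, p_truth=0.75, seed=0):
    """Simulate Warner-style randomized response: each respondent answers
    truthfully with probability p_truth and lies (answers the opposite)
    otherwise. Individual responses are unreliable by design."""
    rng = random.Random(seed)
    return [truth if rng.random() < p_truth else not truth
            for truth in true_answers]

def estimate_true_rate(responses, p_truth=0.75):
    """De-bias the aggregate. If pi is the true 'yes' rate, the observed
    rate is lam = p*pi + (1-p)*(1-pi), so pi = (lam - (1-p)) / (2p - 1)."""
    lam = sum(responses) / len(responses)
    return (lam - (1 - p_truth)) / (2 * p_truth - 1)

if __name__ == "__main__":
    # 10,000 simulated respondents, 30% of whom truly hold the sensitive attribute.
    truths = [random.random() < 0.30 for _ in range(10_000)]
    noisy = randomized_response_survey(truths)
    print(f"raw 'yes' rate:     {sum(noisy) / len(noisy):.3f}")   # biased toward 0.5
    print(f"de-biased estimate: {estimate_true_rate(noisy):.3f}")  # ~0.30
```

With these assumed numbers the raw 'yes' rate comes out around 0.40 (pulled toward 50%), while the de-biased estimate recovers roughly 0.30, even though the surveyor never learns which individual answers were lies.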

Thanks for the feedback!

I'll see if my random idea can be formalised in such a way as to constitute a (hard) test of cognition which is satisfying to humans.

The lack of falsification criteria for AGI (unresearched rant)

Situation: Lots of people are talking about AGI and AGI safety, but nobody can point to one. This is a Serious Problem, and a sign that you are confused.

Problem:

  • Currently proposed AGI tests are ad-hoc nonsense (https://intelligence.org/2013/08/11/what-is-agi/)
  • Historically, when these tests are passed, the goalposts are shifted (the Turing test was passed by fooling humans, which is incredibly subjective and relatively easy).

Solution:

  • A robust and scalable test of abstract cognitive ability.
  • A t
... (read more)
3Raemon
Becoming capable of building such a test is essentially the entire field of AI alignment. (Yes, we don't have the ability to build such a test and that's bad, but the difficulty lives in the territory. MIRI's previously stated goals were specifically to become less confused.)