ShowMeTheProbability

I often share the feeling you have; I believe it's best characterised as 'fear/terror/panic' of the unknown.

Some undefined stuff is going to happen which may be scary, but there's no reason to think it will specifically be death rather than something else.

Great post; I loved the comprehensive breakdown and feel much more up to date.

Thanks!

The Sam Bankman-Fried section reads differently now that his massive fraud at FTX is public; might be worth a comment/revision?

I can't help but see Sam disagreeing with a message as a positive for the message (I know it's a fallacy, but the feeling's still there).

From my perspective, you nailed the emotional vibe dead on. It's what I would've needed to hear (if I'd had the mental resources to process the warning properly before having a breakdown).

Thank you for writing this, Valentine. It is an important message and I am really glad someone is saying it.
I first got engaged with the community when I was in vulnerable life circumstances, and suffered major clinical distress fixated on many of the ideas I encountered here.

To be clear, I am not saying rationalist culture was the cause of my distress; it was not. I am sharing my subjective experience that when you are silently screaming in internal agony, some of the ideas in this community can serve as a catalyst for a psychotic breakdown.

Assertion: An Ideal Agent never pays people to lie to them.

What if an agent has built a lie detector and wants to test it out? I expect that's a circumstance where you want someone to lie to you consistently and on demand.
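
To make that concrete, here is a toy sketch in Python (the detector and its error rates are invented for illustration): paying a cooperative liar buys you labelled lies, and labelled lies are exactly what you need to estimate the detector's true-positive rate.

```python
import random

random.seed(0)

def lie_detector(statement_is_lie: bool) -> bool:
    """Hypothetical detector: assume it flags lies 80% of the time
    and false-alarms on truths 10% of the time."""
    return random.random() < (0.8 if statement_is_lie else 0.1)

# Pay a cooperative liar for labelled samples: alternating lies and truths.
samples = [i % 2 == 0 for i in range(1000)]  # True = lie, False = truth
flags = [lie_detector(is_lie) for is_lie in samples]

lie_flags = [f for is_lie, f in zip(samples, flags) if is_lie]
truth_flags = [f for is_lie, f in zip(samples, flags) if not is_lie]

# Without paid, on-demand lies the first column is unobservable,
# so the true-positive rate cannot be estimated at all.
print(f"True-positive rate:  {sum(lie_flags) / len(lie_flags):.2f}")
print(f"False-positive rate: {sum(truth_flags) / len(truth_flags):.2f}")
```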

What's the core real-world situation you are trying to address here?

Thanks for the feedback!

I'll see if my random idea can be formalised in such a way as to constitute a (hard) test of cognition which is satisfying to humans.

The lack of falsification criteria for AGI (unresearched rant)

Situation: Lots of people are talking about AGI and AGI safety, but nobody can point to one. This is a Serious Problem, and a sign that you are confused.

Problem:

  • Currently proposed AGI tests are ad-hoc nonsense (https://intelligence.org/2013/08/11/what-is-agi/).
  • Historically, when these tests are passed the goalposts are shifted (the Turing test was passed by fooling humans, which is incredibly subjective and relatively easy).

Solution:

  • A robust and scalable test of abstract cognitive ability (a rough sketch of what the interface could look like follows this list).
  • A test that could be passed by a friendly AI in such a way as to communicate co-operative intent, without all the humans freaking out.
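
To gesture at what 'robust and scalable' could mean in practice, here is a minimal sketch (the class, field names, and thresholds are my own invention, not an existing benchmark). The load-bearing property is that the task generator, scorer, and pass threshold are all pre-registered, so the goalposts cannot move after a system is evaluated.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class CognitionTest:
    """Hypothetical pre-registered test: every field is fixed before
    any system is evaluated, so the goalposts cannot move afterwards."""
    name: str
    generate_task: Callable[[int], str]   # difficulty level -> task prompt
    score: Callable[[str, str], float]    # (task, answer) -> score in [0, 1]
    pass_threshold: float                 # committed to in advance
    max_difficulty: int                   # the 'scalable' knob

    def evaluate(self, agent: Callable[[str], str]) -> bool:
        """Falsifiable verdict: the agent either clears the threshold
        at every difficulty level or it does not."""
        return all(
            self.score(task, agent(task)) >= self.pass_threshold
            for task in (self.generate_task(d)
                         for d in range(1, self.max_difficulty + 1))
        )

# Example: a trivial placeholder instance (arithmetic, not real cognition).
test = CognitionTest(
    name="toy-arithmetic",
    generate_task=lambda d: f"{d} + {d}",
    score=lambda task, ans: 1.0 if ans == str(eval(task)) else 0.0,
    pass_threshold=1.0,
    max_difficulty=10,
)
print(test.evaluate(lambda task: str(eval(task))))  # True
```

The toy arithmetic instance is obviously not a test of cognition; the design point is that `score` and `pass_threshold` are committed to up front, which is what rules out the Turing-test-style goalpost shifting described above.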

Would anyone be interested in such a test so that we can detect the subject of our study?
