Questions are not a problem; the obligation to answer is a problem.

I think if any interaction becomes cheap enough, it can be a problem.

Let's say I want to respond to roughly 5 to 10 high-effort questions (questions where the askers have done background research and spent some time checking their wording so it's easy to understand). If I receive 8 high-effort questions and 4 low-effort questions, that's fine: it's not hard to read them all and determine which ones I want to respond to.

But what if I receive 10 high-effort questions and 1000 low-effort questions? Then the low-effort questions impose a significant cost on me, purely because I have to spend effort filtering them out to reach the ones I want to respond to.

My desire to participate in answering questions, coupled with an incredibly cheap question-asking process, is enough to impose high costs on me. (Setting up some kind of automated spam filter is itself a cost, and it leads to the kind of arms race we currently see with email spam, with each automated system trying to outsmart the other.)

I think it might be a good idea to classify a "successful" double crux as one where both participants agree on the truth of the matter at the end, or at least have shifted their worldviews to be significantly more coherent.

It seems like the main obstacles to a successful double crux are emotional ones (pride, embarrassment) and associations with debates, which threaten to turn the format into a dominance contest.

It might help to start with a joint public announcement by both participants: that they intend to work together to discover the truth; that their currently differing models mean at least one of them has the opportunity to grow in their understanding of the world and become a stronger rationalist; and that they are committed to helping each other become stronger in the art.

Alternatively, you could have the participants do the double crux in their own time and in private (though recorded). If the double crux succeeds, post it, and major kudos to the participants. If it fails, simply post the fact that it failed, but don't post the content. If this format is used regularly, it may eventually become clear which participants consistently succeed in their double crux attempts and which don't, and they can build reputation that way, rather than by trying to "win" a debate.

I wasn't able to find the full video on the site you linked, but I found it here, if anyone else has the same issue: 

Domain: PCB Design, Electronics
Link: https://www.youtube.com/watch?v=ySuUZEjARPY
Person: Rick Hartley
Background: Has worked in electronics since the 60s, senior principal engineer at L-3 Avionics Systems, principal of RHartley Enterprises
Why: Rick Hartley is capable of explaining electrical concepts intuitively and linking them directly to circuit design. He uses a lot of stories and visual examples to describe what's happening in a circuit. I'm not sure it counts as Tacit Knowledge, since it's in lecture format, but it covers a bunch of things that you might not know you don't know coming into the field. I never "got" how electrical circuits really work before watching this video, despite having been a hobbyist for years.

In terms of my usage of the site, I think you made the right call. I liked the feature when listening but I wanted to get rid of it afterwards and found it frustrating that it was stuck there. Perhaps something hidden on a settings page would be appropriate, but I don't think it's needed as a default part of the site right now.

I'm glad you like it! I had been listening to it for a while before I started reading LessWrong and AI risk content, and then one day I was listening to "Monster", started paying attention to the lyrics, and realised it was on the same topic.

It isn't quite the same, but the musician "Big Data" has made some fantastic songs about AI risk.

I realise this is a few months old, but personally my vision for utopia looks something like the Culture in the Culture novels by Iain M. Banks. There's a high degree of individual autonomy, and people create their own societies organically according to their needs and values. They still have interpersonal struggles and personal danger (if that's the life they want to lead), but in general, if they are uncomfortable with their situation, they have the option to change it. AI agents are common, but most are limited to approximately human level or below. Some superhuman AIs exist, but they are normally involved in larger civilisational manoeuvring rather than the nitty-gritty of individual human lives. I recommend reading the series.

Caveats:

1: Yes, this is a fictional example, so I'm definitely in danger of generalising from fictional evidence. I mostly think of it as a broad template, or a cluster of attributes a society might potentially be able to achieve.

2: I don't think this level of "good" AI is likely.

I had a similar emotional response to seeing these same events play out. The difference for me is that I'm not particularly smart or qualified, so I have an (even) smaller hope of influencing AI outcomes, plus I don't know anyone in real life who takes my concerns seriously. They take me seriously, but they aren't particularly worried about AI doom. It's difficult to live in a world where the people around you act like there's no danger, assuming their lives will follow a similar trajectory to their parents'. I often find myself slipping into the same mode of thought.