A similar model is a "refugee city" built on the charter city model.  The Charter Cities Institute has discussed this, but rather than a developed country vacating its own land, an intermediary host in a developing country vacates land and someone else (e.g. CCI) provides the governance and security.

So the one that still stands is confirmation bias?

For a concrete example, in our county, there is a group called CHIPs that does wood fuel clearing on private land and uses it to create syngas and biochar.  I need to check on their cost per acre for tree and brush clearing next week, so I'll report back.

Well, you could if you weren't the US Forest Service or Bureau of Land Management.  [note: please dock 2 points for snark].  I've spent maybe 10 minutes on this topic over the last year, but I do have property next to federally owned land in the CA Sierras. My understanding is that getting the Forest Service to allow more logging is really hard, and that getting more logging in CA through legal challenges and local rules is even harder.  And selling off the random patches of BLM land never happens.  This source is visibly and emotionally biased, but in an article about a new bill it has some info about past failed attempts to increase logging in bark beetle-infested areas.  FYI, beetle-kill wood is still usable for lumber within 5 years, but not afterwards.

I am curious about the difference between "having a good, deep conversation" and "coming to a conclusion".  I have a feeling that these two goals are at odds.  Polis sounds like it's targeted at the second goal - but for the owner of the conversation to come to a conclusion, not the participants collectively.

I am seeing the need for better question design in all the collaborative discussion tools I'm looking at, but that kinda (ahem) begs the question.  What is the process that gets to well-designed questions?

I would like to vote up this recommendation:

How about a technological solution for representing arguments with clarity so that both sides:

  • can see what is being said in clearly labeled propositions.
  • can identify errors in logic and mark them down.
  • can separate opinions from experimentally confirmed scientific facts.
  • can link to sources and have a way to recursively examine their 'truth rating' down to the most primary source.

This is an unexplored area, and it seems to me it would have a higher ROI than a deep dive into variations on voting/rating/reputation systems.
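To make that concrete, here's a minimal sketch in Python of how the pieces above could fit together. The names, fields, and the "weakest link" aggregation rule are my own assumptions, not a spec for any existing tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Source:
    """A cited source; a primary source cites nothing further."""
    url: str
    truth_rating: float                    # e.g. 0.0-1.0, assigned by reviewers
    cites: List["Source"] = field(default_factory=list)

@dataclass
class Proposition:
    """A single, clearly labeled claim within an argument."""
    text: str
    kind: str                              # "opinion" | "empirical" | "inference"
    sources: List[Source] = field(default_factory=list)
    logic_flags: List[str] = field(default_factory=list)   # logic errors marked by readers

def weakest_link(source: Source) -> float:
    """Recursively walk the citation chain and return the lowest
    truth rating found on the way down to primary sources."""
    return min([source.truth_rating] + [weakest_link(s) for s in source.cites])
```

The point is only that each bullet maps to a field or a traversal; whether the recursive rating should take the minimum, an average, or something else entirely is exactly the kind of design question such a tool would need to answer.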

I see various people volunteering for different roles. I'd be interested in providing design research and user experience support, which would probably only be needed intermittently if we have someone acting as a product manager. It might be nice to have someone in a lightweight graphic design role as well, and that could be freelance.

Like ananda, I'm happy to do this as an open-contribution project rather than paid. I'll reach out to Vaniver via email.

I can see value in having LW as a prototype or scratch pad, making simple modifications to existing discussion platforms (e.g. improved moderator powers as discussed above). Then Arbital can do the harder work of building a collaborative truth-seeking platform, adding features to support, for example, Double Crux, fine-typed voting, or evidence (rather than comments).

Perhaps in the end there's a symbiosis, where LW is for discussion, and when a topic comes up that needs truth-seeking it's moved to Arbital. That frees Arbital from having to include a solved problem in its code base.

Nice association.

I see this model as building on Laddering or the XY problem, because it also looks for a means of falsification.

It's closer to a two-sided use of Eric Ries' Lean Startup (the more scientific version), where a crux corresponds to a leap-of-faith assumption (LoFA). I've called the LoFA a "leap of faith hypothesis": your goal is to find the data that would tell you the assumption is wrong.

The other product design thinker with a similar approach is Tom Chi, who uses a conjecture -> experiment -> actuals -> decision framework.

In all of these methods, the hard work/thinking is actually finding a crux and a way to falsify it. Having an "opponent" to collaborate with may make us better at this.

Thank you! I needed that work-around.
