Davey Morse

thinking abt how to make:

1. buddhist superintelligence
2. a single, united nation
3. wiki of human experience

more here.

more of what i've made, here: jokerman.site

I'm saying the issue of whether ASI gets out of control is not fundamental to the discussion of whether ASI poses an xrisk or how to avert it.

I only half agree.

The control question is not fundamental to discussion of whether ASI poses x-risk—agreed. But I believe the control question is fundamental to discussion of how to avert x-risk.

Humanity's optimal strategy for averting x-risk depends on whether we can ultimately control ASI. If control is possible, then the best strategy for averting x-risk is coordination of ASI development across companies and nations. If control is not possible, then the best strategy is very different and even less well-defined (e.g., pausing ASI development, attempting to seed ASI so that it becomes benevolent, making preparations so humans can live alongside self-directed ASI, etc.).

So while it's possible that emphasis on the control question turns many people away from the xrisk conversation, I think the control question remains key for conversation about xrisk solutions.

A simple poll system where you can sort the options/issues by their personal relevance... might unlock direct democracy at scale. Relevance could mean: semantic similarity to your past lesswrong writing.

Such a sort option would (1) surface more relevant issues to each person and so (2) increase community participation, and possibly (3) scale indefinitely. You could imagine a million people collectively prioritizing the issues that matter to them with such a system.

Would be simple to build.
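The core of such a poll system, ranking options by similarity to a user's past writing, could be sketched in a few lines. This is a minimal sketch under stated assumptions: a toy bag-of-words count stands in for a real embedding model, and `embed`, `cosine`, and `sort_by_relevance` are hypothetical names, not any existing API:

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model (e.g. sentence vectors):
    # a bag-of-words count, just to make the ranking logic concrete.
    return Counter(text.lower().split())

def cosine(a, b):
    # Standard cosine similarity over sparse term-count vectors.
    dot = sum(count * b.get(term, 0) for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def sort_by_relevance(options, past_writing):
    # Rank poll options/issues by similarity to one user's past writing,
    # so each person sees the issues most relevant to them first.
    profile = embed(past_writing)
    return sorted(options, key=lambda opt: cosine(embed(opt), profile), reverse=True)
```

Swapping the toy `embed` for a real semantic-embedding model is the only change needed to get genuine semantic similarity rather than word overlap.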

the AGIs which survive the most will model and prioritize their own survival

have any countries ever tried to do inflation instead of income taxes? seems like it'd be simpler than all the bureaucracy required for individuals to file tax returns every year

has anyone seen a good way to comprehensively map the possibility space for AI safety research?

in particular: a map from predictive conditions (eg OpenAI develops superintelligence first, no armistice is reached with China, etc) to strategies for ensuring human welfare in those conditions.

most good safety papers I read map one set of conditions to one or a few strategies. the map would juxtapose all these conditions so that we can evaluate/bet on their likelihoods and come up with strategies based on a full view of SOTA safety research.

for format, im imagining either a visual concept map or at least some kind of hierarchical collaborative outlining tool (eg Roam Research)
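Before any visual tool, the map itself is just a data structure. A minimal sketch, assuming each predictive condition carries a probability estimate and a list of candidate strategies; the conditions and strategies below are taken from this thread, and every number is an illustrative placeholder, not a real forecast:

```python
# Hypothetical data model for the conditions -> strategies map.
safety_map = {
    "OpenAI develops superintelligence first": {
        "probability": 0.30,  # placeholder, not a real estimate
        "strategies": ["coordinate ASI development across companies and nations"],
    },
    "no armistice is reached with China": {
        "probability": 0.20,  # placeholder, not a real estimate
        "strategies": ["pause ASI development", "seed ASI so that it becomes benevolent"],
    },
}

def rank_conditions(m):
    # Order conditions by estimated likelihood, so strategy work (and bets)
    # can focus on the most probable worlds first.
    return sorted(m, key=lambda c: m[c]["probability"], reverse=True)
```

A hierarchical outliner like Roam would essentially render this same structure as nested bullets, with probabilities as inline attributes.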

made a simpler version of Roam Research called Upper Case Notes: uppercasenotes.org. Instead of [[double brackets]] to demarcate concepts, you simply use Capital Letters. Simpler to learn for someone who doesn't want to use special grammar, but does require you to type differently.
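The capital-letter convention can be parsed with a single regular expression. A minimal sketch of how such concept detection might work (not uppercasenotes.org's actual implementation), assuming a concept is any run of capitalized words; `extract_concepts` is a hypothetical name:

```python
import re

# A "concept" here is one or more consecutive Capitalized Words, the role
# [[double brackets]] play in Roam. Caveat: this naive pattern also catches
# ordinary sentence-initial capitalized words.
CONCEPT = re.compile(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b")

def extract_concepts(text):
    # findall returns the full match because the inner group is non-capturing.
    return CONCEPT.findall(text)
```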

I think you do a good job of expanding the set of possible self-conceptions that we could reasonably expect in AIs.

Your discussion of these possible selves inspires me to go farther than you in your recommendations for AI safety researchers. Stress-testing safety ideas across multiple different possible "selfs" is good. But if an AI's individuality/self determines to a great degree its behavior and growth, then safety research as a whole might be better conceived as an effort to influence AIs' self-conceptions rather than to control their resulting behavior. E.g., create seed conditions that make it more likely for AIs to identify with people, to include people within their "individuality," rather than to identify only with other machines.

"If the platform is created, how do you get people to use it the way you would like them to? People have views on far more than the things someone else thinks should concern them."

If people are weighted equally, ie if the influence of each person's written ballot is equal and capped, then each person is incentivized to emphasize the things which actually affect them. 

Anyone could express views on things which don't affect them, it'd just be unwise. When you're voting between candidates (as in status quo), those candidates attempt to educate and engage you about all the issues they stand for, even if they're irrelevant to you. A system where your ballot is a written expression of what you care about suffers much less from this issue.

the article proposes a governance system that synthesizes individuals' freeform preferences into collective legislative action.

internet platforms allow freeform expression, of course, but don't do that synthesis.

made a platform for writing living essays: essays which you scroll thru to play out the author's edit history

livingessay.org
