Would a FAI reward us for helping create it?
We expect that even post-singularity, resources will remain limited: the available computational resources are finite until heat death.
Those resources do not necessarily need to be allocated fairly. In fact, I would guess that if they were allocated unfairly, the most likely beneficiaries would be those who helped contribute to the creation of a friendly AI.
Now for some open questions:
What probability distribution of extra resources do you expect with respect to various possible contributions to the creation of friendly AI?
Would donating to the SIAI suffice for acquiring these extra resources?
Practicing what you preach
LessWrongers as a group are often accused of talking about rationality without putting it into practice (for an elaborated discussion of this see Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality). This behavior is particularly insidious because it is self-reinforcing: it will attract more armchair rationalists to LessWrong who will in turn reinforce the trend in an affective death spiral until LessWrong is a community of utilitarian apologists akin to the internet communities of anorexics who congratulate each other on their weight loss. It will be a community where instead of discussing practical ways to "overcome bias" (the original intent of the sequences) we discuss arcane decision theories, who gets to be in our CEV, and the most rational birthday presents (sound familiar?).
A recent attempt to counter this trend, or at least make us feel better about it, was a series of discussions on "leveling up": accomplishing a set of practical, well-defined goals to increment your rationalist "level". It's hard to see how these goals fit into a long-term plan to achieve anything besides self-improvement for its own sake. Indeed, the article begins by priming us with a renaissance-man-inspired quote, and stands in stark contrast to articles emphasizing practical altruism such as "efficient charity".
So what's the solution? I don't know. However I can tell you a few things about the solution, whatever it may be:
- It won't feel like the right thing to do; your moral intuitions (being designed to operate in a small community of hunter-gatherers) are unlikely to suggest anything near the optimal task.
- It will be something you can start working on right now, immediately.
- It will disregard arbitrary self-limitations like abstaining from politics or keeping yourself aligned with a community of family and friends.
- Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.
Whatever you may decide to do, be sure it follows these principles. If none of your plans align with these guidelines, then construct a new one, on the spot, immediately. Just do something: every moment you sit idle, hundreds of thousands are dying and billions are suffering. Under your judgment, your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
I declare Crocker's rules on the writing style of this post.
LessWrong gaming community
Many of us enjoy expressing ourselves through electronic games. As such, I feel that this aspect of our lives should be shared among our fellow gamers in the LessWrong community.
Video games are a great way to reduce compartmentalization and learn real-world rationality skills. Indeed, what brings us together at LessWrong is often our love of games; someone in the LessWrong community without this advantage might find learning rationality difficult. In this light, outreach into the transhumanist/rationalist community to promote gaming is low-hanging fruit for serving the future of humanity.
Please consider this post a unique opportunity to begin discussion of this important issue and facilitate further debate in the near future.