Comment author: StefanPernar 09 October 2013 08:50:37PM *  -2 points [-]

I wrote about this exact concept back in 2007 and am basing a large part of my current thinking on the subsequent development of the idea. The original core posts are at:

Relativistic irrationality -> http://www.jame5.com/?p=15

Absolute irrationality -> http://www.jame5.com/?p=45

Respect as basis for interaction with other agents -> http://rationalmorality.info/?p=8

Compassion as rationally moral consequence -> http://rationalmorality.info/?p=10

Obligation for maintaining diplomatic relations -> http://rationalmorality.info/?p=11

A more recent rewrite: Oneness – an attempt at formulating an a priori argument -> http://rationalmorality.info/?p=328

Rational Spirituality -> http://rationalmorality.info/?p=132

My essay, based on the above posts and subsequently submitted as part of my GradDip Art in Anthropology and Social Theory at the University of Melbourne:

The Logic of Spiritual Evolution -> http://rationalmorality.info/?p=341

Comment author: StefanPernar 09 October 2013 11:51:38PM 1 point [-]

Why am I being downvoted?

Sorry for the double post.

Comment author: wedrifid 25 November 2009 08:47:15AM 2 points [-]

Historical examination shows that scientific progress is much less a gradual ascent towards better understanding upon the presentation of a superior argument (Karl Popper's Logic of Scientific Discovery) and much more an irrational insistence on a set of assumptions as unquestionable dogma until the dam finally bursts under the enormous pressure that kept building (Thomas Kuhn's Structure of Scientific Revolutions).

Really? I thought it consisted mostly of elites retorting with straw men and ignoring any strong arguments from those lower in status until such time as they died or retired. The lower-status engage in sound arguments while biding their time until it is their chance to do the ignoring, and in so doing iterate the level of ignorance one generation forward.

Comment author: StefanPernar 25 November 2009 11:13:16AM *  4 points [-]

Really? I thought it consisted mostly of elites retorting with straw men and ignoring any strong arguments from those lower in status until such time as they died or retired. The lower-status engage in sound arguments while biding their time until it is their chance to do the ignoring, and in so doing iterate the level of ignorance one generation forward.

You will find that this is pretty much what Kuhn says.

Comment author: StefanPernar 25 November 2009 08:36:47AM 0 points [-]

Brilliant post Wei.

Historical examination shows that scientific progress is much less a gradual ascent towards better understanding upon the presentation of a superior argument (Karl Popper's Logic of Scientific Discovery) and much more an irrational insistence on a set of assumptions as unquestionable dogma until the dam finally bursts under the enormous pressure that kept building (Thomas Kuhn's Structure of Scientific Revolutions).

Comment author: AnnaSalamon 19 November 2009 06:57:55AM *  28 points [-]

Hi there MichaelGR,

I’m glad to see you asking not just how to do good with your dollar, but how to do the most good with your dollar. Optimization is lives-saving.

Regarding what SIAI could do with a marginal $1000, the one sentence version is: “more rapidly mobilize talented or powerful people (many of them outside of SIAI) to work seriously to reduce AI risks”. My impression is that we are strongly money-limited at the moment: more donations allow us to more significantly reduce existential risk.

In more detail:

Existential risk can be reduced by (among other pathways):

  1. Getting folks with money, brains, academic influence, money-making influence, and other forms of power to take UFAI risks seriously; and
  2. Creating better strategy, and especially, creating better well-written, credible, readable strategy, for how interested people can reduce AI risks.

SIAI is currently engaged in a number of specific projects toward both #1 and #2, and we have a backlog of similar projects waiting for skilled person-hours with which to do them. Our recent efforts along these lines have gotten good returns on the money and time we invested, and I’d expect similar returns from the (similar) projects we can’t currently get to. I’ll list some examples of projects we have recently done, and their fruits, to give you a sense of what this looks like:

Academic talks and journal articles (which have given us a number of high-quality academic allies, and have created more academic literature and hence increased academic respectability for AI risks):

  • “Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions”, by Steve Rayhawk, myself, Tom McCabe, Rolf Nelson, and Michael Anissimov. (Presented at the European Conference of Computing and Philosophy in July ‘09 (ECAP))
  • “Arms Control and Intelligence Explosions”, by Carl Shulman (Also presented at ECAP)
  • “Machine Ethics and Superintelligence”, by Carl Shulman and Henrik Jonsson (Presented at the Asia-Pacific Conference of Computing and Philosophy in October ‘09 (APCAP))
  • “Which Consequentialism? Machine Ethics and Moral Divergence”, by Carl Shulman and Nick Tarleton (Also presented at APCAP);
  • “Long-term AI forecasting: Building methodologies that work”, an invited presentation by myself at the Santa Fe Institute conference on forecasting;
  • And several more at various stages of the writing process, including some journal papers.

The Singularity Summit, and the academic workshop discussions that followed it. (This was a net money-maker for SIAI if you don’t count Michael Vassar’s time; if you do count his time the Summit roughly broke even, but created significant increased interest among academics, among a number of potential donors, and among others who may take useful action in various ways; some good ideas were generated at the workshop, also.)

The 2009 SIAI Summer Fellows Program (This cost about $30k, counting stipends for the SIAI staff involved. We had 15 people for varying periods of time over 3 months. Some of the papers above were completed there; also, human capital gains were significant, as at least three of the program’s graduates have continued to do useful research with the skills they gained, and at least three others plan to become long-term donors who earn money and put it toward existential risk reduction.)

Miscellaneous additional examples:

  • The “Uncertain Future” AI timelines modeling webapp (currently in alpha)
  • A decision theory research paper discussing the idea of “acausal trade” in various decision theories, and its implications for the importance of the decision theory built into powerful or seed AIs (this project is being funded by Less Wronger ‘Utilitarian’)
  • Planning and market research for a popular book on AI risks and FAI (just started, with a small grant from a new donor)
  • A pilot program for conference grants to enable the presentation of work relating to AI risks (also just getting started, with a second small grant from the same donor)
  • Internal SIAI strategy documents, helping sort out a coherent strategy for the activities above.

(This activity is a change from past time-periods: SIAI added a bunch of new people and project-types in the last year, notably our president Michael Vassar, and also Steve Rayhawk, myself, Michael Anissimov, volunteer Zack Davis, and some longer-term volunteers from the SIAI Summer Fellows Program mentioned above.)

(There are also core SIAI activities that are not near the margin but are supported by our current donation base, notably Eliezer’s writing and research.)

How efficiently can we turn a marginal $1000 into more rapid project-completion?

As far as I can tell, rather efficiently. The skilled people we have today are booked and still can’t find time for all the high-value projects in our backlog (including many academic papers for which the ideas have long been floating around, but which aren’t yet written where academia can see and respond to them). A marginal $1k can buy nearly an extra person-month of effort from a Summer Fellow type; such research assistants can speed projects now, and will probably be able to lead similar projects by themselves (or with new research assistants) after a year of such work.

As to SIAI vs. SENS:

SIAI and SENS have different aims, so which organization gives you more goodness per dollar will depend somewhat on your goals. SIAI is aimed at existential risk reduction, and offers existential risk reduction at a rate that I might very crudely ballpark at 8 expected current lives saved per dollar donated (plus an orders of magnitude larger number of potential future lives). You can attempt a similar estimate for SENS by estimating the number of years that SENS advances the timeline for longevity medicine, looking at global demographics, and adjusting for the chances of existential catastrophe while SENS works.
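The estimation recipe sketched above can be made concrete with a short back-of-envelope calculation. Every number below is a placeholder assumption chosen for illustration only, not a claim about SENS, SIAI, or global demographics:

```python
# Back-of-envelope expected-lives-saved estimate, following the recipe in the
# comment above: years of timeline advance x annual deaths averted x chance
# that no existential catastrophe intervenes. All inputs are hypothetical.

years_advanced = 5            # assumed: years SENS advances longevity medicine
deaths_per_year = 37_000_000  # assumed: annual deaths from age-related causes
p_no_catastrophe = 0.8        # assumed: probability no existential catastrophe occurs meanwhile

expected_lives_saved = years_advanced * deaths_per_year * p_no_catastrophe
print(expected_lives_saved)  # expected lives saved under these illustrative inputs
```

Dividing such a figure by the donation amount gives a lives-per-dollar rate comparable to the crude SIAI ballpark above; the point of the exercise is the comparison structure, not the particular numbers.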

The Future of Humanity Institute at Oxford University is another institution that is effectively reducing existential risk and that could do more with more money. You may wish to include them in your comparison study. (Just don’t let the number of options distract you from in fact using your dollars to purchase expected goodness.)

There’s a lot more to say on all of these points, but I’m trying to be brief -- if you want more info on a specific point, let me know which.

It may also be worth mentioning that SIAI accepts donations earmarked for specific projects (provided we think the projects worthwhile). If you’re interested in donating but wish to donate to a specific current or potential project, please email me: anna at singinst dot org. (You don’t need to fully know what you’re doing to go this route; for anyone considering a donation of $1k or more, I’d be happy to brainstorm with you and to work something out together.)

Comment author: StefanPernar 24 November 2009 11:44:10AM 1 point [-]

Thanks for that, Anna. I could only find two of the five academic talks and journal articles you mentioned online. Would you mind posting all of them and pointing me to where I can access them?

Comment author: kurige 18 November 2009 06:19:09AM *  2 points [-]

1) You can summarize arguments voiced by EY.
2) You cannot write a book that will be published under EY's name.
3) Writing a book takes a great deal of time and effort.

You're reading into connotation a bit too much.

Comment author: StefanPernar 18 November 2009 06:45:02AM 0 points [-]

2) You cannot write a book that will be published under EY's name.

It's called ghostwriting :-) but then again, the true value-add lies in the work and not in the identity of the author (discarding marketing value in the case of celebrities).

You're reading into connotation a bit too much.

I do not think so - I am just being German :-) about it: very precise and thorough.

Comment author: Eliezer_Yudkowsky 18 November 2009 04:29:07AM 7 points [-]

In general: Because my time can be used to do other things which your time cannot be used to do; we are not fungible.

(As of this comment being typed, I'm working on a rationality book. This is not something that anyone else can do for me.)

Comment author: StefanPernar 18 November 2009 05:08:09AM 0 points [-]

In general: Because my time can be used to do other things which your time cannot be used to do; we are not fungible.

This statement is based on three assumptions:

1) What you are doing instead is in fact more worthy of your attention than your contribution here
2) I could not do what you are doing at least as well as you
3) I do not have other things to do that are at least as worthy of my time

None of those three am I personally willing to grant at this point. But surely that is not the case for all the others around here.

Comment author: wedrifid 17 November 2009 11:15:21AM 0 points [-]

Good one - but it reminds me of the religious fundies who see no reason to change anything about global warming because the rapture is just around the corner anyway :-)

Don't forget the Y2K doomsday folks! ;)

Evolution is a force of nature so we won't be able to ignore it forever, with or without AGI. I am not talking about local minima either - I want to get as close to the center of the optimal path as necessary to ensure having us around for a very long time with a very high likelihood.

Gravity is a force of nature too. It's time to reach escape velocity before the planet is engulfed by a black hole.

Comment author: StefanPernar 18 November 2009 04:57:39AM 0 points [-]

Gravity is a force of nature too. It's time to reach escape velocity before the planet is engulfed by a black hole.

Interesting analogy - it would be correct if we called our alignment with evolutionary forces achieving escape velocity. What one is doing by resisting evolutionary pressures, however, is constant energy expenditure while failing to reach escape velocity. It is like hovering a space shuttle at a constant altitude of 10 km: no matter how much fuel you bring along, eventually the boosters will run out and the whole thing comes crashing down.

Comment author: wedrifid 18 November 2009 02:51:23AM 1 point [-]

You seem willing to come back and make just about any random comment in an effort to have the last word, and that is what I am willing to give to you.

My 'last word' was here. It is an amicable hat tip to, and expansion on, a reasonable perspective that you provide: how much FAI thinking sounds like a "Rapture of the Nerds". It also acknowledges our difference in perspective: while we both imagine evolutionary selection pressures as a 'force', you see it as one to be embraced and defined by, while I see it as one that must be mastered or else.

We're not going to come closer to agreement than that because we have a fundamentally different moral philosophy which gives us different perspectives on the whole field.

Comment author: StefanPernar 18 November 2009 04:44:50AM 0 points [-]

My apologies for failing to see that - I did not mean to be antagonizing, just trying to be honest and forthright about my state of mind :-)

Comment author: StefanPernar 18 November 2009 04:14:19AM 0 points [-]

More recent criticism comes from Mike Treder, managing director of the Institute for Ethics and Emerging Technologies, in his article "Fearing the Wrong Monsters" -> http://ieet.org/index.php/IEET/more/treder20091031/
