Comment author: MixedNuts 22 March 2013 10:38:15PM 6 points [-]

Failure case: They feel compelled to help, resent you for it, and destroy your reputation by speaking ill of you.

Comment author: Arran_Stirton 27 March 2013 08:57:16AM 5 points [-]

Preemptive Solution: Leave a line of retreat, make sure that there is little/no cost for them if they choose to refuse; thus reducing the likelihood that they will help you out of compulsion.

Comment author: Arran_Stirton 19 June 2012 05:01:28PM 4 points [-]

It’s quite likely you can solve the problem of people mis-associating SI with “accelerating change” without having to change names.

The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves.

What if the AI researcher read (or more likely, skimmed) the concise summary before responding to the potential supporter? At least this line in the first paragraph, “artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements,” doesn’t necessarily make it obvious enough that SI isn’t about “accelerating change”. (In fact, it sounds a lot like an accelerating-change-type idea.)

In my opinion at least, you need to get any potential supporter/critic to make the association between the name “Singularity Institute” and what SI actually does (i.e. its goals) as soon as possible. While changing the name could do that, “Singularity Institute” has many useful aesthetic qualities that a replacement name probably won’t have.

On the other hand, doing something like adding a clear tag-line about what SI does (e.g. “Pioneering safe-AI research”) to the header would be a relatively cheap and effective solution. Perhaps rewriting the concise summary to discuss the dangers of a smarter-than-human AI before postulating the possibility of an intelligence explosion would also be effective, seeing as a smarter-than-human AI would need to be friendly, intelligence explosion or no.

Comment author: ciphergoth 02 June 2012 01:50:23PM 0 points [-]

Only one technical analysis of cryonics which concludes it won't work has ever been written: http://blog.ciphergoth.org/blog/2011/08/04/martinenaite-and-tavenier-cryonics/

Comment author: Arran_Stirton 06 June 2012 09:43:15AM 0 points [-]

Interesting, thanks!

Have you come across any analysis that establishes cryonics as something that prevents information-theoretic death?

Comment author: avichapman 29 May 2012 12:19:18AM 1 point [-]

Hi all,

I wrote this and then went away for a long weekend. I'm glad to see that everyone's enjoying it. After reading all of the comments, I've applied to join the Facebook group mentioned by Curiouskid.

I also agree with the suggestion that it would need to be a model of a personal Bayesian network with an attached decision theory tool to help users make logical choices about what they enter and how they assess probabilities. For example, instead of asking for a probability, it might ask how many times in a lifetime you'd expect to be wrong about something you feel this confident about. (How often are you likely to go to sleep on Tuesday and wake up on a day other than Wednesday? There are probably a lot of people who live their whole lives without that ever happening.)
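The elicitation trick above can be sketched as a simple conversion. This is a minimal sketch, assuming the tool asks "in how many similar lifetime judgments would you expect to be wrong?"; the function name and numbers are hypothetical.

```python
def implied_confidence(expected_wrong, judgments_per_lifetime):
    """Convert 'I'd expect to be wrong this many times across a lifetime
    of similarly confident judgments' into a probability of being right."""
    if judgments_per_lifetime <= 0:
        raise ValueError("need a positive number of judgments")
    p_wrong = expected_wrong / judgments_per_lifetime
    return 1.0 - p_wrong

# e.g. "wrong about the Tuesday/Wednesday question maybe once across
# ~30,000 nights of a long life" corresponds to roughly 99.997% confidence
p = implied_confidence(1, 30_000)
```

Asking the question in frequency form like this tends to be easier for people than producing a raw probability directly.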

Moreover, it would be web-based, with the decision theory tool making use of the database of individual networks to help form a view of communal knowledge. (This part would never be perfect, because inference over the combined networks would be so large that NP-hardness considerations come into play.) It would need to be able to show its work in graphic format when one of its assessments is challenged.

I will read up on existing argument mapping tools and the deliberations of the aforementioned Facebook group. If no one else is already doing this, I think we should do it. Anyone have any knowledge on how to go about it?

I think it would be a case of having a period of discussing the problem with a growing wiki page (or something) containing information about the problem to be solved. After that, we could discuss the shape of the solution. Only after that, those with the technical knowledge could discuss the best way of actually implementing the solution. Then we could divide up the work between those with the time and appropriate skills and actually do it.

Comment author: Arran_Stirton 01 June 2012 08:31:51PM 1 point [-]

Sounds like a plan. Really what you want to do is contact everyone who's shown interest in helping you (including myself) in order to collaborate with them via email, and then hold a discussion about how to move forward at a scheduled time in an IRC channel or some such.

Comment author: Arran_Stirton 25 May 2012 02:16:46PM 3 points [-]

At the moment I’m using yEd to create a dependency map of the Sequences, which is roughly equivalent to creating what I guess you could call an inferential network. Since embarking on this project I’ve discovered just how useful having a structured visual map can be, particularly for things like finding the weak points of various conclusions, establishing how important a particular idea is to the validity of the entire body of writing, and using how low a post is on the hierarchy as a heuristic for establishing the inferential distance to the concepts it contains.

So I’m thinking that the main use of a belief network mapping tool might not necessarily be in allowing updates to propagate through a personal network, but in creating networks representing bodies of public knowledge, like, for example, the standard model of physics. As you can imagine this would be immensely useful for both research and education. For research such a network would point to the places where (for example) the standard model is weak, and for education it would lay out the order in which concepts should be taught, letting students form an accurate internal working model without getting confused.

TL;DR: Yes, I’d love to help you design and build such a tool.

Comment author: Emile 22 May 2012 01:25:39PM 1 point [-]

For what it's worth, I had stealthily edited my question - ("If everybody had" instead of "If everybody has"); I was trying to find a short illustration of the fact that a choice with a low expected value but a high variance will be overrepresented among those who got the highest value. It seems like I failed at being concise and clear :P
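The point about variance and selection can be illustrated with a quick simulation. This is a hedged sketch: the payoff numbers (a 1-in-1,000 chance of 800x for the lottery, a steady ~1.05x for the index fund) are made up for illustration, chosen only so that the lottery has the lower expected value.

```python
import random

random.seed(0)

def simulate(n_people=100_000, stake=1_000.0):
    """Half the population plays a lottery (expected value ~0.8x the
    stake, huge variance); half buys an index fund (expected value
    ~1.05x, low variance). Returns the choices of the 10 richest."""
    outcomes = []
    for i in range(n_people):
        if i % 2 == 0:  # lottery player: 1-in-1,000 chance of 800x
            wealth = stake * 800 if random.random() < 1e-3 else 0.0
            outcomes.append(("lottery", wealth))
        else:           # index investor: modest gain with small noise
            wealth = stake * random.uniform(1.0, 1.1)
            outcomes.append(("index", wealth))
    top = sorted(outcomes, key=lambda t: t[1], reverse=True)[:10]
    return [kind for kind, _ in top]

top_kinds = simulate()
```

Despite the lottery being the worse choice in expectation, the very top of the wealth distribution is dominated by lottery winners, which is exactly the selection effect being described.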

Comment author: Arran_Stirton 22 May 2012 02:57:09PM 0 points [-]

Heh, well I've got dyslexia so every now and then I'll end up reading things as different to what they actually say. It's more my dyslexia than your wording. XD

It seems like I failed at being concise and clear :P

Hmm, I wonder if being concise is all it's cracked up to be. Concise messages usually have lower information content, so they're actually less useful for narrowing down an idea's location in idea-space. Thanks, I'm looking into effective communication at the moment and probably wouldn't have realized the downside to being concise if you hadn't said that.
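The idea-space point can be made concrete with a toy information-theoretic bound: a message of length L over an alphabet of size A carries at most L·log2(A) bits, so a shorter message can at best distinguish between fewer ideas. This is a back-of-the-envelope sketch under idealized assumptions (equally likely ideas, no shared context), not a claim about real communication.

```python
import math

def max_ideas_distinguishable(message_length, alphabet_size=27):
    """Upper bound on how many equally likely ideas a message of the
    given length can single out: the message carries at most
    message_length * log2(alphabet_size) bits."""
    bits = message_length * math.log2(alphabet_size)
    return 2 ** bits

# A 10-character message can at best pick out one idea among ~2^47;
# a 5-character message only one among ~2^23.
long_reach = max_ideas_distinguishable(10)
short_reach = max_ideas_distinguishable(5)
```

Of course, in practice shared context does most of the work, which is why concision is usually still worth having.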

Comment author: Emile 22 May 2012 11:36:20AM 1 point [-]

What is "Can you evidence that?" supposed to mean? Especially when talking about a hypothetical scenario ...

Could you please make an effort to communicate clear questions?

(If you're asking for clarification, then Normal_Anomaly's explanation is what I meant)

Comment author: Arran_Stirton 22 May 2012 01:16:59PM 2 points [-]

Ah, I misread your comment, my apologies. I'll retract my question.

Comment author: Emile 22 May 2012 07:49:53AM *  2 points [-]

If everybody had to choose between investing his savings in the lottery, or in index funds, then if you look at the very rich most of them will be lottery players, even though it was the worst choice.

Comment author: Arran_Stirton 22 May 2012 09:18:04AM -1 points [-]

Can you evidence that?

Comment author: Thomas 15 May 2012 06:12:38PM *  0 points [-]

You are welcome to "bother" anytime.

I eat a lot - half a kilogram to a kilogram per day. What amount of information have I got this way? Very little, if any. Even drinking a lot of alcohol (which I don't), which would destroy my liver, would mean a very minuscule data transfer.

Brain-stimulating drugs one might take, even ones that bring significantly higher intelligence, are not a big data flow.

It just isn't. Biologists should respect what "information", "data stream", and so on actually mean.

Comment author: Arran_Stirton 18 May 2012 02:53:21AM 2 points [-]

I agree, yet none of that changes the fact that conditions in the womb have a large impact on brain development. Hence information about the conditions in the womb is required to generate a specific newborn's brain. Sure, when an adult takes a stimulating drug there's not a large data flow, but when the brain is actually forming, drugs can fundamentally alter its final structure.

Comment author: Arran_Stirton 17 May 2012 08:50:01PM 0 points [-]

I feel like I should be dedicating part of my resources to reducing the likelihood of something like that ever happening.
