From Hacker News.

  1. We're going to build the next Facebook!
  2. We're going to found the next Apple!
  3. Our product will create sweeping political change! This will produce a major economic revolution in at least one country! (Seasteading would be change on this level if it worked; creating a new country successfully is around the same level of change as this.)
  4. Our product is the next nuclear weapon. You wouldn't want that in the wrong hands, would you?
  5. This is going to be the equivalent of the invention of electricity if it works out.
  6. We're going to make an IQ-enhancing drug and produce basic change in the human condition.
  7. We're going to build serious Drexler-class molecular nanotechnology.
  8. We're going to upload a human brain into a computer.
  9. We're going to build a recursively self-improving Artificial Intelligence.
  10. We think we've figured out how to hack into the computer our universe is running on.

This made me laugh, but from the look of it, it would take little work to make it serious. Personally, I'd try to shorten it so it's punchier and more memorable.

I can't find the comment of Eliezer's that inspired this, but:

The "If-you-found-out-that-God-existed scale of ambition".

1) "Well obviously if I found out God exists I'd become religious, go to church on Sundays etc."

2) "Actually, most religious people don't seem to really believe what their religion says. If I found out that God existed I'd have to become a fundamentalist, preaching to save as many people from hell as I could."

3) "Just because God exists, doesn't mean that I should worship him. In fact, if Hell exists then God is really evil, and I should put all my effort into killing God and rescuing everyone from hell. Sure it sounds impossible, but I wouldn't give up until I'd thought about the problem and tried all possible courses of action."

4) "God is massively powerful. Sure I'd kill him if I had to, but that would be a catastrophic waste. My true aim would be to harness God's power and use it to do good."

6) "Good. I already planned to become God if possible. Now I have an existence proof."

7) "That's strange, I don't remember creating that god... It must have grown from my high school science experiment when I wasn't looking."

3) "Just because God exists, doesn't mean that I should worship him. In fact, if Hell exists then God is really evil, and I should put all my effort into killing God and rescuing everyone from hell. Sure it sounds impossible, but I wouldn't give up until I'd thought about the problem and tried all possible courses of action."

But if you succeed in pulling everyone from hell, what would give their existences meaning and purpose? I mean, you just can't thwart god's sovereign will for his creatures without consequences. God created them for damnation as their telos from the very beginning, just as he created others to receive totally undeserved salvation.

I would rather have no purpose (originating in myself or in someone else) than have the outside-given purpose of suffering. If they cared about anything when they got out of hell, that would be their purpose though.

But I would expect them all to be insane from centuries of torture.

That was a bit of misplaced sarcasm, I assume.

I tried to imagine what a Calvinist would say.

4) "God is massively powerful. Sure I'd kill him if I had to, but that would be a catastrophic waste. My true aim would be to harness God's power and use it to do good."

In other words, you want to convert god into a Krell Machine that works properly?

That's Eliezer's life mission: preventing a UFAI and instead having an FAI.

khafra:

My ambition is infinite but not limitless. I don't think I can re-arrange the small natural numbers.

Quoting Michael Vassar and myself; I think we thought of it independently.

Great concept.

Also, a great example of how to singlehandedly reframe a discussion - a skill that may be a rare advantage of LWers in the social-influence sphere.

Just one suggestion: come up with a new goal to put at the top of the list, and shift the rest down. That way, "how to hack into the computer our universe is running on" would be "up to 11" on the list.

The new #1 item could be something like "We're going to make yet another novelty t-shirt store!"

Since it's basically a log scale in terms of outcomes, the T-shirt store might be a 0.

-10 would be "I will make a generic post on LW."

It would be a fun exercise to flesh out the negative side of the scale.

-15: I will specify a single item on the negative side of the scale.

TimS:

-20: I will critique a potential addition to the list without adding a suggestion of my own.

Emile:

That's not a very interesting item, it's too similar to the -15 one.

-25: It briefly occurs to me to think about a generic post on LW.

Nah.

11. We think we've figured out how to hack into the computer ALL the universes are running on.

Shmi:

12. Create your own universe tree.

13: the entire level 4 Tegmark multiverse.

14: newly discovered level 5 Tegmarkian multiverse.

15: discover the ordinal hierarchy of Tegmark universes, discover a method of constructing the set of all ordinals without contradiction, and create a level-n Tegmark universe for all n.

Thomas:

99+ percent of people alive don't intend to reach even number 1. They consider it megalomania of a sort.

Nevertheless, we must do 9, regardless of almost everybody's opinion. Man got to do what man got to do.

khafra:

To be fair, if 1% of people think they can found a company that defines the way more than 10% of humans relate to each other for several years, 99.9999% of them are vastly overconfident.

Nice! I'm thinking my idea of a self-adjusting currency, one that uses a peer-to-peer proof-of-work algorithm which solves useful NP problems as a side effect and incorporates automated credit ratings based on debt-repayment and contract-fulfillment rates, is probably in the 3 range. But if I hook it up to a protein-folding game that teaches advanced biochemistry to akrasiatic gamers as a side effect, it could be boosted up to the 6 range.

If you ignore the credit rating system, and replace its hash algorithm with a variable-length (expanding) one, that's basically what Bitcoin is. (Inversion of variable-length collision-resistant hash functions is NP-hard. I had to ask on that one.)

[EDIT: That question had been dead for a while, but now that I posted a link, it got another answer, which basically repeats the first answer and needlessly retreads why I had to rephrase the question so that the hash-inversion problem has a variable size (so that asymptotic difficulty becomes meaningful), thus being non-responsive to the question as now phrased. I hope it wasn't someone from here that clicked that link.]
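For concreteness, here is a minimal sketch of the Bitcoin-style proof of work being referenced: miners grind through nonces until a hash of the block data falls below a difficulty target. (Simplified: real Bitcoin applies SHA-256 twice to an 80-byte block header, but the shape of the search is the same.)

```python
import hashlib

def mine(block_data: bytes, target: int) -> int:
    """Grind nonces until sha256(block_data || nonce), read as an
    integer, falls below the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# A target of 1 << 240 demands ~16 leading zero bits, i.e. roughly
# 65,000 expected attempts; halving the target doubles the expected work.
print(mine(b"example block", target=1 << 240))
```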

They've made a lot of progress getting games to derive protein-folding results, but I think there's a lot of room for improvement there (better fidelity to the laws of the protein folding environment so players can develop an "intuition" of what shapes work, semiotics that are more suggestive of the dynamics of their subsystems, etc).

I trust you've looked into Ripple? It strikes me as fairly interesting, though the implementation is, at present, uninspiring.

I've been musing about the same sort of proof-of-work algorithm, but I haven't come up with a good actual system yet: there's no obvious way to get a guaranteed-hard, genuinely useful new problem in a decentralized fashion.

Interesting! I was actually inspired by some of your IRC comments.

I am thinking the problems would be produced by peers and assigned to one another using a provably random assignment scheme. When assigned a problem, each peer has the option to ignore it or attempt to solve it. If they choose to ignore it, they are assigned another one. Each time this happens to a problem, the network treats it as evidence that the problem is a hard one. If someone solves a scored-as-hard problem, they get a better chance of winning the block. (This would be accomplished by appending the solution as a nonce in a bitcoin-like arrangement and setting the minimum difficulty based on the hardness ranking; see the sketch below.)
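A minimal sketch of how that might fit together. The comment leaves the exact mechanics unspecified, so the names, the skip-halves-the-burden rule, and the seeded stand-in for provably random assignment are all illustrative guesses:

```python
import hashlib
import random

class Problem:
    def __init__(self, pid: str):
        self.pid = pid
        self.skips = 0  # every peer that declines this problem adds one

def assign(problems: list, round_seed: int) -> Problem:
    # Stand-in for the provably random assignment scheme; a real design
    # needs shared randomness that no single peer can bias.
    return random.Random(round_seed).choice(problems)

def pow_target(problem: Problem, base: int = 1 << 236) -> int:
    # Guessed rule: each recorded skip halves the residual proof-of-work
    # burden on whoever eventually solves the problem.
    return min(base << problem.skips, (1 << 256) - 1)

def wins_block(header: bytes, solution: bytes, problem: Problem) -> bool:
    # The NP-problem solution is appended as a nonce, bitcoin-style, and
    # a solved hard problem faces a correspondingly easier hash target.
    digest = hashlib.sha256(header + solution).digest()
    return int.from_bytes(digest, "big") < pow_target(problem)
```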

Hm. It never occurred to me that provable randomness might be useful... As stated, I don't think your scheme works, because of Sybil attacks:

  1. I come up with some easy NP problem, or one already solved offline,
  2. pass it around my 10,000 fake IRC nodes, who all sign it,
  3. and present the solution to the network.
  4. $$$
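Spelled out against the skip-count sketch above (same hypothetical names): when identities are free, "evidence of hardness" is also free to manufacture.

```python
# Sybil attack on the skip-count scoring: each sockpuppet identity
# "declines" a problem the attacker has already solved offline.
planted = Problem(pid="presolved")
for _ in range(10_000):
    planted.skips += 1

# pow_target(planted) is now astronomically lenient, so presenting the
# precomputed solution wins the block essentially for free.
```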

It's interesting that 2 isn't particularly easier than 9, assuming 9 is possible. The scale is in the effect, and though there are differences in difficulty, they're not the point.

2 has been done many times in human history (for some reasonable definition of which companies count as "previous Apples"). 9 has never been done. Why do you think 9 is no harder than 2, assuming it is possible?

9 has been done many times in human history too, for some reasonable definition of "create a better artificial optimizer."

Anyhow, to answer your question, I'm just guessing, based on calling "difficulty" something like marginal resources per rate of success. If you gave me 50 million dollars and said "make 2 happen," versus if you gave me 50 million dollars and said "make 9 happen," basically. Sure, someone is more likely to do 2 in the next few years than 9, ceteris paribus. But a lot more resources are on 2 (though there's a bit of a problem with this metric since 9 scales worse with resources than 2).

That's why 9 specifies "recursively self-improving", not "build a better optimizer", or even "recursively improving optimizer". The computer counts as recursively improving, imho; it just needs some help, so it's not self-improving.

Presumably, if anyone ever solves 9, so did their mom.
Which is not in fact intended as a "your mom" joke, but I don't see any way around it being read that way.

If self-improving intelligence is somewhere on the hierarchy of "better optimizers," you just have to make better optimizers, and eventually you can make a self-improving optimizer. Easy peasy :P Note that this uses the assumption that it's possible, and requires you to be charitable about interpreting "hierarchy of optimizers."

When I posted about the possibility of raising the sanity waterline enough to improve the comments at YouTube, it actually felt wildly ambitious.

Where would achieving that much fit on the list?

see:

I think, given how many millions of minds it would have to affect and how much sanity increase it would require, it sounds a lot like 6 in practice. (Unless the approach is "Build a company big enough to buy Google, and then limit comments to people who are sane", in which case, 2.)

Or you could build a YouTube competitor that draws most users away from YouTube, which is somewhere between 0.5 and 1.

[anonymous]:

You'll need at least two levels below 1 to make it really useful.

  -2. I'm going to watch TV
  -1. I'm going to have a career
  0. I'm going to start a successful company
  1. I'm going to build the next Facebook ...
Shmi:

Any past examples of level 6 and up?

evand:

Level 6 seems like it could include both language and writing. For stuff beyond that, I think you have to look at accomplishments by non-human entities. Bacteria would seem to count for level 7, humans for 8 and possibly 9 (TBD).

Nice. A possible extension would be to have other less impressive achievements measured as decimals (We're going to incrementally improve distribution efficiency in this sector) and negative numbers for bad things...

I wonder where "We're going to modify the process of science so that it recursively self-improves for the purpose of maximizing its benefit to humanity" would be? Would it be less or more ambitious than SI's goal (even though it should accomplish SI's goal by working towards FAI)?

ema:

I would put it lower than 9, because a general AI is science as software, which means it is already contained in 9.