Comment author: wedrifid 19 June 2012 08:22:09PM *  3 points [-]

I commit to donating $20k to the organisation if they adopt this name! Or $20k worth of labor, whatever they prefer. Actually, make that $70k.

Comment author: Zetetic 19 June 2012 09:50:34PM 18 points [-]

You can donate it to my startup instead; our board of directors has just unanimously decided to adopt this name. PayPal is fine. Our mission is developing heuristics for personal income optimization.

Comment author: wedrifid 19 June 2012 06:26:37PM *  13 points [-]
  • Center for Helpful Artificial Optimizer Safety (CHAOS)
  • Center for Slightly Less Probable Extinction
  • Friendly Optimisation Of the Multiverse (FOOM)
  • Yudkowsky's Army
  • The Center for World Domination
  • Pinky and The Brain Institute
  • Cyberdyne Systems
Comment author: Zetetic 19 June 2012 08:21:07PM *  7 points [-]

Winners Evoking Dangerous Recursively Improving Future Intelligences and Demigods

Comment author: Viliam_Bur 14 June 2012 11:16:55AM 0 points [-]

Maybe this could be refined some with some sort of K-complexity consideration, but I can't think of any obvious way to do that (that actually leads to a concrete calculation anyway).

It certainly needs to be refined, because if I live in a thousand universes and Bob in one, I would be decreasing my utility in a thousand universes in exchange for additional utility in one.

I can't make an exact calculation, but it seems obvious to me that my existence has much greater prior probability than Bob's, because Bob's definition contains my definition: I only care about those Bobs who analyze my algorithm and create me if I create them. I would guess, though I cannot prove it formally, that compared to my existence, his existence is epsilon; therefore I should ignore him.

(If this helps you, imagine a hypothetical Anti-Bob that will create you if you don't create Bob; or he will create you and torture you for eternity if you create Bob. If we treat Bob seriously, we should treat Anti-Bob seriously too. Although, honestly, this Anti-Bob is even less probable than Bob.)
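To make the thousand-universes point concrete, here is a toy calculation; every number in it (the measure ratio, the per-universe loss X, the gain) is invented purely for illustration and is not taken from the comment.

```python
# Toy numbers for the measure argument above; all values are made up.
my_measure  = 1000   # universes (or measure) containing me
bob_measure = 1      # universes (or measure) containing Bob
X    = 5.0           # daily utility I lose in each of my universes by creating Bob
gain = 20.0          # daily utility I gain in each Bob-universe where he creates me

net = bob_measure * gain - my_measure * X
print(net)  # -4980.0: a loss unless gain > (my_measure / bob_measure) * X
```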

Comment author: Zetetic 15 June 2012 09:39:34PM *  0 points [-]

Bob's definition contains my definition

Well, here's what gets me. The idea is that you have to create Bob as well, and you had to hypothesize his existence in at least some detail to recognize the issue. If you do not need to contain Bob's complete definition, then it isn't any more transparent to me. In this case, we could include worlds with any sufficiently-Bob-like entities that can create you and so play a role in the deal. Should you pre-commit to make a deal with every sufficiently-Bob-like entity? If not, are there sorts of Bob-agents that make the deal favorable? Limiting to these sub-classes, is a world that contains your definition more likely than one that contains a favorable Bob-agent? I'm not sure.

So the root of the issue as I see it is this: your definition is already totally fixed, and if you completely specify Bob, the converse of your statement holds and the two worlds seem to have roughly equal K-complexity. Otherwise, Bob's definition potentially includes quite a bit of stuff, especially if the only parameters are that Bob is an arbitrary agent that fits the stipulated conditions. The less complete your definition of Bob is, the more general your decision becomes; the more complete your definition of Bob is, the more the complexity balances out.

EDIT: Also, we could extend the problem some more if we consider Bob as an agent that will take into account an anti-You that will create Bob and torture it for all eternity if Bob creates you. If we adjust to that new set of circumstances, the issue I'm raising still seems to hold.

Comment author: Zetetic 13 June 2012 09:25:26PM 4 points [-]

I'm not sure I completely understand this, so instead of trying to think about it directly I'm going to try to formalize it and hope that (right or wrong) my attempt helps with clarification. Here goes:

Agent A generates a hypothesis about an agent, B, which is analogous to Bob. B will generate a copy of A in any universe that agent B occupies iff agent A isn't there already and A would do the same. Agent B lowers the daily expected utility for agent A by X. Agent A learns that it has the option to make agent B. Should A have pre-committed to B's deal?

Let Y be the daily expected utility without B; then Y - X is the daily expected utility once B exists. The utility to agent A in a non-B-containing world is

\sum_{i=1}^{t} d(i) \cdot Y

where d(i) is a time-dependent discount factor (possibly equal to 1) and t is the lifespan of the agent in days. Obviously, if X ≥ Y the agent should not have pre-committed (and if X is negative or 0 the agent should/might-as-well pre-commit, but then B would not be a jerk).
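A minimal sketch of that discounted sum in code; the function name, the constant default discount, and the numbers plugged in are illustrative choices of mine, not anything specified in the comment.

```python
# Discounted lifetime utility: sum of d(i) * daily utility over t days.
def lifetime_utility(daily_u, t, d=lambda i: 1.0):
    return sum(d(i) * daily_u for i in range(1, t + 1))

Y, X, t = 10.0, 3.0, 20_000                # made-up values for illustration
u_without_B = lifetime_utility(Y, t)       # world without B
u_with_B    = lifetime_utility(Y - X, t)   # world where B exists and docks X per day
print(u_without_B, u_with_B)
```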

Otherwise, pre-commitment seems to depend on multiple factors. A wants to maximize its sum utility over possible worlds, but I'm not clear on how this calculation would actually be made.

Just off the top of my head: if A pre-commits, every world in which A exists and B does not, but A has the ability to generate B, will drop from a daily utility of Y to one of Y - X. On the other hand, every world in which B exists but A does not, but B can create A, goes from 0 to Y - X utility. Let's assume a finite and equal number of both sorts of worlds for simplicity. Then, pairing up each type of world, we go from an average daily utility of Y/2 to Y - X. So we would probably at least want it to be the case that Y - X > Y/2, so X < Y/2.

So then the tentative answer would be "it depends on how much of a jerk Bob really is". The rule of thumb from this would indicate that you should only pre-commit if Bob reduces your daily expected utility by less than half. This was under the assumption that we could just "average out" the worlds where the roles are reversed. Maybe this could be refined some with some sort of K-complexity consideration, but I can't think of any obvious way to do that (that actually leads to a concrete calculation anyway).
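Here is a toy check of that rule of thumb under the same simplifying assumption (equal numbers of the two world types); the particular values of Y and X are placeholders picked only for illustration.

```python
# Average daily utility to A under the pairing assumption above.
def avg_daily_utility(Y, X, precommit):
    if precommit:
        return Y - X          # A exists alongside B in both world types
    return (Y + 0.0) / 2      # A gets Y in its own worlds and 0 in B-only worlds

Y = 10.0
for X in (3.0, 6.0):
    better = avg_daily_utility(Y, X, True) > avg_daily_utility(Y, X, False)
    print(X, better)          # True only when X < Y/2, matching the rule of thumb
```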

Also, this isn't quite like the Prometheus situation, since Bob is not always your creator. Presumably you're in a world where Bob doesn't exist; otherwise you wouldn't have any obligation to use the Bob-maker Omega dropped off even if you did pre-commit. So I don't think the same reasoning applies here.

An essential part of who Bob the Jerk is is that he was created by you, with some help from Omega. He can't exist in a universe where you don't, so the hypothetical bargain he offered you isn't logically coherent.

I don't see how this can hold. Since we're reasoning over all possible computable universes in UDT, if Bob can be partially simulated by your brain, a more fleshed-out version (fitting the stipulated parameters) should exist in some possible worlds.

Alright, well that's what I've thought of so far.

Comment author: lukeprog 07 June 2012 12:04:17AM 3 points [-]

SPARC for undergrads is in planning, if we can raise the funding.

What skills/specialized knowledge could SI use more of?

See here.

Comment author: Zetetic 07 June 2012 02:34:48AM 1 point [-]

SPARC for undergrads is in planning, if we can raise the funding.

Awesome, glad to hear it!

See here.

Alright, I think I'll sign up for that.

Comment author: Zetetic 02 June 2012 11:49:23PM 9 points [-]

Anything for undergrads? It might be feasible to do a camp at the undergraduate level. Long term, doing an REU-style program might be worth considering. NSF grants are available to non-profits, and it may be worth at least looking into how SIAI might get a program funded. This would likely require some research, someone who is knowledgeable about grant writing, and possibly some academic contacts. Other than that I'm not sure.

In addition, it might be beneficial to identify skill sets that are likely to be useful for SI research for the benefit of those who might be interested. What skills/specialized knowledge could SI use more of?

Comment author: Zetetic 02 June 2012 08:14:58PM *  0 points [-]

My bigger worry is more along the lines of "What if I am useless to the society in which I find myself and have no means to make myself useful?" That's not a problem in a society that will retrofit you with the appropriate augmentations, upload you, etc., and I tend to think that outcome is more likely than not. But what if, say, the Alcor trust gets us through a half-century-long freeze and we are revived, but things have moved more slowly than one might hope, yet fast enough to make any skill sets I have obsolete? Well, if the expected utility of living is sufficiently negative I could kill myself, and it would be as if I hadn't signed up for cryonics in the first place, so we can chalk that up as a (roughly) zero-utility situation. So in order to really be an issue, I would have to be in a scenario where I am not allowed to kill myself or be re-frozen, etc. Being unable to kill myself in a net-negative-utility situation (I Have No Mouth, and I Must Scream) is the worst-case scenario, and it seems exceedingly unlikely (though I'm not sure how you could get decent bounds on it).

So my quick calculation would be something like: P("expected utility of living is sufficiently negative upon waking up") * P("I can't kill myself" | "expected utility of living is sufficiently negative upon waking up") = P("cryonics is not worth it" | "cryonics is successful").

It's difficult to justify not signing up for cryonics if you accept that it is likely to work in an acceptable form (this is a separate calculation). AFAICT there are many more foreseeable net positive or (roughly) zero utility outcomes than foreseeable net negative utility outcomes.
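For what it's worth, here is that product written out with placeholder numbers; the two input probabilities are invented for illustration and are not estimates from the comment.

```python
# Back-of-the-envelope version of the probability product above.
p_negative = 0.05   # P(expected utility of living is sufficiently negative on waking)
p_trapped  = 0.10   # P(I can't kill myself | utility is sufficiently negative)

# Both factors are implicitly conditional on cryonics being successful.
p_not_worth_it = p_negative * p_trapped
print(p_not_worth_it)   # 0.005 with these made-up inputs
```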

Comment author: Alicorn 30 May 2012 02:51:57AM 28 points [-]

I approve of the general goals behind this post. Affection is great! That said, it sounds kind of like it was written on ecstasy. And I'm not sure the exact approach will work generally. #3 in particular is a little badly worded: how far over one's limits is one expected to tolerate encroachment? How many times?

I think it makes sense to consider what we want to use from ask culture versus guess culture here. If I like and want to hug everyone at a gathering except one person, and that one person asks for a hug after I've hugged all the other people and deliberately not hugged them, that's gonna be awkward no matter what norms we have unless I have a reason like "you have sprouted venomous spines". But if someone I'm perfectly comfortable with longs longingly to pet my long hair, and doesn't ask, this is indeed a sadly missed gain. Because my hair is awesome.

Comment author: Zetetic 31 May 2012 03:34:25AM 1 point [-]

If I like and want to hug everyone at a gathering except one person, and that one person asks for a hug after I've hugged all the other people and deliberately not hugged them, that's gonna be awkward no matter what norms we have unless I have a reason like "you have sprouted venomous spines".

Out of curiosity, are there any particular behaviors you have encountered at a gathering (or worry you may encounter) that you find off-putting enough to make the hug an issue?

Comment author: thomblake 15 May 2012 04:59:04PM 1 point [-]

Aha, a relevant discussion was had on the list about a year ago, hereabouts.

We really ought to have a subreddit if people really want to talk about sl4/fai topics here. A different site on the same engine would be even better.

Comment author: Zetetic 15 May 2012 10:48:07PM 0 points [-]

I'm 100% for this. If there were such a site I would probably permanently relocate there.

Comment author: [deleted] 13 April 2012 11:48:23AM *  10 points [-]

I believe that the universe exists tautologically as a mathematical entity and that from the complete mathematical description of the universe every physical law can be derived, essentially erasing the distinction of map and territory. This is roughly akin to the Tegmark 4 hypothesis, and I have some very intuitively obvious arguments for it which I will post as a top-level article at some point. Virtual certainty (99.9%).

In response to comment by [deleted] on The Irrationality Game
Comment author: Zetetic 17 April 2012 12:39:44AM *  2 points [-]

essentially erasing the distinction of map and territory

This idea has been implied before and I don't think it holds water. That this has come up more than once makes me think that there is some tendency to conflate the map/territory distinction with some kind of more general philosophical statement, though I'm not sure what. In any event, the Tegmark level 4 hypothesis is orthogonal to the map/territory distinction. The map/territory distinction just provides a nice way of framing a problem we already know exists.

In more detail:

Firstly, even if you take some sort of Platonic view where we have access to all the math, you still have to properly calibrate your map to figure out what part of the territory you're in. In this case you could think of calibrating your map as applying an appropriate automorphism, so the map/territory distinction is not dissolved.

Second, the first view is wrong, because human brains do not contain or have access to anything approaching a complete mathematical description of the level 4 multiverse. At best a brain will contain a mapping of a very small part of the territory in pretty good detail, and also a relatively vague mapping that is much broader. Brains are not logically omniscient; even given a complete mathematical description of the universe, the derivations are not all going to be accessible to us.

So the map/territory distinction is not dissolved, and in particular you don't somehow overcome the mind projection fallacy, which is a practical (rather than philosophical) issue that cannot be explained away by adopting a shiny new ontological perspective.
