Comment author: knb 02 December 2011 09:00:46PM 5 points

This is a bad idea. Attempting to create personal relationships will just accelerate LW's degeneration into a typical internet hugbox. People will start supporting or opposing ideas based on whether they are "e-friends".

Comment author: Bongo 07 December 2011 08:26:11AM 1 point

This could be an option.

Comment author: gwern 04 December 2011 05:56:59PM *  5 points

I was musing on the old joke about anti-Occamian priors or anti-induction: 'why are they sure it's a good idea? Well, it's never worked before.' Obviously this is a bad idea for our kind of universe, but what kind of universe does it work in?

Well, in what sort of universe would every failure of X to appear in a given time interval make X that much more likely? It sounds vaguely like the hope function, but actually sounds more like an urn of balls from which you sample without replacement: every ball you pull (and discard) without finding X makes you a little more confident that the next one will be X. So, what kind of universe sees its possibilities shrink with every observation?
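
A minimal sketch of that urn intuition (illustrative only - the ten-ball urn and single X ball are my assumptions, not anything from the comment):

```python
# Sampling without replacement: every non-X draw raises P(next is X).

def p_next_is_x(total_balls, x_balls, non_x_drawn):
    """P(next ball is X) after non_x_drawn non-X balls have been discarded."""
    return x_balls / (total_balls - non_x_drawn)

total, x = 10, 1
for failures in range(total - x + 1):
    print(f"after {failures} failures: P(X next) = {p_next_is_x(total, x, failures):.3f}")
# 0.100, 0.111, 0.125, ... 0.500, 1.000 - repeated failure makes success more expected.
```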

For some reason, entropy came to mind. Our universe moves from low to high entropy, and we use induction. If a universe moved in the opposite direction, from high to low entropy, would its minds use anti-induction? (Minds seem like they'd be possible, if odd; our minds require a local lowering of entropy to operate in an environment of increasing entropy, so why not anti-minds which require a local raising of entropy to operate in an environment of decreasing entropy - somewhat analogous to the way ordinary, irreversible computers must expend energy to erase bits, per Landauer's principle.)

I have no idea if this makes any sense. (To go back to the urn model, I was thinking of it as a sort of cellular-automaton mental model where every turn the plane shrinks: if you are predicting a glider as opposed to a huge Turing machine, then as every turn passes and the plane shrinks, the less you would expect to see the Turing machine survive and the more you would expect to see a glider show up. Or if we were messing with geometry, it'd be as if we were given a heap of polygons with thousands of sides where every second a side was removed, and predicted a triangle - as the seconds pass, we don't see any triangles, but Real Soon Now... Or to put it another way, as entropy decreases, necessarily fewer and fewer arrangements show up; particular patterns get jettisoned as entropy shrinks, and so having observed a particular pattern once, you shouldn't expect it to sneak back in: if the whole universe freezes into one giant simple pattern, the anti-inductionist mind would be quite right to have expected almost none of its observations to repeat. Unlike our universe, where there seem to be ever more arrangements as things settle into thermal noise: if an arrangement shows up, we'll be seeing a lot of it around. Hence, we start with simple low-entropy predictions and decrease confidence.)

Boxo suggested that anti-induction might be formalizable as the opposite of Solomonoff induction, but I couldn't see how that'd work: if it simply picks the opposite of a maximizing AIXI and minimizes its score, then it's the same thing but with an inverse utility function.

The other thing was putting a different probability distribution over programs, one that increases with length. But while uniform distributions over the infinitely many integers are forbidden, and non-uniform decreasing distributions (like the speed prior, or exponentially decaying ones) are fine, it's not at all obvious what a non-uniform increasing distribution would look like - apparently it doesn't work to say 'infinite-length programs have p=0.5, then infinity-1 have p=0.25, then infinity-2 have p=0.125... then programs of length 1/0 have p=0'.

Comment author: Bongo 04 December 2011 06:06:24PM *  4 points

(An increasing probability distribution over the natural numbers is impossible: the sequence (P(1), P(2), ...) would have to 1) be increasing, 2) contain a nonzero element, and 3) sum to 1 - and no sequence can do all three.)
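
Spelled out a little more - a short LaTeX sketch of the standard argument:

```latex
% If (P(n)) is increasing and some P(k) = epsilon > 0, the tail alone
% already sums past any bound, so the whole series cannot sum to 1:
\[
P(k) = \varepsilon > 0 \ \text{and}\ P(n) \ge P(k) \ \text{for all } n \ge k
\;\Longrightarrow\;
\sum_{n=1}^{\infty} P(n) \;\ge\; \sum_{n=k}^{\infty} \varepsilon \;=\; \infty \;\neq\; 1.
\]
```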

Comment author: JoshuaZ 01 December 2011 06:39:38PM *  10 points

There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that at a glance seem to make rough syntactic sense actually has semantics behind it. A lot of theology and the bad ends of philosophy have this problem. Even math has run into this issue. Until limits were defined rigorously in the mid-19th century, there was disagreement over what the limit of 1 - 1 + 1 - 1 + 1 - 1 + 1... was. Is it 1, because one can group it as 1 + (-1 + 1) + (-1 + 1)...? Or maybe it is zero, since one can write it as (1 - 1) + (1 - 1) + (1 - 1)...? This did, however, lead to good math and other notions of limits, including the entire area of what would later be called Tauberian theorems.
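
(A quick illustrative sketch, not from the original comment: the partial sums of Grandi's series never settle, which is why the naive limit is ill-defined, while their running averages converge to 1/2 - the value that Cesàro summation, one of those later notions of limit, assigns to the series.)

```python
# Grandi's series 1 - 1 + 1 - 1 + ...: partial sums oscillate, Cesaro means converge.

terms = [(-1) ** n for n in range(20)]  # 1, -1, 1, -1, ...

partial_sums = []
running = 0
for t in terms:
    running += t
    partial_sums.append(running)

cesaro_means = [sum(partial_sums[:i + 1]) / (i + 1) for i in range(len(partial_sums))]

print(partial_sums)                         # [1, 0, 1, 0, ...] - no classical limit
print([round(m, 3) for m in cesaro_means])  # tends to 0.5, the Cesaro sum
```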

Comment author: Bongo 03 December 2011 08:03:35AM *  2 points

There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that looks at a glance to make rough syntactic sense that it actually has semantics behind it.

This sentence is so convoluted that at first I thought it was some kind of meta joke.

Comment author: Grognor 26 November 2011 08:31:52AM 16 points

Re: this image

Fucking brilliant.

Comment author: Bongo 29 November 2011 11:30:28PM 0 points

It's also another far-mode picture.

Comment author: [deleted] 16 November 2011 05:51:12PM 1 point

I have ten tabs open right now.

<making excuses>This is probably because of my habit of opening almost all links in a new tab (because it's easier to get back to where I came from), and because I can't be bothered to close a tab unless I really have too many open.</making excuses>

Comment author: Bongo 22 November 2011 01:59:37AM 0 points

73 tabs, 4 windows.

In response to Existential Risk
Comment author: Gedusa 15 November 2011 04:04:01PM 22 points

Whilst I really, really like the last picture - it seems a little odd to include it in the article.

Isn't this meant to seem like a hard-nosed introduction to non-transhumanist/sci-fi people? And doesn't the picture sort of act against that - by being slightly sci-fi and weird?

In response to comment by Gedusa on Existential Risk
Comment author: Bongo 16 November 2011 12:45:46PM 3 points

Also, I'd say both of those pictures seem to have the effect of inducing far mode.

Comment author: Bongo 26 October 2011 11:38:16AM 4 points

Given any problem, one should look at it, and pick the course that maximises one's expectation. ... what if my utility is non-linear

You're confusing expected outcome and expected utility. Nobody thinks you should maximize the utility of the expected outcome; rather you should maximize the expected utility of the outcome.

Let's now take another example: I am on Deal or No Deal, and there are three boxes left: $100000, $25000 and $.01. The banker has just given me a deal of $20000 (no doubt to much audience booing). Should I take that? Expected gains maximisation says certainly not!

Yes, and expected gains maximization, which nobody advocates, is stupid, unlike expected utility maximization, which will take into account the fact that your utility function is probably not linear in money.
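
To make the Deal or No Deal numbers concrete - a minimal sketch assuming logarithmic utility over total wealth and a hypothetical $1,000 baseline bankroll (both assumptions are illustrative, not anything the commenters specified):

```python
from math import log

boxes = [100_000, 25_000, 0.01]  # the three equally likely remaining amounts
deal = 20_000
baseline = 1_000                 # assumed pre-existing wealth (illustrative)

expected_gain = sum(boxes) / len(boxes)
eu_gamble = sum(log(baseline + b) for b in boxes) / len(boxes)
u_deal = log(baseline + deal)

print(f"expected gain of gamble: ${expected_gain:,.2f}")         # ~$41,666.67 > $20,000
print(f"E[u(gamble)] = {eu_gamble:.3f}, u(deal) = {u_deal:.3f}")  # ~9.532 vs ~9.952
# Expected-gains maximization rejects the deal; under this (assumed) concave
# utility, the sure $20,000 has the higher expected utility.
```

With a large enough baseline (say $10,000), the gamble's expected utility overtakes the sure $20,000 again - which is exactly the sense in which the answer depends on how non-linear your utility is in money.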

Comment author: Bongo 24 October 2011 06:23:37AM 2 points

Is there a video of the full lecture?

Comment author: timtyler 18 October 2011 01:45:48PM *  5 points

The paper gives what it describes as the “AGI Apocalypse Argument” - which ends with the following steps:

12. For almost any goals that the AGI would have, if those goals are pursued in a way that would yield an overwhelmingly large impact on the world, then this would result in a catastrophe for humans.

13. Therefore, if an AGI with almost any goals is invented, then there will be a catastrophe for humans.

14. If humans will invent an AGI soon, and if an AGI with almost any goals is invented then there will be a catastrophe for humans, then there will be an AGI catastrophe soon.

15. Therefore, there will be an AGI catastrophe soon.

It is hard to tell whether anyone took this seriously - but it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes. The “AGI Apocalypse Argument” as stated thus appears to be rather silly.

If the stated aim was: "to convince my students that all of us are going to be killed by an artificial intelligence" - why start with such a flawed argument?

Comment author: Bongo 19 October 2011 10:34:58PM *  3 points

it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes.

More obviously, an isomorphic argument 'proves' that books will be gibberish - since "almost any" string of characters is gibberish. What's needed is an additional argument: that non-gibberish books are very difficult to write, and that a naive attempt to write one will almost certainly fail on the first try. The analogous argument exists for AGI, of course, but is not given there.

Comment author: Gedusa 16 October 2011 12:55:33PM 1 point

Here maybe?

Comment author: Bongo 16 October 2011 09:35:33PM *  3 points

It was probably that, but note that that page is not concerned with minimizing killing but with minimizing the suffering-adjusted days of life that went into your food. (Which I think is a good idea; I've used that page's stats to choose my animal products for a year now.)
