
Comment author: VoiceOfRa 06 October 2015 01:05:51AM -1 points [-]

Even if so, you still have the locate-the-relevant-bit problem.

What part of "universe taken over by AGI" is causing your reading comprehension to fail?

It's also not clear to me that locating universes suitable for life within something like the Tegmark multiverse is low-komplexity.

You haven't played with cellular automata much, have you?

Could you specify how to tell, using a human brain, whether something is an agent?

Ask it.

Recall that for komplexity-measuring purposes, if you do so by means of language or something then the komplexity of that language is part of the cost you pay.

The cost of specifying a language is the cost of specifying the entity that can decode it, and we've already established that a universe-spanning AGI has low Kolmogorov complexity.

Comment author: gjm 06 October 2015 01:38:12AM 1 point [-]

What part of "universe taken over by AGI" is causing your reading comprehension to fail?

No part. I already explained why I don't think "universe taken over by AGI" implies "no need for lots of bits to locate what we need within the universe"; I really shouldn't have to do so again two comments downthread.

You haven't played with cellular automata much, have you?

Fair comment (though, as ever, needlessly obnoxiously expressed); I agree that there are low-komplexity things that surely contain powerful intelligences. But now take a step back and look at what you're arguing. I paraphrase thus: "A large instance of Conway's Life, seeded pseudorandomly, will surely end up taken over by a powerful AI. A powerful AI will be good at identifying agents and their preferences. Therefore the notions of agent and preference are low-komplexity." Is it not obvious that you're proving too much on the basis of too little here, and therefore that something must have gone wrong? I mean, if this argument worked it would appear to obliterate differences in komplexity between any two concepts we might care about, because our hypothetical super-powerful Life AI should also be good at identifying any other kind of pattern.
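
To make the conceded point concrete: the Life update rule really is tiny. A minimal illustrative sketch in Python (not from the thread; board size and tick count shrunk so it actually runs, though scaling them to the 10^100 / 10^200 figures discussed below costs barely any extra bits of program):

```python
from collections import Counter
import random

# Illustrative sketch: the full Conway's Life update rule in a few lines,
# showing how little description length ("komplexity") the rule costs.
def step(live):
    """live: set of (x, y) live cells; returns the next generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

random.seed(0)  # a pseudorandom seed is also only a few bits of program
board = {(random.randrange(50), random.randrange(50)) for _ in range(600)}
for _ in range(100):
    board = step(board)
```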

I've already indicated one important thing that I think has gone wrong: saying how to use whatever (doubtless terribly complicated) AI may emerge from running "Life" on a board of size 10^100 for 10^200 ticks to identify agents may require a great many bits. I think I see a number of other problems, but it's 2.30am local time so I'll leave you to look for them, if you choose to do so.

The cost of specifying a language is the cost of specifying the entity that can decode it

No. It is the cost of specifying that entity and indicating somehow that it is to decode that language rather than some other.
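
In symbols (an editorial gloss using standard conditional Kolmogorov complexity; the notation is not from the thread), the komplexity being argued over decomposes roughly as

```latex
K(\text{agent-detector}) \;\le\;
    \underbrace{K(\text{AGI})}_{\text{low, per VoiceOfRa}}
  \;+\; \underbrace{K(\text{query} \mid \text{AGI})}_{\text{gjm: not obviously low}}
  \;+\; O(1)
```

VoiceOfRa's argument bounds the first term; gjm's objection is that the second term, the bits needed to tell the emergent entity which language to decode and which question to answer, has not been bounded at all.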

Let's make this a little more concrete. You are claiming that the likely emergence of universe-spanning AGIs able to detect agency means that the notion of "agent" has low komplexity. Could you please sketch what a short program for identifying agents would look like? I gather that it begins with something like "Make a size-10^100 Life instance, seeded according to such-and-such a rule, and run it for 10^200 ticks", which I agree is low-komplexity. But then what? How, in genuinely low-komplexity terms, are you then going to query this thing so as to identify agents in our universe?

I am not expecting you to actually write the program, of course. But you seem sure that it can be done and doesn't need many bits, so you surely ought to be able to outline how it would work in general terms, without any points where you have to say "and then a miracle happens".
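
For reference, here is how far the low-komplexity part of such a sketch gets before the gap appears (a hypothetical outline, an editorial illustration rather than VoiceOfRa's actual claim; it reuses the `step` rule sketched above and deliberately stops where the bits go missing):

```python
def identify_agents(universe_description: bytes):
    """Hypothetical outline of the disputed program.

    Steps 1-2 are genuinely cheap to specify; the dispute upthread is
    entirely about whether step 3 can also be written down in few bits.
    """
    import random
    random.seed(0)
    board = {(random.randrange(50), random.randrange(50))
             for _ in range(600)}        # step 1: seed the board (cheap)
    for _ in range(100):                 # step 2: run it (cheap)
        board = step(board)              # Life rule as sketched above
    # Step 3: locate whatever AI has emerged on the board, encode the
    # question "which parts of universe_description are agents, and what
    # do they prefer?", and decode its answer. This is the step gjm argues
    # has no obvious low-komplexity implementation.
    raise NotImplementedError("and then a miracle happens")
```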

Comment author: Lumifer 05 October 2015 04:41:18PM *  1 point [-]

there's a very real chance that they will do that

Define "very real". I don't think it's a serious threat -- in such situations a stern talking-to from a doctor is usually more than sufficient. To stick to one's guns in the face of opposition from the mainstream and the authority figures (like doctors) requires considerably more arrogance and intestinal fortitude than most people have. Fanatics, thankfully, are rare.

Comment author: gjm 05 October 2015 06:18:12PM 0 points [-]

I hope you're right.

Comment author: Lumifer 05 October 2015 03:02:42PM 2 points [-]

should I do anything about it?

Are there any practical consequences of these beliefs? As long as they are not telling cancer patients to skip the therapy and think happy, I don't see any harm. Trying to fix other people's beliefs just because you don't like them seems to be... not a terribly productive thing to do.

that also means that I'll be standing idly by and allowing bullshit to propagate

Have you looked at a TV screen recently...?

Comment author: gjm 05 October 2015 04:32:53PM 0 points [-]

As long as they are not telling cancer patients to skip the therapy

If they really believe that that's the best thing for cancer patients to do, then there's a very real chance that they will do that (or, if the cancer is their own, just skip the therapy themselves). There may be value in trying to improve their thinking in advance, because once they or someone close to them actually has cancer it may be too late. (Because people don't usually make radical changes in their thinking quickly.)

Whether that outweighs the other factors here, I don't know. Especially given how reluctantly people change their minds.

Comment author: Lumifer 05 October 2015 03:41:44PM 1 point [-]

A new (for me) word: mathiness.

The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.

Comment author: gjm 05 October 2015 04:29:02PM -1 points [-]

It's maybe worth saying that the term is clearly based on "truthiness".

Comment author: Transfuturist 05 October 2015 03:05:30PM -1 points [-]

Krusty's Komplexity Kalkulator!

Comment author: gjm 05 October 2015 04:26:17PM 2 points [-]

Kolmogorov's, which is of course the actual reason for my initial "k"s.

Comment author: CCC 05 October 2015 09:31:29AM *  0 points [-]

Does that mean the bible which assumes that God wiped out most of humanity with the flood is definitely wrong?

a) The existence of an afterlife would mean that those people were not destroyed. They had a really bad day and then woke up someplace else.

b) The story of the flood, in itself, may be a parable (by which I mean, a story intended to teach a lesson, usually of a moral or ethical nature, without necessarily being true) like the parable of the Good Samaritan, or the story of the Garden of Eden.

c) There may have been reason for the flood.

Any one of these alternatives could answer your question; personally, I think (b) is the most likely, though (a) and (c) are also possible.

Comment author: gjm 05 October 2015 01:39:14PM -1 points [-]

What exactly do you mean by option (b)?

  • That whoever originally wrote that story intended it to be understood as fiction with a moral, rather than as truth?
  • That it may have originated as (alleged) history, but whoever incorporated it into the documents that became Jewish and Christian scriptures did so with the intention that it should be understood as fiction with a moral?
  • That whoever wrote it may have intended it to be seriously believed, but God arranged for it to land up in the Jewish and Christian scriptures with the intention that it should be treated as fiction with a moral?
  • That it doesn't really matter why it was written or how it got into the scriptures, but nowadays it should be understood as fiction?
  • Something else?

It seems to me that the first three of these imply a certain degree of incompetence on the part of the writers, editors, or god concerned, given how widely the story has been treated as history since its incorporation into scripture.

The fourth is fair enough, but it seems to me that (what I take to be) ChristianKl's inference "the bible contains this story, which is not true, so we should reduce our general confidence in what the bible says" is then reasonable (and indeed the decision to understand as fiction something in the bible that wasn't originally intended that way amounts to conceding that point).

Of course if the fifth option is right then all of the above may be moot.

Comment author: CCC 05 October 2015 09:25:59AM 0 points [-]

...what does the bacterial flagellum have to do with anything? I think I am missing some important context here.

But the problem is giving good enough reasons for accepting that in a particular case. "It looks like it couldn't have evolved," or "It looks like it didn't have human sources" are not good enough.

Well, the simplest argument for accepting some revelations would be that events unknown and unknowable at the time of the revelation were later shown to be true (for example, predicting the time and place of a volcanic eruption or other natural disaster).

Comment author: gjm 05 October 2015 01:32:33PM 0 points [-]

"The" bacterial flagellum (actually there are different kinds and I think only one kind is relevant here) was a leading example used by proponents of "intelligent design", who claimed it was a complex system that couldn't possibly have evolved incrementally.

Comment author: CCC 05 October 2015 09:38:50AM 1 point [-]

probably the closest thing to what you are looking for is Raising the Sanity Waterline which lists the ideas that ought to make discarding religions into one of the low-hanging fruits of any attempt at upgrading one's rationality.

The thing is, if it really was such a low-hanging fruit, then it would seem likely that the most successful scientists would have done so already (there's a lot in rationality that makes one good at science). Since the same article points out the existence of Nobel laureates who are religious in one way or another, I think it is not nearly as obvious a matter as the article suggests...

Comment author: gjm 05 October 2015 01:30:41PM -1 points [-]

Religious belief is apparently much less common

  • among scientists than in the general population
  • among very successful scientists than among scientists generally

especially if one defines "religious belief" in a way that makes it have actual consequences for the observable world (e.g., a god who actually affects what happens in the world rather than just winding it up and then leaving it alone).

See e.g. this summary of the results of asking scientists about their beliefs and the letter to Nature that the summary is mostly about. (Note: there's some scope for debate about the interpretation of these results, though I find the arguments at the far end of that link extremely unconvincing.)

Comment author: Jacobian 03 October 2015 04:41:13PM *  4 points [-]

Spent about 20 minutes playing online; I have some technical notes and general impressions.

Technical (skip this if you're not Jimrandomh):

  • The timer feels way too long, especially as people get to know the cards better and don't have to read all of them.
  • When choosing card pairs they are displayed in long rows, so for 3 people someone's first and second cards are on different rows. That's very unintuitive. Maybe put the pairs in separate columns?
  • When judging, seeing the timing of the cards coming out can skew the judgement, and also makes it easy to guess which card is the control.
  • The website works smoothly, well done!

Here are my main takeaways:

  • The cards are excellent: a lot of them are either very funny or do a good job of explaining things quickly. For some, it's hard to tell which :)
  • Unfortunately, the jokes that happen during play itself aren't funny at all compared to the cards. A lot of the time there isn't a single card that will give a "funny" answer; am I supposed to choose the logically appropriate one instead, then? I wonder if I'd be more likely to buy the best cards as a poster than as a card game.

I'm going to try and invite some non-LW friends to play and see if they like it or run away screaming in confusion.

Comment author: gjm 05 October 2015 01:09:34PM 0 points [-]

I've played a couple of games. I basically agree with all of that. On the last point: It felt like I had a genuinely funny card to play maybe 20% of the time (maybe less), and a (at-least-semi-)seriously appropriate one maybe 30% of the time or so.

Comment author: TheAncientGeek 05 October 2015 09:54:38AM 2 points [-]

I'm very pessimistic about the prospects for defining "good" in abstract game-theoretic terms with enough precision to carry out any project like this. You'd need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions, and to identify what their preferences are, and to identify what counts as a move in each game, and so forth. That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise. Could you explain why?

So it would be difficult for a finite being that is figuring out some facts that it doesn't already know on the basis of other facts that it does know. Now... how about an omniscient being?

Comment author: gjm 05 October 2015 01:03:19PM -1 points [-]

I think you may be misunderstanding what the relevance of the "difficulty" is here.

The context is the following question:

  • If we are comparing explanations for the universe on the basis of hypothesis-complexity (e.g., because we are using something like a Solomonoff prior), what complexity should we estimate for notions like "good"?

If some notion like "perfectly benevolent being of unlimited power" turns out to have very low complexity, so much the better for theistic explanations of the universe. If it turns out to have very high complexity, so much the worse for such explanations.

(Of course that isn't the only relevant question. We also need to estimate how likely a universe like ours is on any given hypothesis. But right now it's the complexity we're looking at.)
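
Concretely (a standard gloss of the Solomonoff setup, not a formula from the thread): a hypothesis $H$ whose shortest program is $K(H)$ bits long gets prior weight roughly

```latex
P(H) \;\propto\; 2^{-K(H)}
```

so every extra bit of program needed to pin down notions like "agent" and "preference" halves the prior probability of the hypothesis that uses them, relative to bare physics.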

In answering this question, it's completely irrelevant how good some hypothetical omniscient being might be at figuring out what parts of the world count as "agents" and what their preferences are and so on, even though ultimately hypothetical omniscient beings are what we're interested in. The atheistic argument here isn't "It's unlikely that the world was created by a god who wants to satisfy the preferences of agents in it, because identifying those agents and their preferences would be really difficult even for a god" (to which your question would be an entirely appropriate rejoinder). It's something quite different: "It's not a good explanation for the universe to say that it was created by a god who wants to satisfy the preferences of agents in it, because that's a very complex hypothesis, because the notions of 'agent' and 'preferences' don't correspond to simple computer programs".

(Of course this argument will only be convincing to someone who is on board with the general project of assessing hypotheses according to their complexity as defined in terms of computer programs or something roughly equivalent, and who agrees with the claim that human-level notions like 'agent' and 'preference' are much harder to write programs for than physics-level ones like 'electron'. Actually formalizing all this stuff seems like a very big challenge, but I remark that in principle -- if execution time and computer memory are no object -- we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.)
