Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: paper-machine 30 August 2014 03:49:26PM 0 points

Define "independently."

Comment author: paper-machine 30 August 2014 03:46:19PM 0 points

You're being uncharitable. "[It's] likely [that X]" doesn't exclude the possibility of non-X.

If you know nothing about a probability distribution, it is more likely to have a single absolute maximum than several.

Comment author: army1987 30 August 2014 03:21:05PM 1 point

To whoever voted for “Multi-cell life unlikely”: Multicellularity has evolved independently at least 46 times.

Comment author: army1987 30 August 2014 02:47:53PM 0 points

My personal guess would be that the great filter isn't a filter at all, but a great scatterer, where different types of optimizers do not recognize each other as such, because their goals and appearances are so widely different, and they are sparse in the vast space of possibilities.

See James Miller here. Sure, the space of possible value systems is vast, but I doubt that much less than (say) 0.1% of them would lead agents to try and take over the future light cone, so this could at most explain a small fraction (logarithmically) of the filter.
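A back-of-the-envelope sketch of the point above, taking the comment's hedged 0.1% figure as an assumption: even if only one value system in a thousand leads agents to expand, that accounts for just three orders of magnitude of a filter spanning dozens.

```python
import math

# Hedged lower bound from the comment: at least ~0.1% of possible value
# systems lead an agent to try to take over its future light cone.
fraction_expanding = 1e-3

# Orders of magnitude of filtering this could explain at most:
orders_of_magnitude = math.log10(1 / fraction_expanding)
print(orders_of_magnitude)  # 3.0 -- small next to a filter of dozens of orders
```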

Comment author: peter_hurford 30 August 2014 02:43:56PM 0 points

From the article:

The real filter could be a combination of an early one and a late one, of course. But, unless the factors are exquisitely well-balanced, it's likely that there is one location in civilizational development where most of the filter lies (i.e. where the probability of getting to the next stage is the lowest).

That doesn't sound like it admits the possibility of twelve independent, roughly equally balanced filters.

Comment author: army1987 30 August 2014 02:43:11PM 0 points

Why? Having dabbled a bit in evolutionary simulations, I find that, once you have unicellular organisms, the emergence of cooperation between them is only a matter of time, and from there multicellulars form and cell specialization based on division of labor begins. Once you have a dedicated organism-wide communication subsystem, why would it be unlikely for a centralized command structure to evolve?

On Earth multicellularity arose independently several dozen times but AFAIK only animals have anything like a central nervous system.

Comment author: army1987 30 August 2014 02:36:58PM 0 points

From the OP:

The real filter could be a combination of an early one and a late one, of course. But, unless the factors are exquisitely well-balanced, it's likely that there is one location in civilizational development where most of the filter lies (i.e. where the probability of getting to the next stage is the lowest).

That's very non-obvious to me; I can't see why there couldn't be (say) a step with probability around 1e-12, one around 1e-7, one around 1e-5, one around 1e-2, and a dozen with joint probability around 1e-1, so that no one step comprises the majority (logarithmically) of the filter.
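The hypothetical numbers in the comment above can be checked directly: measuring each step's share of the filter in orders of magnitude, the biggest step falls short of a logarithmic majority.

```python
import math

# Hypothetical filter steps from the comment; the "dozen" small steps are
# lumped together as their joint probability of 1e-1.
steps = [1e-12, 1e-7, 1e-5, 1e-2, 1e-1]

log_costs = [-math.log10(p) for p in steps]  # [12, 7, 5, 2, 1]
total = sum(log_costs)                       # 27 orders of magnitude in all
largest_share = max(log_costs) / total       # 12/27, about 44%

print(round(largest_share, 3))  # under 0.5: no single step dominates
```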

In response to comment by [deleted] on Roles are Martial Arts for Agency
Comment author: Jurily 30 August 2014 02:14:47PM 0 points

I was only talking about the timeframe right after kicking down the door, when you really can't afford any delays in decision-making, but there is only a very limited set of options you need to choose from. It's the training that gives you the options and the means to choose; you don't think them up on the spot.

In particular, they don't just stand there and think about the ethics of yelling at possibly innocent people while they may still be armed.

Similarly, in the car crash, there is no separate Proper and Fast reaction, because if the Fast one is not Proper, you're dead, end of story. You either make the split-second decision that saves your life or you don't, and whatever you think about afterwards is the result of already having made the correct choice.

By the time you get into a situation, you should already have a decent working model of your car's controls and capabilities at your current speed, the road, your surroundings, etc. (a.k.a. a driver's licence), so the fast path can reduce the problem to a small set of valid options and their exact execution instantly. What I'm willing to accept as Proper in this context is the act of learning to drive to the point where you don't have to signal the other drivers that you're still learning.

It seems pointless to me to ponder proper ratios; your brain will abandon the illusion of conscious control when it deems it necessary. No amount of conscious thought will let you ponder the many-worlds interpretation in freefall. I'm not quite sure what you mean by your heartbeat running in Execute mode - can you control it at will?

Comment author: ChristianKl 30 August 2014 02:10:16PM 0 points

Pareto principle.

There's also the Fermi formula.

Comment author: Sophronius 30 August 2014 01:18:50PM 2 points

Maybe not explicitly, but I keep seeing people refer to "the great filter" as if it were a single thing. But maybe you're right and I'm reading too much into this.

Comment author: paper-machine 30 August 2014 12:41:57PM 1 point

I don't think anyone really assumes that.

Comment author: Sophronius 30 August 2014 11:57:07AM 3 points

Can somebody explain to me why people generally assume that the great filter has a single cause? My gut says it's most likely a dozen one-in-a-million chances that all have to turn out just right for intelligent life to colonize the universe, so the total chance would be (1/1,000,000)^12. Yet everyone talks of a single 'great filter' and I don't get why.
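The arithmetic in the comment above multiplies out to a single tiny number, which is perhaps why the steps get lumped together as one "filter":

```python
# A dozen independent one-in-a-million steps, as in the comment:
p_step = 1e-6
n_steps = 12

# Independent probabilities multiply:
p_total = p_step ** n_steps
print(p_total)  # ~1e-72
```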

Comment author: [deleted] 30 August 2014 11:39:29AM -1 points

Just got off a phone call with my uncle, who is a SWAT officer (well, the local equivalent of one). He says they're trained to run in two modes - decision-making and execution - and to switch between the two routinely during any particular live-action scenario. He added, quote (paraphrased slightly because of translation): "those that run all the time in execute mode during live-action are morons that get people killed".

In the car-crash scenario, reacting Fast only buys you time to react Properly, but indeed, it's what is needed. So I'm really moving the problem scope to other questions:

- Where is the line between Executor and Director?
- Should Executor and Director run in parallel or in series?
- What is the optimal ratio of the two in a particular situation?

There's no realistic situation where one should completely overtake the other. No, not even my own heartbeat is allowed to run in Execute mode all the time.

As a side note, the newest RoboCop franchise installment ventures to some depth (questionably) into this exact topic, and concludes that it's best to run both modes at the same time, the Decision-Maker being a slow tweaker of the Executor, detached for the most part: an overseer with only slight control.

In response to comment by [deleted] on Roles are Martial Arts for Agency
Comment author: Jurily 30 August 2014 11:14:17AM 0 points

Thinking about what to do is an action in itself. If you pause to think whether to brake or steer left to avoid a crash, you're not doing either. If a SWAT officer pauses to think during the part of a raid when the most important decisions happen, people get shot.

Most optimal algorithms do not involve questioning their own validity. There are times when you design and optimize, and there are times when you execute. Downtime is only useful when you're not up.

In response to comment by billswift on GAZP vs. GLUT
Comment author: emhs 30 August 2014 10:42:59AM 0 points

The lookup algorithms in question are not processing the meaning of the inputs and generating a response as needed. The lookup algorithms simply string-match the conversational history to the list of inputs and output the next line in the conversation.

An algorithmic reasoning system, on the other hand, would seem to be something that actually reasons about the meaning of what's been said, in the sense of logically processing the input as opposed to string-matching it.

Comment author: [deleted] 30 August 2014 08:41:53AM 0 points

There is a particular danger in "grokking" quick-response: it's extremely hard to self-evaluate that you're doing it wrong, and it takes a lot of time to unlearn a particular habit. I attest to this as a professional musician and an aficionado of advanced modal interfaces.

Also, I doubt the optimality of "eliminating your slow, conscious deliberation" in any non-synthetic scenario (unlike most Martial Arts contests, and most Martial Arts in general). There's a reason Law Enforcers do not act as Martial Artists: they draw a fine line between deliberating consciously and acting on their motor training, and run each scenario through a specific rule set.

I'd rather see this article using Law Enforcement instead of Martial Arts as an analogy. It would ground it in reality more thoroughly.

Comment author: Gunnar_Zarncke 30 August 2014 08:01:45AM 0 points

I took the list from Wikipedia (except for the added differentiation after "now").

Comment author: Stuart_Armstrong 30 August 2014 07:47:39AM 0 points

What about central nervous systems?

Comment author: paulfchristiano 30 August 2014 04:06:46AM 0 points

This point seems like an argument in favor of the relevance of the problem laid out in this post. I have other complaints about this framing of the problem, which I expect you would share.

The key distinction between this and contemporary AI is not self-modification, but wanting to have the kind of agent which can look at itself and say, "I know that as new evidence comes in I will change my beliefs. Fortunately, it looks like I'm going to make better decisions as a result" or perhaps even more optimistically "But it looks like I'm not changing them in quite the right way, and I should make this slight change."

The usual route is to build agents which don't reason about their own evolution over time. But for sufficiently sophisticated agents, I would expect them to have some understanding of how they will behave in the future, and to e.g. pursue more information based on the explicit belief that by acquiring that information they will enable themselves to make better decisions. This seems like it is a more robust approach to getting the "right" behavior than having an agent which e.g. takes "Information is good" as a brute fact or has a rule for action that bakes in an ad hoc approach to estimating VOI. I think we can all agree that it would not be good to build an AI which calculated the right thing to do, and then did that with probability 99% and took a random action with probability 1%.
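The kind of agent the last sentence warns against is easy to sketch; the names here are hypothetical, purely for illustration:

```python
import random

def epsilon_random_action(best_action, all_actions, epsilon=0.01):
    """The policy the comment cautions against: do the computed-best
    thing 99% of the time, and something arbitrary the remaining 1%,
    as a brute fact of the agent's nature rather than a reasoned choice."""
    if random.random() < epsilon:
        return random.choice(all_actions)
    return best_action
```

The point being that such baked-in randomness (or, analogously, a baked-in VOI heuristic) is not something the agent believes in; it is just something the agent does.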

That said, even if you are a very sophisticated reasoner, having in hand some heuristics about VOI is likely to be helpful, and if you think that those heuristics are effective you may continue to use them. I just hope that you are using them because you believe they work (e.g. because of empirical observations of them working, the belief that you were intelligently designed to make good decisions, or whatever), not because they are built into your nature.

Comment author: Azathoth123 30 August 2014 03:34:13AM 2 points

If the Machiavellian Intelligence Hypothesis is the correct explanation for the runaway explosion of human intellect - that we got smarter in order to outcompete each other for status, not in order to survive - then solitary species like the octopus would simply never experience the selection pressure needed to push them up to human level.

Octopuses also have the feature that they die after mating (it's unclear why this evolved). This makes it impossible for them to develop a culture that they can pass on to their children.
