
Comment author: IlyaShpitser 28 November 2017 01:21:04PM *  2 points [-]

Hi, Hume's constant conjunction stuff I think has nothing to do with free lunch theorems in ML (?please correct me if I am missing something?), and has to do with defining causation, an issue Hume was worried about all his life (and ultimately solved, imo, via his counterfactual definition of causality that we all use today, by way of Neyman, Rubin, Pearl, etc.).

Comment author: JenniferRM 28 November 2017 11:30:08PM *  2 points [-]

My read on the state of public academic philosophy is that there are many specific and potentially-but-not-obviously-related issues that come up in the general topic of "foundations of inference". There are many angles of attack, and many researchers over the years. Many of them are no longer based out of official academic "philosophy departments", and this is not necessarily a tragedy ;-)

The general issue is "why does 'thinking' seem to work at all ever?" This can be expressed in terms of logic, or probabilistic reasoning, or sorting, or compression, or computability, or theorem decidability, or P vs NP, or oracles of various kinds, or the possibility of language acquisition, and/or why (or why not) running basic plug-and-chug statistical procedures during data processing seems to (maybe) work in the "social sciences".

Arguably, these all share a conceptual unity, and might eventually be formally unified by a single overarching theory that they are all specialized versions of.

From existing work we know that lossless compression algorithms have actual uses in real life, and it certainly seems as though mathematicians make real progress over time, up to and including Chaitin himself!

However, when people try to build up "first principles explanations" of how "good thinking" works at all, they often derive generalized impossibility results once they quantify over naive formulations of "all possible theories" or "all possible inputs".

So in most cases we almost certainly experience a "lucky fit" of some kind between various clearly productive thinking approaches and various practical restrictions on the kinds of input these approaches typically face.
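To make that "lucky fit" concrete with a toy example of my own (nothing here is from the thread; zlib is just a convenient off-the-shelf lossless compressor): by a simple pigeonhole counting argument no general-purpose lossless compressor can shrink all possible inputs, yet the same compressor reliably shrinks the kind of structured input it actually meets in practice.

    import os
    import zlib

    # Structured, redundant data compresses well; uniform random bytes do not,
    # and no tweak to the compressor can fix that for *all* inputs at once.
    structured = b"the quick brown fox jumps over the lazy dog " * 200
    noise = os.urandom(len(structured))  # incompressible with overwhelming probability

    for name, data in [("structured text", structured), ("raw noise", noise)]:
        ratio = len(zlib.compress(data)) / len(data)
        print(f"{name}: compressed to {ratio:.2%} of the original size")

The compressor "wins" only because its inputs are usually drawn from the tiny structured corner of the space of all possible byte strings, which is the same shape of luck being pointed at above.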

Generative adversarial techniques in machine learning, and MIRI's own Garrabrant Inductor are probably relevant here because they start to spell out formal models where a reasoning process of some measurable strength is pitted against inputs produced by a process that is somewhat hostile but clearly weaker.

Hume functions in my mind as a sort of memetic LUCA for this vast field of research, which is fundamentally motivated by the core idea that thinking correctly about raw noise is formally impossible, and yet we seem to be pretty decent at some kinds of thinking, and so there must be some kind of fit between various methods of thinking and the things that these thinking techniques seem to work on.

Also thanks! The Neyman-Pearson lemma has come up for me in practical professional situations before, but I'd never pushed deeper into recognizing Jerzy Neyman as yet another player in this game :-)

Comment author: curi 28 November 2017 05:45:03PM 0 points [-]

You are requesting I write new material for you because you dislike my links to websites with thousands of free essays, because you find them too commercial, and you don't want to read books. Why should I do this for you? Do you think you have any value to offer me, and if so what?

Comment author: JenniferRM 28 November 2017 10:39:11PM *  6 points [-]

Fundamentally, the things I offer you are respect, the more effective pursuit of truth, and a chance to help our species not go extinct, all of which I imagine you want (or think you want), because out of all the places on the Internet you are here.

If I'm wrong and you do NOT want respect, truth, and a slightly increased chance of long term survival, please let me know!

One of my real puzzles here is that I find it hard to impute a coherent, effective, transparent, and egosyntonic set of goals to you here and now.

Personally, I'd be selfishly just as happy if, instead of writing all new material, you just stopped posting and commenting here, and stopped sending "public letters" to MIRI (an organization I've donated to because I think they have limited resources and are doing good work).

I don't dislike books in general. I don't dislike commercialism in general. I dislike your drama, and your shallow citation filled posts showing up in this particular venue.

Basically I think you are sort of polluting this space with low quality communication acts, and that is probably my central beef with you here and now. There's lots of ways to fix this... you writing better stuff... you writing less stuff that is full of abstractions that ground themselves only in links to your own vanity website or specific (probably low value) books... you just leaving... etc...

If you want to, then you can write all new material that is actually relevant and good, to accomplish your own goals more effectively, but I probably won't read it if it is not in one of the few streams of push media I allow into my reading queue (like this website).

At this point it seems your primary claim (about having a useful research angle involving problems of induction) is off the table. I think in a conversation about that I would be teaching and you'd be learning, and I don't have much more time to teach you things about induction over and beyond the keywords and links to reputable third parties that I've already provided in this interaction, in an act of good faith.

More abstractly, I altruistically hope for you to feel a sense of realization at the fact that your behavior strongly overlaps with that of a spammer (or perhaps a narcissist or perhaps any of several less savory types of people) rather than an honest interlocutor.

After realizing this, you could stop linking to your personal website, and you could stop being beset on all sides by troubling criticisms, and you could begin to write about object level concerns and thereby start having better conversations here.

If you can learn how to have a good dialogue rather than behaving like a confused link farm spammer over and over again (apparently "a million times" so far) that might be good for you?

(If I learned that I was acting in a manner that caused people to confuse me with an anti-social link farm spammer, I'd want people to let me know. Hearing people honestly attribute this motive to me would cause me worry about my ego structure, and its possible defects, and I think I'd be grateful for people's honest corrective input here if it wasn't explained in an insulting tone.)

You could start to learn things and maybe teach things, in a friendly and mutually rewarding search for answers to various personally urgent questions. Not as part of some crazy status thing nor as a desperate hunt for customers for a "philosophic consulting" business...

If you become less confused over time, then a few months or years from now (assuming that neither DeepMind nor OpenAI have a world destroying industrial accident in the meantime) you could pitch in on the pro-social world saving stuff.

Presumably the world is a place that you live, and presumably you believe you can make a positive contribution to the general project of making sure everyone in the world is NOT eventually ground up as fuel paste for robots? (Otherwise why even be here?)

And if you don't want to buy awesomely cheap altruism points, and you don't want friends, and you don't want the respect of me or anyone here, and you don't think we have anything to teach you, and you don't want to actually help us learn anything in ways that are consistent with our relatively optimized research workflows, then go away!

If that's the real situation, then by going away you'll get more of what you want and so will we :-)

If all you want is (for example) eyeballs for your website, then go buy some. They're pretty cheap. Often less than a dollar!

Have you considered the possibility that your efforts are better spent buying eyeballs rather than using low grade philosophical trolling to trick people into following links to your vanity website?

Presumably you can look at the logs of your web pages. That data is available to you. How many new unique viewers have you gotten since you started seriously trolling here, and how many hours have you spent on this outreach effort? Is this really a good use of your hours?

What do you actually want, and why, and how do you imagine that spamming LW with drama and links to your vanity website will get you what you want?

Comment author: Stuart_Armstrong 27 November 2017 11:43:14AM 1 point [-]

I'm sorry, I have trouble following long posts like that. Would you mind presenting your main points in smaller, shorter posts? I think it would also make debate/conversation easier.

Comment author: JenniferRM 28 November 2017 07:20:05AM 1 point [-]

I'll try to organize the basic thought more cleanly, and will comment here again with a link to the better version when it is ready :-)

Comment author: curi 25 November 2017 11:25:44PM *  0 points [-]

"My hope is that you can dereference your pointers and bring all the ideas and arguments into a single document,"

there already exist documents of a variety of lengths, both collections and single. you're coming into the middle of a discussion and seemingly haven't read much of it and haven't asked for specifically what you want. and then, with almost no knowledge of my intellectual history, accomplishments, works, etc, things-already-tried, etc, you try to give me standard advice that i've heard a million times before. that would be ok as a starting point if it were only the starting point, but i fear it's going to more or less be the ending point too.

it sounds like you want me to rewrite material from DD and KP's books? http://fallibleideas.com/books#deutsch Why would me rewriting the same things get a different outcome than the existing literature? what is the purpose?

and how do you expect me to write a one-size-fits-all document when LW has no canonical positions written out – everyone just has their own different ideas?

and why are zero people at LW familiar enough to answer well known literature in their field. fine if you aren't an expert, but why does this community seem to have no experts who can speak to these issues without first requesting summary documents of the books they don't want to read?

what knowledge do you have? what are you looking for in talking with me? what values are you seeking and offering?

"(thereby tricking people into reading something piece by piece that they might have skipped if they saw how big it was going to be up front"

dishonesty is counter-productive and self-destructive. if you wish to change my mind about this, you'll have to address Objectivism and a few other things.

"and then after 10 chapters like this it will turn out that you're a genius and everyone else was wrong and by teaching people to think good you'll have saved the world."

i've made things multiple times. here's one:

http://fallibleideas.com

there are difficulties such as people not wanting to think, learn, or truth-seek – especially when some of their biases are challenged. it's hard to tell people about ideas this different than what they're used to.

one basically can't teach people who don't want to learn something. creating more material won't change that. there are hard problems here. you could learn philosophy and help, or learn philosophy and disagree (which would be helpful), or opt out of addressing issues that require a lot of knowledge and then try to do a half-understood version of one of the more popular/prestigious (rather than correct) philosophies. but you can't get away from philosophical issues – like how to think – being a part of your life. nevertheless most people try to and philosophy is a very neglected field. such is the world; that isn't an argument that any particular idea is false.

"Or maybe you are much much smarter and better read than me, so all your jargon makes sense to you and I'm just too ignorant to parse it."

supposing hypothetically that that's the case: then what next?

Comment author: JenniferRM 28 November 2017 06:57:28AM *  7 points [-]

I think there are two big facts here.

ONE: You're posting over and over again with lots of links to your websites, which are places you offer consulting services, and so it kinda seems like you're maybe just a weirdly inefficient spammer for bespoke nerd consulting.

This makes almost everything you post here seem like it might all just be an excuse for you to make dramatic noise in the hopes of the noise leading somehow to getting eyeballs on your website, and then, I don't even know... consulting gigs or something?

This interpretation would seem less salient if you were trying to add value here in some sort of pro-social way, but you don't seem to be doing that so... so basically everything you write here I take with a giant grain of salt.

My hope is that you are just missing some basic insight, and once you learn why you seem to be half-malicious you will stop defecting in the communication game and become valuable :-)

TWO: From what you write here at an object level, you don't even seem to have a clear and succinct understanding of any of the things that have been called a "problem of induction" over the years, which is your major beef, from what I can see.

You've mentioned Popper... but not Hume, or Nelson Goodman? You've never mentioned "grue" or "bleen" that I've seen, so I'm assuming it is the Humean critique of induction that you're trying to gesture towards rather than the much more interesting arguments of Goodman...

But from a software engineering perspective Hume's argument against induction is about as much barrier to me being able to think clearly or build smart software as Zeno's paradox is a barrier to me being able to walk around on my feet or fix a bicycle.

Also, it looks like you haven't mentioned David Wolpert and his work in the area of no free lunch theorems. Nor have you brought up any of the machine vision results or word vector results that are plausibly relevant to these issues. My hypothesis here is that you just don't know about these things.

(Also, notice that I'm giving links to sites that are not my own? This is part of how the LW community can see that I'm not a self-promoting spammer.)

Basically, I don't really care about reading the original writings of Karl Popper right now. I think he was cool, but the only use I would expect to get from him right now would be to read him backwards in order to more deeply appreciate how dumb people used to be back when his content was perhaps a useful antidote to widespread misunderstandings of how to think clearly.

Let me spell this out very simply to address rather directly your question of communication pragmatics...

"It sounds like you want me to rewrite material from DD and KP's books? Why would me rewriting the same things get a different outcome than the existing literature?"

The key difference is that Karl Popper is not spamming this forum. His texts are somewhere else, not bothering us at all. Maybe they are relevant. My personal assessment is currently that they have relatively little import to active and urgent research issues.

If you displayed the ability to summarize thinkers that maybe not everyone has read, and explain that thinker's relevance to the community's topics of interests, that would be pro-social and helpful.

The longer the second fact (where you seem to not know what you're talking about or care about the valuable time of your readers) remains true, the more the first fact (that you seem to be an inefficient shit-stirring spammer) becomes glaring in its residual but enduring salience.

Please, surprise me! Please say something useful that does not involve a link to the sites you seem to be trying to push traffic towards.

"you try to give me standard advice that i've heard a million times before"

I really hope this was hyperbole on your part. Otherwise it seems I should set my base rates for this conversation being worth anything to 1 in a million, and then adjust from there...

Comment author: JenniferRM 25 November 2017 08:48:46PM *  3 points [-]

I hunted around your website until I found an actual summary of Popper's thinking in straightforward language.

Until I found that I had not seen you actually provide clear text like this, and I wanted to exhort you to write an entire sequence in language with that flavor: clean and clear and lacking in citation. The sequence should be about what "induction" is, and why you think other people believed something about it (even if not perhaps by that old fashioned name), and why you think those beliefs are connected to reliably predictable failures to achieve their goals via cognitively mediated processes.

I feel like maaaybe you are writing a lot about things you have pointers to, but not things that you have held in your hands, used skillfully, and made truly a part of you? Or maybe you are much much smarter and better read than me, so all your jargon makes sense to you and I'm just too ignorant to parse it.

My hope is that you can dereference your pointers and bring all the ideas and arguments into a single document, and clean it up and write it so that someone who had never heard of Popper would think you are really smart for having had all these ideas yourself.

Then you could push one small chapter from this document at a time out into the world (thereby tricking people into reading something piece by piece that they might have skipped if they saw how big it was going to be up front) and then after 10 chapters like this it will turn out that you're a genius and everyone else was wrong and by teaching people to think good you'll have saved the world.

I like people who try to save the world, because it makes me marginally less hopeless, and less in need of palliative cynicism :-)

Comment author: Stuart_Armstrong 22 November 2017 01:33:43PM *  1 point [-]

Thanks! But H is used as an example, not a proof.

And the chessbots actually illustrate my point - is a bishop-retaining chessbot actually intending to retain their bishop, or is it an agent that wants to win, but has a bad programming job which inflates the value of bishops?

Comment author: JenniferRM 23 November 2017 07:51:49PM *  0 points [-]

Initially I wrote a response spelling out in excruciating detail an example of a decent chess bot playing the final moves in a game of Preference Chess, ending with "How does this not reveal an extremely clear example of trivial preference inference, what am I missing?"

Then I developed the theory that what I'm missing is that you're not talking about "how preference inference works" but more like "what are extremely minimalist preconditions for preference inference to get started".

And given where this conversation is happening, I'm guessing that one of the things you can't take for granted is that the agent is at all competent, because sort of the whole point here is to get this to work for a super intelligence looking at a relatively incompetent human.

So even if a Preference Chess Bot has a board situation where it is one move away from winning, losing, or taking another piece that it might prefer to take... no matter what move the bot actually performs you could argue it was just a mistake because it couldn't even understand the extremely short run tournament level consequences of whatever Preference Chess move it made.

So I guess I would argue that even if any specific level of stable state intellectual competence or power can't be assumed, you might be able to get away with a weaker assumption of "online learning"?

It will always be tentative, but I think it buys you something similar to full rationality that is more likely to be usefully true of humans. Fundamentally you could use "an online learning assumption" to infer "regret of poorly chosen options" from repetitions of the same situation over and over, where either similar or different behaviors are observed later in time.

To make the agent have some of the right resonances... imagine a person at a table who is very short and wearing a diaper.

The person's stomach noisily grumbles (which doesn't count as evidence-of-preference at first).

They see in front of them a cupcake and a cricket (their eyes looking at both is somewhat important because it means they could know that a choice is even possible, allowing us to increment the choice event counter here).

They put the cricket in their mouth (which doesn't count as evidence-of-preference at first).

They cry (which doesn't count as evidence-of-preference at first).

However, we repeat this process over and over and notice that by the 50th repetition they are reliably putting the cupcake in their mouth and smiling afterwards. So we use the relatively weak "online learning assumption" to say that something about the cupcake choice itself (or the cupcake's second order consequences that the person may think semi-reliably happen) is more preferred than the cricket.
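A minimal sketch of how a watcher might mechanize that inference (my own toy illustration, not anyone's published formalism; the window and threshold parameters are arbitrary): watch repeated instances of the same choice situation and only attribute a preference once the choice rate has shifted and stabilized.

    from collections import Counter

    def infer_preference(choices, window=10, threshold=0.8):
        # Attribute a preference only if one option was chosen in at least
        # `threshold` of the last `window` repetitions of the same situation.
        if len(choices) < window:
            return None
        option, count = Counter(choices[-window:]).most_common(1)[0]
        return option if count / window >= threshold else None

    # Early repetitions look noisy; later repetitions reliably pick the cupcake.
    history = ["cricket", "cupcake", "cricket", "cricket", "cupcake"] + ["cupcake"] * 45
    print(infer_preference(history[:5]))  # None -- too early, no stable rate yet
    print(infer_preference(history))      # 'cupcake' -- the rate has shifted and stuck

No single action counts as evidence on its own; only the change in rates across repetitions does, which is what makes the assumption weaker than full rationality.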

Also, the earlier crying and later smiling begin to take on significance as either side channel signals of preference (or perhaps they are the actual thing that is really being pursued as a second order consequence?) because of the proximity of the cry/smile actions reliably coming right after the action whose rate changes over time from rare to common.

The development of theories about side channel information could make things go faster as time goes on. It might even become the dominant mode of inference, up to the point where it starts to become strategic, as with lying about one's goals in competitive negotiation contexts becoming salient once the watcher and actor are very deep into the process...

However, I think your concern is to find some way to make the first few foundational inferences in a clear and principled way that does not assume mutual understanding between the watcher and the actor, and does not assume perfect rationality on the part of the actor.

So an online learning assumption does seem to enable a tentative process, that focuses on tiny little recurring situations, and the understanding of each of these little situations as a place where preferences can operate causing changes in rates of performance.

If a deeply wise agent is the watcher, I could imagine them attempting to infer local choice tendencies in specific situations and envisioning how "all the apparently preferred microchoices" might eventually chain together into some macro scale behavioral pattern. The watcher might want to leap to a conclusion that the entire chain is preferred for some reason.

It isn't clear that the inference to the preference for the full chain of actions would be justified, precisely because of the assumption of the lack of full rationality.

The watcher would want to see the full chain start to occur in real life, and to become more common over time when chain initiation opportunities presented themselves.

Even then, the watcher might even double check by somehow adding signposts to the actor's environment, perhaps showing the actor pictures of the 2nd, 4th, 8th, and 16th local action/result pairs that it thinks are part of a behavioral chain. The worry is that the actor might not be aware how predictable they are and might not actually prefer all that can be predicted from their pattern of behavior...

(Doing the signposting right would require a very sophisticated watcher/actor relationship, where the watcher had already worked out a way to communicate with the actor, and observed the actor learning that the watcher's signals often functioned as a kind of environmental oracle for how the future could go, with trust in the oracle and so on. These preconditions would all need to be built up over time before post-signpost action rate increases could be taken as a sign that the actor preferred performing the full chain that had been signposted. And still things could be messed up if "hostile oracles" were in the environment such that the actor's trust in the "real oracle" is justifiably tentative.)

One especially valuable kind of thing the watcher might do is to search the action space for situations where a cycle of behavior is possible, with a side effect each time through the loop, and to put this loop and the loop's side effect into the agent's local awareness, to see if maybe "that's the point" (like a loop that causes the accumulation of money, and after such signposting the agent does more of the thing) or maybe "that's a tragedy" (like a loop that causes the loss of money, that might be a dutch booking in progress, and after signposting the agent does less of the thing).

Is this closer to what you're aiming for? :-)

Comment author: JenniferRM 22 November 2017 01:03:32AM *  1 point [-]

Perhaps I'm missing something, but it seems like "agent H" has nothing to do with an actual human, and that the algorithm and environment as given support even less analogy to a human than a thermostat.

Thus, proofs about such a system are of almost no relevance to moral philosophy or agent alignment research?

Thermostats connected to heating and/or cooling systems are my first go-to example for asking people where they intuitively experience the perception of agency or goal seeking behavior. I like using thermostats as the starting point because:

  1. Their operation has clear connections to negative feedback loops and thus obvious "goals" because they try to lower the temperature when it is too hot and try to raise the temperature when it is too cold.

  2. They have internally represented goals, because their internal mechanisms can be changed by exogenous-to-the-model factors that change their behavior in response to otherwise identical circumstances. Proximity plus non-overlapping target ranges automatically lead to fights between two such thermostats, without any need for complex philosophy.

  3. They have a natural measure of "optimization strength" in the form of the wattage of their heating and cooling systems, which can be adequate or inadequate relative to changes in the ambient temperature.

  4. They require a working measurement component that detects ambient temperature, giving a very limited analogy for "perception and world modeling". If two thermostats are in a fight, a "weak and fast" thermostat can use a faster sampling rate to get a head start on the "slower stronger" thermostat that put the temperature where it wanted and then rested for 20 minutes before measuring again. This would predictably give a cycle of temporary small victories for the fast one that turn into wrestling matches that it always loses, over and over (see the toy sketch below).
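Here is the kind of toy duel points 2 through 4 describe, as a sketch of my own (all the numbers are arbitrary): each thermostat only acts when it samples, and pushes the temperature toward its own setpoint at a rate limited by its power.

    def duel(steps=101):
        temp = 21.0
        # (setpoint, max degrees moved per activation, sampling period in steps)
        fast_weak   = (18.0, 0.2, 1)    # samples every step but only nudges the temperature
        slow_strong = (26.0, 10.0, 20)  # wakes up rarely but can slam the temperature to its goal
        for t in range(steps):
            for setpoint, power, period in (fast_weak, slow_strong):
                if t % period == 0:  # a thermostat only acts when it actually samples
                    if temp < setpoint:
                        temp += min(power, setpoint - temp)
                    elif temp > setpoint:
                        temp -= min(power, temp - setpoint)
            if t % 10 == 0:
                print(f"t={t:3d}  temp={temp:4.1f}")

    duel()  # the fast weak unit scores small temporary wins, then loses again every 20 steps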

I personally bite the bullet and grant that thermostats are (extremely minimal) agents with (extremely limited) internal experiences, but I find that most people I talk about this with do not feel comfortable admitting that these might be "any kind of agent".

Yet the thermostat clearly has more going on than "agent H" in your setup.

A lot of people I talk with about this are more comfortable with a basic chess bot architecture than a thermostat, when talking about the mechanics of agency, because:

  1. Chess bots consider more than simple binary actions.

  2. Chess bots generate iterated tree-like models of the world and perform the action that seems likely to produce the most preferred expected long term consequence.

  3. Chess bots prune possible futures such that they try not to do things that hostile players could exploit now or in the iterated future, demonstrating a limited but pragmatically meaningful theory of mind.

Personally, I'm pretty comfortable saying that chess bots are also agents, and they are simply a different kind of agent than a thermostat, and they aren't even strictly "better" than thermostats because thermostats have a leg up on them in having a usefully modifiable internal representation of their goals, which most chess bots lack!

An interesting puzzle might be how to keep much of the machinery of chess, but vary the agents during the course of their training and development so that they have skillful behavioral dynamics while different chess bots' skills are organized around different preference hierarchies: for example, a preference to checkmate the opponent while still having both bishops, with being checkmated while retaining both bishops lower down the hierarchy, and losing a bishop and also being checkmated even further down.
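As a sketch of what I mean (mine, purely illustrative): keep the search machinery of an ordinary chess bot untouched and swap in a lexicographic terminal evaluation, so the "preference" lives entirely in how end states are ranked.

    # outcome: +1 win, 0 draw, -1 loss; the second element breaks ties among equal outcomes.
    def bishop_keeper_value(outcome, own_bishops_remaining):
        # Lexicographic preference: game result dominates, bishop retention breaks ties.
        return (outcome, own_bishops_remaining)  # Python tuples compare left-to-right

    def bishop_hunter_value(outcome, enemy_bishops_captured):
        return (outcome, enemy_bishops_captured)

    # Any standard minimax / alpha-beta search can maximize these tuples unchanged,
    # so two such bots play perfectly legal chess while "competing in different games".
    assert bishop_keeper_value(-1, 2) > bishop_keeper_value(-1, 0)  # prefer losing with both bishops
    assert bishop_keeper_value(0, 0) > bishop_keeper_value(-1, 2)   # but any draw still beats any loss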

Imagine a tournament of 100 chess bots where the rules of chess are identical for everyone, but some of the players are in some sense "competing in different games" due to a higher level goal of beating the chess bots that have the same preferences as them. So there might be bishop keepers, bishop hunters, queen keepers, queen hunters, etc.

Part of the tournament rules is that it would not be public knowledge who is in which group (though the parameters of knowledge could be an experimental parameter).

And in a tournament like that I'm pretty sure that any extremely competitive bishop keeping chess bot would find it very valuable to be able to guess from observation of the opponent's early moves that in a specific game they might be playing a rook hunting chessbot that would prefer to capture their rook and then be checkmated rather than to officially "tie the game" without ever capturing one of their rooks.

In a tournament like this, keeping your true preferences secret and inferring your opponent's true preferences would both be somewhat useful.

Some overlap in the game should always exist (like preference for win > tie > lose all else equal) and competition on that dimension would always exist.

Then if any AgentAlice knows AgentBob's true preferences she can probably see deeper into the game tree than otherwise by safely pruning more lines of play out of the tree, and having a better chance of winning. On the other hand mutual revelation of preferences might allow gains from trade, so it isn't instantly clear how to know when to reveal preferences and when to keep them cryptic...

Also, probably chess is more complicated than is conceptually necessary. Qubic (basically tic tac toe on a 4x4x4 grid) probably has enough steps and content to allow room for variations in strategy (liking to have played in corners, or whatever) so that the "preference" aspects could hopefully dominate the effort put into it rather than demanding extensive and subtle knowledge of chess.

Since qubic was solved at least as early as 1992, it should probably be easier to prove things about "qubic with preferences" using the old proofs as a starting point. Also it is probably a good idea to keep in mind which qubic preferences are instrumentally entailed by the pursuit of basic winning, so that preferences inside and outside those bounds get different logical treatment :-)

Comment author: MaryCh 14 November 2017 06:38:08PM 0 points [-]

then what story do you think was not made up?

Comment author: JenniferRM 15 November 2017 12:27:55PM 1 point [-]

If you just Google around, there are a lot of stories that hit the keyword and seem well attested.

Most of them are either cases of monstrous parental abuse (plus sometimes proximity to pets of the parent) or else the child was already at least a toddler (and often aged 3-7) when they went into the wild.

It is less surprising when you remember that in typical hunter gatherer societies the age at which children became roughly "calorie self sufficient" (not necessarily good nutrition, but able to gather enough not to starve) was around 4 or 5.

Children in parental neglect cases often have trouble walking, which is moderate evidence that "walking is cultural" in the sense that we might not have reliable instincts for learning to do it without having any positive examples and/or encouragement. Also these stories tend to support the idea of critical periods in language acquisition.

The ones that are usually hoaxes or gross exaggerations of real facts tend to be stories of very young children (like 0-18 month old babies) being literally raised by animals from scratch with no human input at all.

Comment author: Stuart_Armstrong 31 October 2017 04:36:37PM 1 point [-]

"pessimistic zoo theory"

I've thought about things like that before, but always dismissed them, not as wrong but as irrelevant - there is nothing that can be done about that, as they would certainly have a fully armed listening post somewhere in the solar system to put us down when the time comes (though the fact they haven't yet is an argument against their existence).

But since there's nothing to be done, I ignore the hypothesis in practice.

Comment author: JenniferRM 03 November 2017 06:19:10AM *  0 points [-]

I see how arguments that "the great filter is extremely strong" generally suggest that any violent resistance against an old race of exterminators is hopeless.

However it seems to me as if the silent sky suggests that everything is roughly equally hopeless. Maybe I'm missing something here, and if so I'd love to be corrected :-)

But starting from this generic evidential base, if everything is hopeless because of the brute fact of the (literally astronomically) large silent sky (with the strength of this evidence blocking nearly every avenue of hope for the future), I'm reasonably OK with allocating some thought to basically every explanation of the silent sky that has a short description length, which I think includes the pessimistic zoo hypothesis...

Thinking about this hypothesis might suggest methods to timelessly coordinate with other "weed species"? And this or other thoughts might suggest new angles on SETI? What might a signal look like from another timelessly coordinating weed species? This sort of thinking seems potentially productive to me...

HOWEVER, one strong vote against discussing the theory is that the pessimistic zoo hypothesis is an intrinsically "paranoid" hypothesis. The entities postulated include an entity of unknown strength that might be using its strength to hide itself... hence: paranoia.

Like all paranoid theories there is a sort of hope function where each non-discovery of easy/simple evidence for the existence of a hostile entity marginally increases both (1) the probability that the entity does not exist, and (2) the probability that if the entity exists it is even better at hiding from you than you had hypothesized when you searched in a simple place with the mild anticipation of seeing it.

At the end of a fruitless but totally comprehensive search of this sort you either believe that the entity does not physically exist, or else you think that it is sort of "metaphysically strong".
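To make the hope function concrete (a toy Bayesian update, with numbers I made up purely for illustration): put a prior over "no entity", "weak hider", and "strong hider", then condition on a search of the easy hiding spots coming up empty.

    priors  = {"no entity": 0.50, "weak hider": 0.30, "strong hider": 0.20}
    # Probability that the easy search comes up empty under each hypothesis:
    p_empty = {"no entity": 1.00, "weak hider": 0.30, "strong hider": 0.95}

    evidence  = sum(priors[h] * p_empty[h] for h in priors)
    posterior = {h: priors[h] * p_empty[h] / evidence for h in priors}

    for h in priors:
        print(f"{h}: {priors[h]:.2f} -> {posterior[h]:.2f}")
    # "no entity" and "strong hider" both gain probability; "weak hider" collapses.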

The recently popular "Three Body Problem" explores such paranoia a bit with regard to particle physics. Also, the powers seen in the monolith of Clarke's "2001" come to mind (although that seemed essentially benevolent and weak compared to what might be seen in a fully bleak situation), and Clarke himself coined the phrase claiming "sufficiently advanced technology is indistinguishable from magic" partly, I think, to justify some of what he wrote as being respectable-enough-for-science-fiction.

This brings up a sort of elephant in the room: paranoid hypotheses are often a cognitive tarpit that captures the fancy of the mentally ill and/or theologically inclined people.

The hallmarks of bad thinking here tend to be (1) updating too swiftly in the direction of extreme power on the part of the hidden entity, (2) getting what seem like a lot of false positives when analyzing situations where the entity might have intervened, and (3) using the presumed interventions to confabulate motives.

To discuss a paranoid hypothesis in public risks the speaker becoming confused in the mind of the audience with other people who entertain paranoid hypotheses with less care.

It would make a lot of sense to me if respectable thinkers avoided discussing the subject for this reason.

If I was going to work on this here in public, I think it would be useful to state up front that I'd refrain from speculating about precise motives for silencing weed species like we might be. Also, if I infer extremely strong aliens I'm going to hold off on using their inferred strength to explain anything other than astronomy data, and even that only reluctantly.

Also, I'd start by hypothesizing aliens that are extremely weak and similar to conventionally imaginable human technology that might barely be up to the task of suppression, and thoroughly rule that level of power out before incrementing the hypothesized power by a small amount.

Comment author: JenniferRM 28 October 2017 07:46:35AM *  1 point [-]

I'm tempted to suggest that the field of interstellar futurology has two big questions that both have very wide error bars which each, considered one at a time, suggest the need for some other theory (outside the horizon of common reasoning) to produce an answer.

It makes me wonder how plausible it is that these questions are related, and help answer each other:

(1) How many other species are out there for us to meet?

(2) Will we ever go out there or not?

For the first question, Occam suggests that we consider small numbers like "0" or "1", or else that we consider simple evolutionary processes that can occur everywhere and imply numbers like "many".

Observational evidence (as per Fermi) so far rules out "many".

Our own late-in-the-universe self-observing existence with plausible plans for expansion into space (which makes the answer to the second question seem like it could be yes) suggests that 0 aliens out there is implausible... so what about just going with 1?

This 1 species would not be "AN alien race" but rather "THE alien race". They would be simply the one minimal other alien race whose existence is very strongly implied by minimal evidence plus logical reasoning.

Looping back to the second question of interstellar futurology (and following Occam and theoretical humility in trying to keep the number of theoretical elements small) perhaps the answer to whether our descendants will be visible in the skies of other species is "no with 99.99% probability" because of THE alien race.

When I hear "the zoo hypothesis" this logically simple version, without lots of details, is what I usually think of: Simply that there is "some single thing" and for some reason it makes the sky empty and forecloses our ever doing anything that would make the sky of another species NOT empty.

However, Wikipedia's zoo hypothesis is full of crazy details about politics and culture and how moral progress is going to somehow make every species converge on the one clear moral rule of not being visible to any other species at our stage or below, so somehow we ourselves (and every single other species among the plausibly "many") are also in some sense "THE (culturally convergently universal) species", which is the space civilization that sprouts everywhere and inevitably convergently evolves into maintaining the intergalactic zoo.

Yeah. This is all very nice... but it seems both very detailed and kind of hilariously optimistic... like back when the Soviet Union's working theory was that of course the aliens would be socialist... and then data came in and they refused to give up on the optimism even though it no longer made sense, so they just added more epicycles and kept chugging away.

I'm reminded of the novels of Alastair Reynolds where he calls THE alien race "The Inhibitors".

Reynolds gave them all kinds of decorative details that might be excused by the demand that commercial science fiction have dramatically compelling plots... However one of their details was that they were a galactic rather than intergalactic power. This seems like a really critical strategic fact that can't be written off as a detail added for story drama, and so that detail counts against the science side of his work. Too much detail of the wrong sort!

In the spirit of theoretical completeness, consider pairing the "optimistic zoo theory" with a more "pessimistic zoo theory".

In the pessimistic version THE intergalactic alien race is going to come here and kill us. Our chance of preventing this extermination is basically "the number of stars we see that seem to have been the origin of a visible and friendly intergalactic civilization (plus one as per Laplace's rule of succession) divided by the number of stars where a civilization with this potential could have developed".

By my count our chance of surviving using this formula would be ((0 + 1) / 10 ^ BIG).
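(For what it's worth, the strict form of Laplace's rule of succession, with s observed successes out of n trials, is

    P(success on the next trial) = (s + 1) / (n + 2)

so with s = 0 visible friendly elder civilizations out of n ≈ 10 ^ BIG candidate origin stars, it comes out essentially the same as the estimate above.)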

Like if it was you versus a weed in your garden, the weed's chances of surviving you are better than humanity's chances of surviving THE aliens.

Lower? Yes! At least individual weeds evolved under the selection pressure of animal grazing, and individual weeds have a plant genome full of survival wisdom to use to fight a human weeder, who is doing something more or less similar to what grazing animals do.

So the only strong observational argument I can see against the pessimistic zoo theory is that if it were true, then to square with what we see we have to suppose that THE alien weeders would bother with camouflage that SETI can't penetrate.

Consider all the potentially valuable things they could do with the universe that would tip us off right away, and then consider the cost of being visible. Would it be worth it for THE aliens (the first and only old intergalactic alien race) to hide in this way? I would not naively expect it.

Naively I'd have thought that the shape of the galaxy and its contents would be whatever they wanted it to be, and that attempts to model galactic orbital and/or stellar histories would point the finger at non-obvious causes, with signs of design intent relative to some plausible economic goal. Like this work but with more attention to engineering intent.

So a good argument against this kind of pessimism seems like it would involve calculation of the costs and benefits of visible projects versus the benefits and costs of widespread consistent use of stealth technology.

If stealth is not worth it, then the Inhibitors (or Weeders or whatever you want to call THE aliens) wouldn't bother with hiding and the lack of evidence of their works would be genuine evidence that they don't exist.
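One crude way to set that calculation up (notation entirely mine, just bookkeeping): consistent stealth is the better policy only when

    (value of visible megaprojects forgone)  <  (probability a dangerous rival arises) x (cost of being found by it)

If the left side is enormous (whole galaxies of usable matter and energy left untouched for millions of years), stealth only makes sense if the expected cost on the right is even bigger, which is a strong constraint on what THE aliens would have to be afraid of.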

"maybe you can do something that breaks the symmetry from the timeless decision theory perspective like send a massive signal to the galaxy..."

The pessimistic zoo theory makes this proposal seem heroic to me :-)

The hard part here seems like it would be to figure out if there is anything humans can possibly build in the next few decades (or centuries?) that might continue to send a signal for the next 10 million years (in a way we could have detected in the 1970s) and that will continue to function despite THE alien race's later attempts to turn it off after they kill us because it messes up their stealth policy.

My guess is that the probability of an enduring "existence signal" being successfully constructed and then running for long enough to be detected by many other weed species is actually less than the probability that we might survive, because an enduring signal implies a kind of survival.

By contrast, limited "survival" might happen if samples of earth are taken just prior to a basically successful weeding event...

Greg Bear's "The Forge Of God" and sequel "Anvil of Stars" come to mind here. In those books Bear developed an idea that space warfare might be quite similar to submarine warfare, with silence and passive listening being the fundamental rule, most traps and weapons optimized for anonymous or pseudo-natural deployment, and traceable high energy physical attacks with visibly unnatural sources very much the exception.

As with all commercially viable books, you've got to have hope in there somewhere, so Bear populated the sky with >1 camouflaged god-like space civilizations that arrive here at almost precisely the same time, and one of them saves us in a way that sort of respects our agency but it leaves us making less noise than before. This seems optimistic in a way that Occam would justifiably complain about, even as it makes the story more fun for humans to read...
