Comment author: CellBioGuy 11 March 2016 08:42:43PM *  2 points [-]

This overstates the case. If planets like Earth were very rare in ways that didn't change much with time, you'd still see a typical time. One can imagine some things for which we have a sample size of one being rare in ways that have nothing to do with star order - the origin of eukaryotes, plate tectonics, oxygenic photosynthesis...

This being said, I think the sheer DEGREE of rare Earth being implied by turchin and others is still very unlikely, even though there's a whole lot that we have little information on. It remains a not fully excluded possibility, but there are a hell of a lot of others.

Comment author: jacob_cannell 12 March 2016 05:17:05AM 0 points [-]

If planets like Earth were very rare in ways that didn't change much with time you'd still see a time that was typical

The time measurement is not the only rank measurement we have. We also can compare the sun vs other stars, and it is mediocre across measurements.

Rarity requires an (intrinsically unlikely, à la Solomonoff) mechanism - something unusual that happened at some point in the developmental process - and most such mechanisms would entangle with multiple measurements.

At this point we can pretty much rule out all mechanisms operating at the stellar scale; it would have to be something far more local.

Tectonics being rare has been disproven recently: Europa was recently shown to have active tectonics, possibly Pluto as well, and probably Mars, at least at some point.

For later evolutionary developments, it will be a while before we have any data for rank measurements. But given how every other measurement so far has come up as mediocre...

We can actually learn a lot from exploring Europa, Mars, and other spots that could/should have some evidence for at least simple life. That can help fit at least a simple, low-complexity model of typical planetary development.

Comment author: CellBioGuy 11 March 2016 08:48:31PM *  4 points [-]

I raise my standard point: there is a huge, insufficiently explored possibility space in which the lack of interstellar expansion is neither a choice nor indicative of destruction/failure to form in the first place, but merely something that is not practically possible with anything reliably self-replicating in the messy real world. Perhaps we must revisit the assumption that increasing mastery over the physical world has no upper bound below that point.

Comment author: jacob_cannell 12 March 2016 05:01:06AM 0 points [-]

It's not binary, of course; there's a feasibility spectrum that varies with speed. On the low end there is a natural pace for slow colonization requiring very little energy/effort: colonization roughly at the speed of star orbits around the galaxy. That would take hundreds of millions of years, but it could use gravitational assists, and we already have the tech. Indeed, biology itself could perhaps manage slow colonization.

Given that the galaxy is already 54 galactic-years old, if life is actually as plentiful as mediocrity suggests, then the 'too hard' explanation can't contain much probability mass - the early civs should have arisen quite some time ago.
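The '54 galactic-years' figure and the 'hundreds of millions of years' colonization timescale can be sanity-checked with rough numbers. The constants below (galactic age ~13.5 Gyr, solar orbital period ~250 Myr, orbital speed ~230 km/s, disk diameter ~100,000 light-years) are standard round values supplied here for illustration, not from the comment itself:

```python
# Galactic years elapsed since the galaxy formed:
galactic_age_yr = 13.5e9
orbit_period_yr = 250e6
print(round(galactic_age_yr / orbit_period_yr))   # 54

# Time to cross the galactic disk at roughly stellar orbital speed:
c_km_s = 299_792
speed_fraction_c = 230 / c_km_s
crossing_time_yr = 100_000 / speed_fraction_c     # 100,000 ly at 230 km/s
print(f"{crossing_time_yr:.1e}")                  # 1.3e+08
```

A single crossing at orbital speed is ~130 million years, so colonization with stopovers at each system plausibly stretches to the hundreds of millions of years quoted above.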

I find it more likely that the elder civs already have explored, and that the galaxy is already 'colonized'. It is unlikely that advanced civs are stellavores. The high value matter/energy or real estate is probably a tiny portion of the total, and is probably far from stars, as stellar environments are too noisy/hot for advanced computation. We have little hope of finding them until after our own maturation to some post-singularity state.

Comment author: turchin 11 March 2016 05:00:39PM 1 point [-]

I take it as strong evidence for Rare earth.

Another interpretation may be that the early universe is full of x-ray bursts, and the later universe is full of aliens preventing newborn civilizations.

Comment author: jacob_cannell 11 March 2016 07:08:32PM 0 points [-]

I take it as strong evidence for Rare earth.

It's the exact opposite.

If the earth was rare, this rarity would show up in the earth's rank along many measurement dimensions. Rarity requires selection pressure - a filter - which alters the distribution. We don't see that at all. Instead we see no filtering, no unusual rank in the dimensions we can measure. The exact opposite is far more likely true - the earth is common.

For instance, say that the earth was rare in orbiting a rare type of star. Then we would see that the sun would have unusual rank along many dimensions. Instead it is normal/typical - in brightness, age, type, planets, etc.

Comment author: jacob_cannell 11 March 2016 05:42:58AM *  3 points [-]

I take this as another sign favoring transcension over expansion, and also weird-universes.

The standard dev model is expansion - habitable planets lead to life leads to intelligence leads to tech civs which then expand outward.

If the standard model were correct, barring any weird late filter, then the first civ to form in each galaxy would colonize the rest and thus preclude other civs from forming.

Given that the strong mediocrity principle holds - habitable planets are the norm, life is probably the norm, an enormous expected number of bio worlds, etc. - if the standard model is correct then most observers will find themselves on an unusually early planet, because the elder civs prevent late civs from forming.

But that isn't the case, so that model is wrong. In general it looks like a filter is hard to support, given how strongly all the evidence has lined up for mediocrity, and the inherent complexity penalty.

Transcension remains a viable alternative. Instead of expanding outward, each civ progresses to a tech singularity and implodes inward, perhaps by creating new baby universes, and perhaps using that to alter the distribution over the multiverse, thus gaining the ability to effectively alter physics (current models of baby universe creation suggest the parent universe has some programming-level control over the physics of the seed). This would allow exponential growth to continue, which is enormously better than expansion, which provides only polynomial growth. So everyone does this if it's possible. Furthermore, if it's possible anywhere in the multiverse, then those pockets expand faster, and thus they have dominated and will dominate everywhere. So if that's true, the multiverse has been/will be edited/restructured/shaped by (tiny, compressed, cold, invisible) gods.

Barring transcension weirdness, another possibility is that the multiverse is somehow anthropically tuned for about 1 civ per galaxy, and galaxy size is co-tuned for this, as it provides a nicely sized niche for evolution, similar to the effect of continent/island distributions at the earth scale. Of course, this still requires a filter, which carries a high complexity penalty.

Comment author: [deleted] 10 March 2016 11:37:50PM 0 points [-]

If you are assuming that a neuron contributes less than 2 bits of state (or 1 bit per 500 synapses) and 1 computation per cycle, then you know more about neurobiology than anyone alive.

In response to comment by [deleted] on AIFoom Debate - conclusion?
Comment author: jacob_cannell 11 March 2016 05:19:20AM 0 points [-]

I don't understand your statement.

I didn't say anything in my post above about per-neuron state - because it's not important. Each neuron is a low-precision analog accumulator, roughly 8-10 bits or so, and there are 20 billion neurons in the cortex. There are another 80 billion in the cerebellum, but they are unimportant.

The memory cost of storing the state for an equivalent ANN is far less than the 20 billion bytes or so, because of compression - most of that state is just zero most of the time.

In terms of computation per neuron per cycle, when a neuron fires it does #fanout computations. Counting from the total synapse numbers is easier than estimating neurons * avg fanout, but gives the same results.

When a neuron doesn't fire, it doesn't compute anything of significance. This is true in the brain and in all spiking ANNs, as it's equivalent to sparse matrix operations - where the computational cost depends on the number of nonzeros, not the raw size.
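The sparse-cost point can be sketched in a few lines. The layer sizes, fanout, and firing rate below are made-up illustrative numbers, not measurements:

```python
def dense_cost(n_pre, n_post):
    # A naive dense update touches every weight every cycle.
    return n_pre * n_post

def event_driven_cost(spikes, fanout):
    # An event-driven (sparse) update only does work for neurons that
    # actually fired: one multiply-accumulate per outgoing synapse.
    return len(spikes) * fanout

n_pre = n_post = 10_000
fanout = 500                          # hypothetical average fanout
spikes = list(range(0, n_pre, 100))   # suppose 1% of neurons fire this cycle

print(dense_cost(n_pre, n_post))          # 100000000
print(event_driven_cost(spikes, fanout))  # 50000
```

With 1% activity the event-driven cost is three orders of magnitude below the dense cost, which is why the relevant number is synaptic events per second rather than synapse count times clock rate.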

Comment author: [deleted] 09 March 2016 11:20:00PM 0 points [-]

The whole issue is whether a hard takeoff is possible and/or plausible, presumably with currently available computing technology. Certainly with Landauer-limit computing technology it would be trivial to simulate billions of human minds in the space and energy usage of a single biological brain. If such technology existed, yes a hard takeoff as measured from biological-human scale would be an inevitability.

But what about today's technology? The largest supercomputers in existence can maaaaybe simulate a single human mind at highly reduced speed and with heavy approximation. A single GPU wouldn't even come close in either storage or processing capacity. The human brain has about 100bn neurons and operates at 100Hz. The NVIDIA Tesla K80 has 8.73TFLOPS single-precision performance with 24GB of memory. That's 1.92bits per neuron and 0.87 floating point operations per neuron-cycle. Sorry, no matter how you slice it, neurons are complex things that interact in complex ways. There is just no possible way to do a full simulation with ~2 bits per neuron and ~1 flop per neuron-cycle. More reasonable assumptions about simulation speed and resource requirements demand supercomputers on the order of approximately the largest we as a species have in order to do real-time whole-brain emulations. And if such a thing did exist, it's not "trivially easy" to expand its own computation power -- it's already running on the fastest stuff in existence!
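The per-neuron figures quoted here check out arithmetically (using the comment's own numbers for the brain and the K80):

```python
neurons = 100e9          # ~100 billion neurons
rate_hz = 100            # assumed 100 Hz operation
flops = 8.73e12          # Tesla K80 single-precision FLOPS
mem_bits = 24e9 * 8      # 24 GB of memory, in bits

bits_per_neuron = mem_bits / neurons
flops_per_neuron_cycle = flops / (neurons * rate_hz)
print(round(bits_per_neuron, 2))          # 1.92
print(round(flops_per_neuron_cycle, 2))   # 0.87
```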

So with today's technology, any AI takeoff is likely to be a prolonged affair. This is absolutely certain to be the case if whole-brain emulation is used. So should hard-takeoffs be a concern? Not in the next couple of decades at least.

In response to comment by [deleted] on AIFoom Debate - conclusion?
Comment author: jacob_cannell 10 March 2016 11:04:37PM *  2 points [-]

The human brain has about 100bn neurons and operates at 100Hz. The NVIDIA Tesla K80 has 8.73TFLOPS single-precision performance with 24GB of memory. That's 1.92bits per neuron and 0.87 floating point operations per neuron-cycle. Sorry, no matter how you slice it, neurons are complex things that interact in complex ways. There is just no possible way to do a full simulation with ~2 bits per neuron and ~1 flop per neuron-cycle

You are assuming enormously suboptimal/naive simulation. Sure if you use a stupid simulation algorithm, the brain seems powerful.

As a sanity check, apply your same simulation algorithm to simulating the GPU itself.

It has 8 billion transistors that cycle at 1 GHz, with a typical fanout of 2 to 4. So that's more than 10^19 gate ops/second! Far more than the brain...

The brain has about 100 trillion synapses, and the average spike rate is around 0.25 Hz (yes, really). So that's only about 25 trillion synaptic events/second. Furthermore, the vast majority of those synapses are tiny and activate on an incoming spike with a low probability, around 25% to 30% or so (stochastic connection dropout). The average synapse has an SNR equivalent of 4 bits or less. All of these numbers are well supported in the neuroscience lit.

Thus the brain as a circuit computes with < 10 trillion low bit ops/second. That's nothing, even if it's off by 10x.
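The two sides of this comparison can be reproduced directly from the numbers given in the comment (the fanout and dropout values below take the low ends of the quoted ranges):

```python
# GPU side: transistors switching every clock cycle, times fanout.
gpu_transistors = 8e9
gpu_clock_hz = 1e9
fanout = 2                     # low end of the 2-4 range quoted above
gpu_gate_ops = gpu_transistors * gpu_clock_hz * fanout
print(f"{gpu_gate_ops:.1e}")   # 1.6e+19 -> more than 10^19 gate ops/sec

# Brain side: synaptic events, discounted by stochastic dropout.
synapses = 100e12
avg_spike_rate_hz = 0.25
activation_prob = 0.3          # ~25-30% release probability
synaptic_events = synapses * avg_spike_rate_hz
effective_ops = synaptic_events * activation_prob
print(f"{synaptic_events:.1e}")   # 2.5e+13 events/sec
print(f"{effective_ops:.2e}")     # 7.50e+12 low-bit ops/sec, under 10 trillion
```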

Also, synapse memory isn't so much an issue for ANNs, as weights are easily compressed 1000x or more by various schemes, from simple weight sharing to more complex techniques such as tensorization.
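One concrete weight-sharing scheme is hash-based bucketing (in the spirit of HashedNets). This is my own minimal sketch, not the exact method of any particular paper or library; the layer size and bucket count are arbitrary:

```python
import hashlib

def shared_weight(i, j, params):
    # Map weight position (i, j) to one of the shared parameters via a
    # hash, so a large "virtual" weight matrix is backed by a small
    # trainable parameter vector.
    h = int(hashlib.md5(f"{i},{j}".encode()).hexdigest(), 16)
    return params[h % len(params)]

n_in, n_out = 1000, 1000                   # a nominal million-weight layer...
params = [0.01 * k for k in range(1000)]   # ...backed by 1000 stored values

# The lookup is deterministic, so the virtual matrix is consistent:
print(shared_weight(3, 7, params) == shared_weight(3, 7, params))  # True
print((n_in * n_out) // len(params))       # 1000  (nominal compression ratio)
```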

As we now approach the end of Moore's law, our low-level circuit efficiency has already caught up to the brain, or is close to it. The remaining gap is almost entirely algorithmic-level efficiency.

Comment author: [deleted] 07 March 2016 09:35:08PM *  0 points [-]

as soon as you get one adult, human level AGI running compactly on a single GPU

Citation on plausibility severely needed, which is the point.

In response to comment by [deleted] on AIFoom Debate - conclusion?
Comment author: jacob_cannell 09 March 2016 04:26:20AM 0 points [-]

While that particular discussion is quite interesting, it's irrelevant to my point above - which is simply that once you achieve parity, it's trivially easy to get at least weak superhuman performance through speed.

Comment author: [deleted] 05 March 2016 07:55:17AM *  2 points [-]

If you have any references please do provide them. I honestly don't know if there is a good write up anywhere, and I haven't the time or inclination to write one myself. Especially as it would require a very long tutorial overview of the inner workings of modern approaches to AGI to adequately explain why running a human level AGI is such a resource intensive proposal.

The tl;dr is what I wrote: learning cycles would be hours or days, and a foom would require hundreds or thousands of learning cycles at minimum. There is just no plausible way for an intelligence to magic itself to super intelligence in less than large human timescales. I don't know how to succinctly explain that without getting knee deep in AI theory though.

In response to comment by [deleted] on AIFoom Debate - conclusion?
Comment author: jacob_cannell 07 March 2016 09:15:53PM 2 points [-]

The tl;dr is what I wrote: learning cycles would be hours or days, and a foom would require hundreds or thousands of learning cycles at minimum.

Much depends on what you mean by "learning cycle" - do you mean a complete training iteration (essentially a lifetime) of an AGI? Grown from seed to adult?

I'm not sure where you got the 'hundreds to thousands' of learning cycles from either. If you want to estimate the full experimental iteration cycle count, it would probably be better to estimate from smaller domains. Like take vision - how many full experimental cycles did it take to get to current roughly human-level DL vision?

It's hard to say exactly, but it is roughly on the order of 'not many' - we achieved human-level vision with DL very soon after the hardware capability arrived.

If we look in the brain, we see that vision is at least 10% of the total computational cost of the entire brain, and the brain uses the same learning mechanisms and circuit patterns to solve vision as it uses to solve essentially everything else.

Likewise, once we (roughly, kind of) solved vision in the very general way the brain does, the same general techniques essentially worked for all other domains.

There is just no plausible way for an intelligence to magic itself to super intelligence in less than large human timescales.

Oh that's easy - as soon as you get one adult, human-level AGI running compactly on a single GPU, you can then trivially run it 100x faster on a supercomputer, and/or replicate it a million-fold or more. That generation of AGI then quickly produces the next, and then singularity.

It's slow going until we get up to that key threshold of brain compute parity, but once you pass that we probably go through a phase transition in history.

Comment author: [deleted] 05 March 2016 08:53:27AM *  2 points [-]

That's a terrible argument. AlphaGo represents a general approach to AI, but its instantiation on the specific problem of Go tightly constrains the problem domain and solution space. Real life is far more combinatorial still, and an AGI requires much more expensive meta-level repeated cognition as well. You don't just solve one problem; you also look at all past solved problems and think about how you could have solved those better. That's quadratic blowup.

Tl;Dr speed of narrow AI != speed of general AI.

In response to comment by [deleted] on AIFoom Debate - conclusion?
Comment author: jacob_cannell 07 March 2016 09:04:20PM 3 points [-]

AlphaGo represents a general approach to AI, but its instantiation on the specific problem of Go tightly constrains the problem domain and solution space ..

Sure, but that wasn't my point. I was addressing key questions of training data size, sample efficiency, and learning speed. At least for Go, vision, and related domains, the sample efficiency of DL based systems appears to be approaching that of humans. The net learning efficiency of the brain is far beyond current DL systems in terms of learning per joule, but the gap in terms of learning per dollar is less, and closing quickly. Machine DL systems also easily and typically run 10x or more faster than the brain, and thus learn/train 10x faster.

Comment author: James_Miller 06 March 2016 12:44:39AM 5 points [-]

Consider three types of universes: Those where life never develops, those where life develops and there is no great filter and so paperclip maximizers quickly make it impossible for new life to develop after a short period, and those where life develops and there is a great filter that destroy civilizations before paperclip maximizers get going. Most observers like us will live in the third type of universe. And almost everyone who thinks about anthropics will live at a time close to when the great filter hits.

Comment author: jacob_cannell 07 March 2016 08:49:26PM 0 points [-]

Consider three types of universes ...

You are privileging your hypothesis - there are vastly more types of universes...

There are universes where life develops and civilizations are abundant, and all of our observations to date are compatible with the universe being filled with advanced civs (which probably become mostly invisible to us given current tech as they approach optimal physical configurations of near zero temperature and tiny size).

There are universes like the above where advanced civs spawn new universes to gain god-like 'magic' anthropic powers, effectively manipulating/rewriting the laws of physics.

Universes in these two categories are more aggressive/capable replicators - they create new universes at a higher rate - so they tend to dominate any anthropic distribution.

And finally, there are considerations where the distribution over simulation observer moments diverges significantly from original observer moments, which tends to complicate these anthropic considerations.

For example, we could live in a universe with lots of civs, but they tend to focus far more simulations on the origins of the first civ or early civs.
