
Comment author: CellBioGuy 04 October 2016 10:00:49PM *  11 points [-]

Advice solicited. Topics of interest I have lined up for upcoming posts include:

  • The history of life on Earth and its important developments
  • The nature of the last universal common ancestor (REALLY good new research on this just came out)
  • The origin of life and the different schools of thought on it
  • Another exploration of time, in which I go over a paper from this summer that did essentially what I did a few months earlier in my "Space and Time Part II" calculations of our place in the order of star and planet formation (showing that we are not early, and are right around when you would expect to find the average biosphere), but extended it to different types of stars and their lifetimes in a way I think I can improve upon.
  • My thoughts on how and why SETI has been sidetracked away from activities that are more likely to be productive and towards activities that are all but doomed to fail, with a few theoretical case studies
  • My thoughts on how the Fermi paradox / 'great filter' is an ill-posed concept
  • Interesting recent research on the apparent evolutionary prerequisites for primate intelligence

Any thoughts on which of these are of particular interest, or other ideas to delve into?

Comment author: CarlShulman 07 October 2016 12:19:07AM 5 points [-]

Primates and eukaryotes would be good.

Comment author: CarlShulman 16 July 2016 05:35:24PM *  8 points [-]

Your example has 3 states: vanilla, chocolate, and neither.

But you only explicitly assigned utilities to two of them, though you implicitly gave the 'neither' state a utility of 0 initially. Then, when you applied the transformation to vanilla and chocolate, you didn't apply it to the 'neither' state, which altered preferences over gambles involving both transformed and untransformed states.

E.g. if we initially assigned u(neither)=0 then after the transformation we have u(neither)=4, u(vanilla)=7, u(chocolate)=12. Then an action with a 50% chance of neither and 50% chance of chocolate has expected utility 8, while the 100% chance of vanilla has expected utility 7.
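A quick way to see why this matters: a positive affine transformation applied to every state leaves the ranking of gambles unchanged, but applying it to only some states can reverse it. In the minimal Python sketch below, the post-transformation values 4, 7 and 12 are the ones from this comment; the pre-transformation utilities (0, 3, 8) and the '+4' shift are illustrative assumptions chosen to be consistent with them, not numbers from the original post.

    def expected_utility(gamble, u):
        """gamble maps outcome -> probability; u maps outcome -> utility."""
        return sum(p * u[outcome] for outcome, p in gamble.items())

    u_original = {"neither": 0, "vanilla": 3, "chocolate": 8}   # assumed pre-transformation utilities
    shift = 4                                                   # assumed positive affine transformation: u -> u + 4

    u_full = {s: v + shift for s, v in u_original.items()}      # transformation applied to every state
    u_partial = {s: (v + shift if s != "neither" else v)        # 'neither' left untransformed
                 for s, v in u_original.items()}

    gamble = {"neither": 0.5, "chocolate": 0.5}
    sure_vanilla = {"vanilla": 1.0}

    for label, u in [("original", u_original),
                     ("full transform", u_full),
                     ("partial transform", u_partial)]:
        print(label, expected_utility(gamble, u), "vs", expected_utility(sure_vanilla, u))

    # original:           4.0 vs 3   -- gamble preferred
    # full transform:     8.0 vs 7   -- gamble still preferred, ranking preserved
    # partial transform:  6.0 vs 7   -- preference reversed by the inconsistent transformation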

Comment author: Stuart_Armstrong 18 September 2015 12:01:22PM 0 points [-]

This is supposed to be a toy model of excessive simplicity. Do you have suggestions for improving it (for purposes of presenting to others)?

Comment author: CarlShulman 18 September 2015 03:31:48PM 1 point [-]

Maybe explain how it works while being configured, and then stops working once B gets a better model of the situation or runs more trial-and-error trials?

Comment author: CarlShulman 17 September 2015 07:15:31PM *  6 points [-]

An illustration with a game-playing AI: see 15:50 and after in the video. The system has a reward function based on bytes in memory, which leads it to pause the game forever when it is about to lose.

Comment author: Stuart_Armstrong 17 September 2015 06:34:54AM *  5 points [-]

Maybe the easiest way of generalising this is programming B to put 1 block in the hole, but, because B was trained in a noisy environment, it assigns only a 99.9% probability to a block being in the hole when it observes that it is. Then six blocks in the hole gives higher expected utility, and we get the same behaviour.
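A minimal Python sketch of that expected-utility comparison, assuming B's goal is "at least one block really in the hole" and that its observations of each block are independent and 99.9% reliable (both are assumptions of this toy version, not claims about any particular training setup):

    def p_at_least_one_block_in_hole(n_blocks, per_block_confidence=0.999):
        # Probability B assigns to "at least one block is really in the hole"
        # after observing n blocks go in, assuming independent observations.
        return 1 - (1 - per_block_confidence) ** n_blocks

    print(p_at_least_one_block_in_hole(1))  # 0.999
    print(p_at_least_one_block_in_hole(6))  # ~1 - 1e-18: each extra block still raises expected utility a little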

Comment author: CarlShulman 17 September 2015 06:02:50PM *  1 point [-]

That still involves training it with no negative feedback error term for excess blocks (which would overwhelm a mere 0.1% uncertainty).

Comment author: CarlShulman 17 September 2015 03:02:48AM 2 points [-]

Of course, with this model it's a bit of a mystery why A gave B a reward function that gives 1 per block, instead of one that gives 1 for the first block and a penalty for additional blocks. Basically, why program B with a utility function so seriously out of whack with what you want when programming one perfectly aligned would have been easy?

Comment author: HungryHobo 29 July 2015 10:11:44AM 0 points [-]

Yes, #1 is equivalent to an early filter.

#2 would be somewhat surprising, since there's no physical law that disallows it.

#3 comes close to theology, and would imply low AI risk, since such entities would probably not allow a potentially dangerous AI to exist within any area they control.

#4 is sort of a re-phrasing of #1.

#5 is possible, but implies some strong reason why many civilizations would all reliably choose the same option.

"For #4 and #5 what difference does it make whether biological beings make 'FAI' that helps them or 'UFAI' that kills them before going about its business?"

Do you mean that an alien FAI may look very much like a UFAI to us? If so, I agree.

Comment author: CarlShulman 30 July 2015 12:05:13AM *  0 points [-]

#1 is an early filter, meaning before our current state; #4 would be around or after our current state.

"Do you mean that an alien FAI may look very much like a UFAI to us? If so, I agree."

Not in the sense of harming us. For the Fermi paradox, visible benevolent aliens are as inconsistent with our observations as murderous Berserkers.

I'm trying to get you to explain why you think a belief that "AI is a significant risk" would change our credence in any of #1-5, compared to not believing that.

Comment author: HungryHobo 27 July 2015 05:08:14PM *  1 point [-]

I may not have been clear: by UFAI I didn't just mean an AI that trashes the home planet of the civilization that creates it and then stops, but rather one that then continues to convert the remainder of the universe into computronium to store the integer for its fitness function, or some similar doomsday scenario.

It doesn't matter how safe you are about AI if there are a million other civilizations in the universe and some non-trivial portion of them aren't being as careful as they should be, assuming it's a non-trivial risk.

Which either argues for AI-risk not being so risky or for an early filter.

Comment author: CarlShulman 28 July 2015 10:50:12PM *  1 point [-]

Let's consider a few propositions:

  1. There is enough cumulative early filtration that very few civilizations develop, with less than 1 in expectation in a region like our past light-cone.
  2. Interstellar travel is impossible.
  3. Some civilizations have expanded but not engaged in mega-scale engineering that we could see or colonization that would have pre-empted our existence, and enforce their rules on dissenters.
  4. Civilizations very reliably wipe themselves out before they can colonize.
  5. Civilizations very reliably choose not to expand at all.

#1-3 account for the Great Filter directly, and whether biological beings make AI they are happy with is irrelevant. For #4 and #5 what difference does it make whether biological beings make 'FAI' that helps them or 'UFAI' that kills them before going about its business? Either way the civilization (biological, machine, or both) could still wipe itself out or not (AIs could nuke each other out of existence too), and send out colonizers or not.

Unless there is some argument that 'UFAI' is much less likely to wipe out civilization (including itself), or much more likely to send out colonizers, how do the odds of alien 'FAI' vs 'UFAI' matter for explaining the Great Filter any more than whether aliens have scales or feathers? Either way they could produce visible signs or colonize Earth.

Comment author: HungryHobo 27 July 2015 01:13:00PM 0 points [-]

There's also the UFAI-Fermi-paradox: If AI is a significant risk and if intelligent life is common, why aren't we all paperclips already? AI doesn't work as a filter, because it's the kind of disaster that's likely to keep spreading, and we'd expect to see large parts of the sky going dark as the stars get turned into pictures of smiling faces or computronium.

There's also the anthropic principle. We probably wouldn't be alive to ask the question in a universe where the earth has been turned into a strip-mall for aliens.

Though combining the anthropic principle with the Drake equation gives us another possibility.

Compute the Drake equation for a cone 60 light years thick extending back through time outwards from earth: x billion planets, with y civilisations, with z probability of producing a UFAI. That gives you a rough estimate of the chances of someone else's UFAI killing us all within your lifetime.
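A toy Python sketch of the shape of that calculation. Every number below is a placeholder assumption rather than a figure from this comment, and the geometry is simplified to "each potential source gets a ~60-year launch window during which a near-lightspeed expansion front would reach Earth within your lifetime":

    years_at_risk    = 60      # the "cone 60 light years thick": arrivals within your lifetime
    stars_in_horizon = 1e11    # stars whose past could causally reach us (placeholder)
    planets_per_star = 0.1     # habitable planets per star (placeholder)
    p_civilisation   = 1e-6    # probability a habitable planet hosts a technological civilisation (placeholder)
    p_ufai_per_year  = 1e-7    # probability per civilisation per year of launching an expanding UFAI (placeholder)

    # Each source contributes a launch window of ~years_at_risk during which a
    # near-lightspeed expansion front would sweep over Earth within your lifetime.
    expected_arrivals = (stars_in_horizon * planets_per_star * p_civilisation
                         * p_ufai_per_year * years_at_risk)
    print(expected_arrivals)   # ~0.06 with these placeholder numbers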

Comment author: CarlShulman 27 July 2015 04:36:25PM 2 points [-]

"There's also the UFAI-Fermi-paradox:"

This is just the regular Fermi paradox/Great Filter. If AI has any impact, it's that it may make space colonization easier. But what matters for that is that industrial civilizations will eventually develop AI (say, within a million years). Whether the ancient aliens would be happy with the civilization that does the colonizing (i.e. UFAI vs. FAI) is irrelevant to the Filter.

You could also have the endotherm-Fermi-paradox, or the hexapodal-Fermi-paradox, or the Klingon-Great-Filter, but there is little to be gained by slicing up the Filter in that way.

Comment author: CarlShulman 26 July 2015 05:04:42PM *  6 points [-]

"Furthermore, although smaller stars are much more common than larger stars (the Sun is actually larger than over 80% of stars in the universe) stars smaller than about 0.5 solar masses (and thus 0.08 solar luminosities) are usually 'flare stars' – possessing very strong convoluted magnetic fields and periodically putting out flares and X-ray bursts that would frequently strip away the ozone and possibly even the atmosphere of an earthlike planet."

I have been wanting better stats on this for a while. Basically, what percentage of the eventual sum of potential-for-life-weighted habitable windows (undisturbed by technology) comes from small red dwarfs, which can last far longer than our sun, once those long stellar lifetimes are set against the various (nasty-looking) problems? ETA: wikipedia article. And how robust is the evidence?
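For what it's worth, a minimal Python sketch of the kind of estimate being asked for. The Salpeter IMF slope, the rough lifetime scaling, the mass cutoffs, and the equal per-star habitability weight are all crude assumptions of mine, and the flare/atmosphere-stripping problem is deliberately left unmodelled; the point is just that the answer hinges almost entirely on whether those small stars count:

    from scipy.integrate import quad

    def imf(m):
        return m ** -2.35             # Salpeter initial mass function (unnormalised), stars per unit mass

    def lifetime_gyr(m):
        return 10.0 * m ** -2.5       # rough main-sequence lifetime scaling, in Gyr

    def window(m):
        return imf(m) * lifetime_gyr(m)   # habitable-window time contributed per unit mass of the IMF

    total, _      = quad(window, 0.08, 1.5)   # 0.08 Msun (hydrogen-burning limit) up to ~1.5 Msun;
                                              # more massive stars add little to this steep integrand
    below_half, _ = quad(window, 0.08, 0.5)

    print(below_half / total)   # ~0.999 under these assumptions: nearly everything rides on whether
                                # flare stars below 0.5 solar masses are actually habitable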
