
Comment author: CarlShulman 02 December 2016 12:26:05AM 3 points [-]

A different possibility is identifying vectors in Facebook-behavior space, and letting users alter their feeds accordingly, e.g. I might want to see my feed shifted in the direction of more intelligent users, people outside the US, other political views, etc. At the individual level, I might be able to request a shift in my feed in the direction of individual Facebook friends I respect (where they give general or specific permission).
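A minimal sketch of the kind of re-ranking this could mean, purely illustrative: the embeddings, the direction vector, and the alpha parameter are all hypothetical, not anything Facebook actually exposes.

```python
import numpy as np

def shifted_feed(items, item_vecs, base_scores, direction, alpha=0.5):
    """Re-rank a feed by nudging scores along a direction in
    behavior-embedding space (e.g. toward 'users outside the US').

    items       : list of item ids
    item_vecs   : (n_items, d) array of item embeddings (hypothetical)
    base_scores : (n_items,) ordinary ranking scores for this user
    direction   : (d,) vector for the requested shift, e.g. the centroid
                  of a reference group's engagement minus the user's own
    alpha       : how strongly to shift the feed
    """
    direction = direction / np.linalg.norm(direction)
    adjusted = base_scores + alpha * (item_vecs @ direction)
    return [items[i] for i in np.argsort(-adjusted)]
```

The per-friend version would just use an individual friend's engagement centroid (minus one's own) as the direction, with their permission as described above.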

Comment author: James_Miller 22 November 2016 04:42:07AM 2 points [-]

Isn't this insanely dangerous? Couldn't bacteria immune to viruses out-compete all other bacteria and destroy most of Earth's biosphere?

Comment author: CarlShulman 24 November 2016 05:08:50AM 3 points [-]

That advantage only goes so far:

  • Plenty of nonviral bacteria-eating entities exist, and would become more numerous
  • Plant and antibacterial defenses aren't viral-based
  • For the bacteria to compete in the same niche as unmodified versions it has to fulfill a similar ecological role: photosynthetic cyanobacteria with altered DNA would still produce oxygen and provide food
  • It couldn't benefit from exchanging genetic material with other kinds of bacteria

Comment author: CellBioGuy 04 October 2016 10:00:49PM *  11 points [-]

Advice solicited. Topics of interest I have lined up for upcoming posts include:

  • The history of life on Earth and its important developments
  • The nature of the last universal common ancestor (REALLY good new research on this just came out)
  • The origin of life and the different schools of thought on it
  • Another exploration of time, going over a paper from this summer that did essentially what I did a few months earlier in my "Space and Time Part II" calculations of our place in the order of star and planet formation (showing that we are not early, and are right around when you would expect to find the average biosphere), but extended it to types of stars and their lifetimes in a way I think I can improve upon.
  • My thoughts on how and why SETI has been sidetracked away from activities that are more likely to be productive and towards activities that are all but doomed to fail, with a few theoretical case studies
  • My thoughts on how the Fermi paradox / 'great filter' is an ill-posed concept
  • Interesting recent research on the apparent evolutionary prerequisites for primate intelligence

Any thoughts on which of these are of particular interest, or other ideas to delve into?

Comment author: CarlShulman 07 October 2016 12:19:07AM 5 points [-]

Primates and eukaryotes would be good.

Comment author: CarlShulman 16 July 2016 05:35:24PM *  8 points [-]

Your example has 3 states: vanilla, chocolate, and neither.

But you only explicitly assigned utilities to 2 of them, although you implicitly assigned the state of 'neither' a utility of 0 initially. Then when you applied the transformation to vanilla and chocolate you didn't apply it to the 'neither' state, which altered preferences for gambles over both transformed and untransformed states.

E.g. if we initially assigned u(neither)=0 then after the transformation we have u(neither)=4, u(vanilla)=7, u(chocolate)=12. Then an action with a 50% chance of neither and 50% chance of chocolate has expected utility 8, while the 100% chance of vanilla has expected utility 7.
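A quick numeric check of that point. The pre-transformation utilities and the "+4" shift below are assumptions chosen only so that the post-transformation numbers above come out; what matters is the comparison between transforming every state and transforming only two of them.

```python
def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

gamble = {"neither": 0.5, "chocolate": 0.5}
sure_vanilla = {"vanilla": 1.0}

# Hypothetical pre-transformation utilities (assumed for illustration).
u_before = {"neither": 0, "vanilla": 3, "chocolate": 8}

# The same positive affine transformation (here u + 4) applied to ALL states...
u_all = {s: v + 4 for s, v in u_before.items()}          # 4, 7, 12

# ...versus applied only to vanilla and chocolate, leaving 'neither' at 0.
u_partial = dict(u_all, neither=u_before["neither"])     # 0, 7, 12

for name, u in [("before", u_before),
                ("transform all states", u_all),
                ("transform only two states", u_partial)]:
    print(name, expected_utility(gamble, u), expected_utility(sure_vanilla, u))

# before:                     4.0 vs 3.0  -> gamble preferred
# transform all states:       8.0 vs 7.0  -> same preference (the comment's numbers)
# transform only two states:  6.0 vs 7.0  -> preference flips, which is the problem
```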

Comment author: Stuart_Armstrong 18 September 2015 12:01:22PM 0 points [-]

This is supposed to be a toy model of excessive simplicity. Do you have suggestions for improving it (for purposes of presenting to others)?

Comment author: CarlShulman 18 September 2015 03:31:48PM 1 point [-]

Maybe explain how it works while being configured, and then stops working once B gets a better model of the situation or runs more trial-and-error iterations?

Comment author: CarlShulman 17 September 2015 07:15:31PM *  6 points [-]

An illustration with a game-playing AI: see 15:50 and after in the video. The system has a reward function based on bytes in memory, which leads it to pause the game forever when it is about to lose.

Comment author: Stuart_Armstrong 17 September 2015 06:34:54AM *  5 points [-]

Maybe the easiest way of generalising this is programming B to put 1 block in the hole, but, because B was trained in a noisy environment, it assigns only a 99.9% chance to the block actually being in the hole when it observes that it is. Then six blocks in the hole is higher expected utility, and we get the same behaviour.
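A back-of-the-envelope version of that, assuming (my assumptions, not Stuart's) that B's reward is 1 for at least one block really being in the hole and that per-block observation errors are independent:

```python
def p_at_least_one_in_hole(n_blocks, p_obs_correct=0.999):
    """Probability that at least one block is really in the hole,
    given B observed all n_blocks as 'in' with 99.9% reliability each."""
    return 1 - (1 - p_obs_correct) ** n_blocks

print(p_at_least_one_in_hole(1))  # 0.999
print(p_at_least_one_in_hole(6))  # 1.0 to float precision (true value: 1 - 1e-18)
```

Expected utility keeps creeping upward with every extra block, so B keeps adding them.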

Comment author: CarlShulman 17 September 2015 06:02:50PM *  1 point [-]

That still involves training it with no negative feedback error term for excess blocks (which would overwhelm a mere 0.1% uncertainty).

Comment author: CarlShulman 17 September 2015 03:02:48AM 2 points [-]

Of course, with this model it's a bit of a mystery why A gave B a reward function that gives 1 per block, instead of one that gives 1 for the first block and a penalty for additional blocks. Basically, why program B with a utility function so seriously out of whack with what you want when programming one perfectly aligned would have been easy?
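The contrast between the two reward functions is easy to write down; a minimal sketch (the function names and the size of the penalty are mine, not from the original toy model):

```python
def reward_per_block(blocks_in_hole):
    """The problematic objective: 1 utility per block in the hole."""
    return blocks_in_hole

def reward_one_block(blocks_in_hole, penalty=10):
    """Closer to what A wants: 1 for the first block, a penalty for extras."""
    return min(blocks_in_hole, 1) - penalty * max(blocks_in_hole - 1, 0)

print([reward_per_block(n) for n in range(4)])  # [0, 1, 2, 3]
print([reward_one_block(n) for n in range(4)])  # [0, 1, -9, -19]
```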

Comment author: HungryHobo 29 July 2015 10:11:44AM 0 points [-]

Yes, #1 is equivalent to an early filter.

#2 would be somewhat surprising since there's no physical law that disallows it.

#3 comes close to theology and would imply low AI risk, since such entities would probably not allow a potentially dangerous AI to exist within any area they control.

#4 is sort of a re-phrasing of #1.

#5 is possible but implies some strong reason why many would all reliably choose the same options.

> For #4 and #5 what difference does it make whether biological beings make 'FAI' that helps them or 'UFAI' that kills them before going about its business?

Do you mean that an alien FAI may look very much like an UFAI to us? If so I agree.

Comment author: CarlShulman 30 July 2015 12:05:13AM *  0 points [-]

#1 is an early filter, meaning filtration before our current state; #4 would be filtration around or after our current state.

> Do you mean that an alien FAI may look very much like an UFAI to us? If so I agree.

Not in the sense of harming us. For the Fermi paradox, visible benevolent aliens are as inconsistent with our observations as murderous Berserkers.

I'm trying to get you to explain why you think a belief that "AI is a significant risk" would change our credence in any of #1-5, compared to not believing that.

Comment author: HungryHobo 27 July 2015 05:08:14PM *  1 point [-]

I may not have been clear: by UFAI I didn't just mean an AI which trashes the home planet of the civilization that creates it and then stops, but rather one which then continues to convert the remainder of the universe into computronium to store the integer for its fitness function, or some similar doomsday scenario.

It doesn't matter how safe you are about AI if there are a million other civilizations in the universe and some non-trivial portion of them aren't being as careful as they should be, assuming it's a non-trivial risk.

Which argues either for AI not being so risky or for an early filter.

Comment author: CarlShulman 28 July 2015 10:50:12PM *  1 point [-]

Let's consider a few propositions:

  1. There is enough cumulative early filtration that very few civilizations develop, with less than 1 in expectation in a region like our past light-cone.
  2. Interstellar travel is impossible.
  3. Some civilizations have expanded but not engaged in mega-scale engineering that we could see or colonization that would have pre-empted our existence, and enforce their rules on dissenters.
  4. Civilizations very reliably wipe themselves out before they can colonize.
  5. Civilizations very reliably choose not to expand at all.

1-3 account for the Great Filter directly, and whether biological beings make AI they are happy with is irrelevant. For #4 and #5 what difference does it make whether biological beings make 'FAI' that helps them or 'UFAI' that kills them before going about its business? Either way the civilization (biological, machine, or both) could still wipe itself out or not (AIs could nuke each other out of existence too), and send out colonizers or not.

Unless there is some argument that 'UFAI' is much less likely to wipe out civilization (including itself), or much more likely to send out colonizers, how do the odds of alien 'FAI' vs 'UFAI' matter for explaining the Great Filter any more than whether aliens have scales or feathers? Either way they could produce visible signs or colonize Earth.
