Comment author: Pentashagon 05 December 2015 04:22:48AM 7 points [-]

Religion solves some coordination problems very well. Witness religions outlasting numerous political and philosophical movements, often through coordinated effort. Some wrong beliefs assuage bad emotions and thoughts, allowing humans to internally deal with the world beyond the reach of god. Some of the same wrong beliefs also hurt and kill a shitload of people, directly and indirectly.

My personal belief is that religions were probably necessary for humanity to rise from agricultural to technological societies, and tentatively necessary to maintain technological societies until FAI, especially in a long-takeoff scenario. We have limited evidence that religion-free or wrong-belief-free societies can flourish. Most first-world nations are officially and practically agnostic but have sizable populations of religious people. The nations which are actively anti-religious generally have their own strong dogmatic anti-scientific beliefs that the leaders are trying to push, and they still can't stomp out all religions.

Basically, until doctors can defeat virtually all illness and death, and leaders can effectively coordinate global humane outcomes without religions, I think that religions serve as a sanity line above destructive hedonism or despair.

Comment author: diegocaleiro 29 November 2015 10:29:49PM 0 points [-]

See the link with a flowchart on 12.

Comment author: Pentashagon 05 December 2015 03:41:21AM 0 points [-]

I looked at the flowchart and saw the divergence between the two opinions into mostly separate ends: settling exoplanets and solving sociopolitical problems on Earth on the slow-takeoff path, vs. focusing heavily on how to build FAI on the fast-takeoff path. But then I saw your name in the fast-takeoff bucket for conveying concepts to AI, and was confused because your article was mostly about practically abandoning the fast-takeoff things and focusing on slow-takeoff things like EA. Or is the point that 2014!diego has significantly different beliefs about fast vs. slow than 2015!diego?

Comment author: Pentashagon 28 November 2015 11:56:38PM 0 points [-]

Is it reasonable to say that what really matters is whether there's a fast or slow takeoff? A slow takeoff or no takeoff may limit us to EA for the indefinite future, and fast takeoff means transhumanism and immortality are probably conditional on and subsequent to threading the narrow eye of the FAI needle.

Comment author: Gav 30 September 2015 12:40:31AM 1 point [-]

Also just another thing that might be interesting:

Check out 'intermediate temperature storage': storing at slightly warmer than liquid nitrogen temperatures (-130°C as opposed to -196°C) to avoid any fracturing*. This is right near the glass transition temperature, so no nucleation can proceed.

The tricky part is that there aren't any practical, scalable chemicals with a handy phase change near -130°C (in the way that liquid nitrogen has one at -196°C), so any system to keep patients there would have to be engineered as a custom electrically controlled device, rather than a simple vat of liquid.

Not impossible, but it adds a lot of complexity. They might end up doing it in a few years by putting a dewar in a dewar, and making a robust heater that will fail safe down to LN2 temperatures if there's any problem.
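
Purely as a hypothetical sketch of what that heater's control logic could look like (the sensor and heater interfaces below are invented placeholders, not any real device's API): hold near -130°C by heating against the LN2 cold source, and cut the heater on any fault so the system sinks toward -196°C instead of warming up.

    TARGET_C = -130.0        # intermediate storage temperature
    LN2_C = -196.0           # where the system settles if the heater stays off

    def control_step(read_temp_c, set_heater_power):
        """One iteration of a thermostat that fails safe toward LN2 temperature."""
        try:
            t = read_temp_c()
            if t is None or not (LN2_C - 5.0 <= t <= 0.0):
                raise RuntimeError("implausible sensor reading")
            error = TARGET_C - t                       # positive means too cold
            power = max(0.0, min(1.0, 0.05 * error))   # simple proportional control
            set_heater_power(power)
        except Exception:
            set_heater_power(0.0)   # any fault: no heat, sink toward -196 C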

*Personally I'm not concerned with fracturing; it seems like a very information-preserving change compared to everything else.

Comment author: Pentashagon 01 October 2015 03:21:00AM 0 points [-]

The tricky part is that there aren't any practical, scalable chemicals with a handy phase change near -130°C (in the way that liquid nitrogen has one at -196°C), so any system to keep patients there would have to be engineered as a custom electrically controlled device, rather than a simple vat of liquid.

Phase changes are also pressure dependent; it would be odd if 1 atm just happened to be optimal for cryonics. Presumably substances have different temperature/pressure curves and there might be a thermal/pressure path that avoids ice crystal formation but ends up below the glass transition temperature.

Comment author: Kaj_Sotala 16 September 2015 06:55:57PM 7 points [-]

Similarly for creating 10^100 happy lives, how exactly would you go about doing that in our Universe?

By some alternative theory of physics that has a, say, .000000000000000000001 probability of being true.

Comment author: Pentashagon 19 September 2015 06:09:43AM -1 points [-]

Which particular event has P = 10^-21? It seems like part of the Pascal's mugging problem is a type error: we have a utility function U(W) over physical worlds, but we're trying to calculate expected utility over strings of English words instead.

Pascal's Mugging is a constructive proof that trying to maximize expected utility over logically possible worlds doesn't work in any particular world, at least with the theories we've got now. Anything that doesn't solve reflective reasoning under probabilistic uncertainty won't help against Muggings promising things from other possible worlds unless we just ignore the other worlds.
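
For concreteness, the naive expected-utility arithmetic that drives the mugging, using the 10^100 lives and ~10^-21 probability from the comments above:

    p_alt_physics = 1e-21        # probability granted to the mugger's alternative physics
    lives_promised = 1e100       # payoff the mugger promises

    expected_lives = p_alt_physics * lives_promised
    print(expected_lives)        # 1e+79: still swamps any everyday-scale consideration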

Comment author: Yvain 17 September 2015 05:33:50AM *  14 points [-]

I don't know if this solves very much. As you say, if we use the number 1, then we shouldn't wear seatbelts, get fire insurance, or eat healthy to avoid getting cancer, since all of those can be classified as Pascal's Muggings. But if we start going for less than one, then we're just defining away Pascal's Mugging by fiat, saying "this is the level at which I am willing to stop worrying about this".

Also, as some people elsewhere in the comments have pointed out, this makes probability non-additive in an awkward sort of way. Suppose that if you eat unhealthy, you increase your risk of one million different diseases by plus one-in-a-million chance of getting each. Suppose also that eating healthy is a mildly unpleasant sacrifice, but getting a disease is much worse. If we calculate this out disease-by-disease, each disease is a Pascal's Mugging and we should choose to eat unhealthy. But if we calculate this out in the broad category of "getting some disease or other", then our chances are quite high and we should eat healthy. But it's very strange that our ontology/categorization scheme should affect our decision-making. This becomes much more dangerous when we start talking about AIs.
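
As a rough check of the aggregation point (treating the million one-in-a-million risks as independent, which is an extra assumption on my part):

    p_each = 1e-6            # per-disease risk added by eating unhealthy
    n_diseases = 10**6

    p_none = (1 - p_each) ** n_diseases
    p_at_least_one = 1 - p_none
    print(p_at_least_one)    # ~0.632, roughly 1 - 1/e: far from negligible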

Also, does this create weird nonlinear thresholds? For example, suppose that you live on average 80 years. If some event which causes you near-infinite disutility happens every 80.01 years, you should ignore it; if it happens every 79.99 years, then preventing it becomes the entire focus of your existence. But it seems nonsensical for your behavior to change so drastically based on whether an event is every 79.99 years or every 80.01 years.

Also, a world where people follow this plan is a world where I make a killing on the Inverse Lottery (rules: 10,000 people take tickets; each ticket holder gets paid $1, except a randomly chosen "winner" who must pay $20,000).
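
For concreteness, a quick expected-value check of those rules:

    n_tickets = 10_000
    payout = 1                   # received by each of the 9,999 ordinary ticket holders
    penalty = 20_000             # paid by the single "winner"

    ev_per_ticket = (n_tickets - 1) / n_tickets * payout - penalty / n_tickets
    print(ev_per_ticket)         # -1.0001: the near-certain $1 hides a losing bet

    organizer_take = penalty - (n_tickets - 1) * payout
    print(organizer_take)        # 10001 collected per run by whoever sells the tickets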

Comment author: Pentashagon 19 September 2015 05:36:44AM -1 points [-]

But it seems nonsensical for your behavior to change so drastically based on whether an event is every 79.99 years or every 80.01 years.

Doesn't it actually make sense to put that threshold at the predicted usable lifespan of the universe?

Comment author: Pentashagon 01 September 2015 04:13:35AM 0 points [-]

There are many models; the model of the box which we simulate and the AI's models of the model of the box. For this ultimate box to work there would have to be a proof that every possible model the AI could form contains at most a representation of the ultimate box model. This seems at least as hard as any of the AI boxing methods, if not harder because it requires the AI to be absolutely blinded to its own reasoning process despite having a human subject to learn about naturalized induction/embodiment from.

It's tempting to say that we could "define the AI's preferences only over the model" but that implies a static AI model of the box-model that can't benefit from learning or else a proof that all AI models are restricted as above. In short, it's perfectly fine to run a SAT-solver over possible permutations of the ultimate box model trying to maximize some utility function but that's not self-improving AI.

Comment author: PhilGoetz 06 August 2015 03:28:23AM *  0 points [-]

In this particular case, no. Not with the page table attack. What would help would be encrypting the mapping from virtual memory to physical memory--but that would GREATLY slow down execution speed.

I don't think the "homomorphic encryption" idea works as advertised in that post--being able to execute arithmetic operations on encrypted data doesn't enable you to execute the operations that are encoded within that encrypted data.

Comment author: Pentashagon 16 August 2015 09:05:25AM *  1 point [-]

I don't think the "homomorphic encryption" idea works as advertised in that post--being able to execute arithmetic operations on encrypted data doesn't enable you to execute the operations that are encoded within that encrypted data.

A fully homomorphic encryption scheme for single-bit plaintexts (as in Gentry's scheme) gives us:

  • For each public key K a field F with efficient arithmetic operations +F and *F.
  • Encryption function E(K, p) = c: p∈{0,1}, c∈F
  • Decryption function D(S, c) = p: p∈{0,1}, c∈F where S is the secret key for K.
  • Homomorphisms E(K, a) +F E(K, b) = E(K, a ⊕ b) and E(K, a) *F E(K, b) = E(K, a * b)
  • a ⊕ b equivalent to XOR over {0,1} and a * b equivalent to AND over {0,1}

Boolean logic circuits of arbitrary depth can be built from the XOR and AND equivalents, allowing computation of arbitrary binary functions. Let M∈{0,1}^N be a sequence of bits representing the state of a bounded UTM with an arbitrary program on its tape. Let binary function U(M): {0,1}^N -> {0,1}^N compute the next state of M. Let E(K, B) and D(S, E) also operate element-wise over sequences of bits and elements of F, respectively. Let UF be the set of logic circuits equivalent to U (UFi calculates the ith bit of U's result) but with XOR and AND replaced by +F and *F. Now D(S, UF^t(E(K, M))) = U^t(M) shows that an arbitrary number of UTM steps can be calculated homomorphically by evaluating equivalent logic circuits over the homomorphically encrypted bits of the state.
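
Here's a toy sketch of that construction. The MockFHE class below is only a placeholder standing in for a real single-bit FHE scheme (it is not Gentry's scheme and provides no security); the point is just to show a step function built purely from XOR and AND gates being evaluated over ciphertexts with +F and *F, then decrypted to the same result as the plaintext evaluation.

    import secrets

    class MockFHE:
        """Placeholder for a single-bit FHE scheme: E(K, p), D(S, c), +F, *F.

        A real scheme (e.g. Gentry's) would use noisy lattice ciphertexts;
        here the 'ciphertext' just wraps the bit so the homomorphic
        identities hold trivially and the control flow is visible.
        """
        def __init__(self):
            self.secret = secrets.randbits(64)      # stand-in key material, unused

        def encrypt(self, bit):                     # E(K, p) -> c
            return ("ct", bit)

        def decrypt(self, ct):                      # D(S, c) -> p
            return ct[1]

        def add(self, c1, c2):                      # +F: homomorphic XOR
            return ("ct", c1[1] ^ c2[1])

        def mul(self, c1, c2):                      # *F: homomorphic AND
            return ("ct", c1[1] & c2[1])

    def step_plain(m):
        """One step U of a toy 3-bit 'machine' (a full adder), XOR/AND only."""
        a, b, carry = m
        s = a ^ b ^ carry
        carry_out = (a & b) ^ (a & carry) ^ (b & carry)   # majority, over GF(2)
        return [s, b, carry_out]

    def step_encrypted(fhe, cm):
        """The same circuit UF, with XOR -> +F and AND -> *F over ciphertexts."""
        a, b, carry = cm
        s = fhe.add(fhe.add(a, b), carry)
        carry_out = fhe.add(fhe.add(fhe.mul(a, b), fhe.mul(a, carry)),
                            fhe.mul(b, carry))
        return [s, b, carry_out]

    fhe = MockFHE()
    state = [1, 0, 1]
    enc_state = [fhe.encrypt(bit) for bit in state]
    for _ in range(5):                               # t steps on both sides
        state = step_plain(state)
        enc_state = step_encrypted(fhe, enc_state)
    assert [fhe.decrypt(c) for c in enc_state] == state   # D(S, UF^t(E(K, M))) = U^t(M)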

Comment author: James_Miller 11 August 2015 03:50:43PM 9 points [-]

Harvest organs from living, healthy, poor donors. Go to a poor country and find batches of 100 people earning $1 a day who are willing to give up their lives with probability .01 in return for 20 years wages. You will have to pay out $730,000 per batch, but in return you get the healthy organs from a living donor which should be worth a lot more than this. Run the operation as a charity to reduce negative publicity, and truthfully stress that the prime goal of the charity is to help the poorest of the poor.

Comment author: Pentashagon 14 August 2015 06:35:41AM 1 point [-]

Fly the whole living, healthy, poor person to the rich country and replace the person who needs new organs. Education costs are probably less than the medical costs, but it's probably also wise to select for more intelligent people from the poor country. With an N-year pipeline of such replacements there's little to no latency. This doesn't even require a poor country at all; just educate suitable replacements from the rich country and keep them healthy.

Comment author: Thomas 13 August 2015 09:28:39AM 1 point [-]

To avoid the elevation of, say, Denver, you have to have a "basement" about 1600 meters down, and put the port in the basement.

Not such a big problem; there are deeper mines in the world.

Comment author: Pentashagon 14 August 2015 06:13:02AM 0 points [-]

You save energy not lifting a cargo ship 1600 meters, but you spend energy lifting the cargo itself. If there are rivers that can be turned into systems of locks it may be cheaper to let water flowing downhill do the lifting for you. Denver is an extreme example, perhaps.
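
Rough numbers for the cargo-lifting cost, assuming (my figure, purely for illustration) a 50,000-tonne cargo load:

    g = 9.81                     # m/s^2
    cargo_kg = 50_000 * 1000     # assumed 50,000-tonne load (illustrative)
    height_m = 1600              # roughly Denver's elevation

    energy_j = cargo_kg * g * height_m
    energy_mwh = energy_j / 3.6e9
    print(f"{energy_mwh:.0f} MWh")   # ~218 MWh per load, before any machine losses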
