Religion solves some coordination problems very well. Witness religions outlasting numerous political and philosophical movements, often through coordinated effort. Some wrong beliefs assuage bad emotions and thoughts, allowing humans to internally deal with the world beyond the reach of god. Some of the same wrong beliefs also hurt and kill a shitload of people, directly and indirectly.

My personal belief is that religions were probably necessary for humanity to rise from agricultural to technological societies, and are tentatively necessary to maintain technological societies until FAI, especially in a long-takeoff scenario. We have limited evidence that religion-free or wrong-belief-free societies can flourish. Most first-world nations are officially and practically agnostic but have sizable populations of religious people. The nations which are actively anti-religious generally have their own strong dogmatic anti-scientific beliefs that the leaders are trying to push, and even they can't stamp out all religions.

Basically, until doctors can defeat virtually all illness and death, and leaders can effectively coordinate global humane outcomes without religions, I think that religions serve as a sanity line above destructive hedonism or despair.

I looked at the flowchart and saw the divergence of the two opinions into mostly separate ends: settling exoplanets and solving sociopolitical problems on Earth on the slow-takeoff path, vs. focusing heavily on how to build FAI on the fast-takeoff path. But then I saw your name in the fast-takeoff bucket for conveying concepts to AI, and was confused that your article was mostly about practically abandoning the fast-takeoff things and focusing on slow-takeoff things like EA. Or is the point that 2014!diego has significantly different beliefs about fast vs. slow takeoff than 2015!diego?

Is it reasonable to say that what really matters is whether there's a fast or slow takeoff? A slow takeoff or no takeoff may limit us to EA for the indefinite future, and fast takeoff means transhumanism and immortality are probably conditional on and subsequent to threading the narrow eye of the FAI needle.

The tricky part is that there aren't any practical, scalable chemicals with a handy phase change near −130°C (in the same way that liquid nitrogen has one at −196°C), so any system to keep patients there would have to be engineered as a custom, electrically controlled device rather than a simple vat of liquid.

Phase changes are also pressure dependent; it would be odd if 1 atm just happened to be optimal for cryonics. Presumably substances have different temperature/pressure curves and there might be a thermal/pressure path that avoids ice crystal formation but ends up below the glass transition temperature.

Which particular event has P = 10^-21? It seems like part of the Pascal's mugging problem is a type error: we have a utility function U(W) over physical worlds, but we're trying to calculate expected utility over strings of English words instead.

Pascal's Mugging is a constructive proof that trying to maximize expected utility over logically possible worlds doesn't work in any particular world, at least with the theories we've got now. Anything that doesn't solve reflective reasoning under probabilistic uncertainty won't help against Muggings promising things from other possible worlds unless we just ignore the other worlds.

But it seems nonsensical for your behavior to change so drastically based on whether an event is every 79.99 years or every 80.01 years.

Doesn't it actually make sense to put that threshold at the predicted usable lifespan of the universe?

There are many models: the model of the box which we simulate, and the AI's models of that box model. For this ultimate box to work, there would have to be a proof that every possible model the AI could form contains at most a representation of the ultimate box model. This seems at least as hard as any of the AI boxing methods, if not harder, because it requires the AI to be absolutely blinded to its own reasoning process despite having a human subject to learn about naturalized induction/embodiment from.

It's tempting to say that we could "define the AI's preferences only over the model," but that implies either a static AI model of the box model that can't benefit from learning, or else a proof that all AI models are restricted as above. In short, it's perfectly fine to run a SAT solver over possible permutations of the ultimate box model trying to maximize some utility function, but that's not self-improving AI.

I don't think the "homomorphic encryption" idea works as advertised in that post: being able to execute arithmetic operations on encrypted data doesn't enable you to execute the operations that are encoded within that encrypted data.

A fully homomorphic encryption scheme for single-bit plaintexts (as in Gentry's scheme) gives us:

  • For each public key K, a field F with efficient arithmetic operations +F and *F.
  • Encryption function E(K, p) = c: p ∈ {0,1}, c ∈ F.
  • Decryption function D(S, c) = p: p ∈ {0,1}, c ∈ F, where S is the secret key for K.
  • Homomorphisms E(K, a) +F E(K, b) = E(K, a ⊕ b) and E(K, a) *F E(K, b) = E(K, a * b).
  • a ⊕ b is equivalent to XOR over {0,1}, and a * b is equivalent to AND over {0,1}.

Boolean logic circuits of arbitrary depth can be built from the XOR and AND equivalents, allowing computation of arbitrary binary functions. Let M ∈ {0,1}^N be a sequence of bits representing the state of a bounded UTM with an arbitrary program on its tape. Let the binary function U(M): {0,1}^N -> {0,1}^N compute the next state of M. Let E(K, M) and D(S, C) also operate element-wise over sequences of bits and of elements of F, respectively. Let UF be the set of logic circuits equivalent to U (UFi calculates the ith bit of U's result) but with XOR and AND replaced by +F and *F. Now D(S, UF^t(E(K, M))) = U^t(M) shows that an arbitrary number of UTM steps can be calculated homomorphically by evaluating equivalent logic circuits over the homomorphically encrypted bits of the state.
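To make the "circuits of arbitrary depth from XOR and AND" step concrete, here's a toy sketch (my own illustration, not real FHE and not Gentry's scheme): the two gates form a complete basis, so any bit of the next-state function U can be compiled into a circuit of just these operations. In the real scheme, `xor` and `and_` would be +F and *F on ciphertexts; here they act directly on plaintext bits to show only the circuit-completeness claim.

```python
# Toy illustration: XOR and AND over {0,1} are a complete basis.
# In the encrypted setting these would be +F and *F on elements of F.

def xor(a, b):   # stands in for +F
    return a ^ b

def and_(a, b):  # stands in for *F
    return a & b

def not_(a):
    # NOT a = a XOR 1 (the evaluator would be given E(K, 1))
    return xor(a, 1)

def or_(a, b):
    # OR built from the basis: a OR b = (a XOR b) XOR (a AND b)
    return xor(xor(a, b), and_(a, b))

def full_adder(a, b, c):
    """One bit of a ripple-carry adder, using only XOR/AND gates."""
    s = xor(xor(a, b), c)
    carry = or_(and_(a, b), and_(c, xor(a, b)))
    return s, carry

# Check the circuit against the arithmetic it's supposed to compute.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, carry = full_adder(a, b, c)
            assert s + 2 * carry == a + b + c
print("full adder correct on all 8 inputs")
```

Each UFi circuit in the comment above is exactly this kind of composition, just evaluated over encrypted field elements instead of raw bits.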

Fly the whole living, healthy, poor person to the rich country and replace the person who needs new organs. Education costs are probably less than the medical costs, but it's probably wise to also select for more intelligent people from the poor country. With an N-year pipeline of such replacements there's little to no latency. This doesn't even require a poor country at all; just educate suitable replacements from the rich country and keep them healthy.

You save energy not lifting a cargo ship 1600 meters, but you spend energy lifting the cargo itself. If there are rivers that can be turned into systems of locks it may be cheaper to let water flowing downhill do the lifting for you. Denver is an extreme example, perhaps.
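As a back-of-the-envelope check (my numbers, assuming Denver's roughly 1600 m elevation), the gravitational potential energy per tonne of cargo is small compared to typical shipping costs, which is worth keeping in mind when weighing the lock-system idea:

```python
# Energy to lift one tonne of cargo ~1600 m (approximate Denver elevation).
g = 9.81            # m/s^2, gravitational acceleration
height_m = 1600.0   # rough elevation gain
mass_kg = 1000.0    # one metric tonne of cargo

energy_j = mass_kg * g * height_m   # E = m * g * h
energy_kwh = energy_j / 3.6e6       # joules -> kilowatt-hours
print(round(energy_kwh, 2))         # ~4.36 kWh per tonne
```

At typical grid electricity prices that's well under a dollar per tonne of lift, so the raw energy is cheap; the engineering and capital cost of the lifting system would dominate.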