Rebutting radical scientific skepticism

17 asr 30 April 2014 07:40PM

Suppose you distrusted everything you had ever read about science. How much of modern scientific knowledge could you verify for yourself, using only your own senses and the sort of equipment you could easily obtain?  And how much more could you verify if you also accepted third-party evidence, whenever many thousands of people could easily check the facts?



An ethical puzzle about brain emulation

14 asr 13 December 2013 09:53PM

I've been thinking about ethics and brain emulations for a while, and I have now realized that I am confused.  Here are five scenarios. I am pretty sure the first is morally problematic, and pretty sure the last is completely innocuous. But I can't find a clean way to partition the intermediate cases.


A) We grab John Smith off the street, scan his brain, torture him, and then, by some means, restore him to a mental and physical state as though the torture had never happened.


B) We scan John Smith's brain, and then run a detailed simulation of that brain being tortured for ten seconds, over and over again. If we attached appropriate hardware to the appropriate simulated neurons, we would hear the simulation screaming.


C) We store, on disk, each timestep of the simulation in scenario B. Then we load each timestep into memory in sequence, each one overwriting the last.


D) The same as C, except that each timestep is encrypted with a secure symmetric cipher, say, AES. The key used for encryption has been lost. (Edit: The key length is much smaller than the size of the stored state and there's only one possible valid decryption.)


E) The same as D, except that we have encrypted each timestep with a one-time pad.


I take for granted that scenario A is bad: one ought not to inflict pain, even if there is no permanent record or consequence of that pain.  And I can't think of any moral reason to distinguish a supercomputer simulation of a brain from the traditional implementation made of neurons and synapses. So B should be equally immoral.


Scenario C is just B with an implementation tweak -- instead of _calculating_ each subsequent step, we're playing it back from storage. The simulated brain passes through the same sequence of states as in B, and produces the same outputs.
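To make the B-versus-C distinction concrete, here's a minimal Python sketch. The toy `step` function is just a placeholder I've made up to stand in for a full neural simulation:

```python
def step(state: int) -> int:
    """Toy stand-in for one timestep of a brain simulation."""
    return (state * 31 + 7) % 1000

# Scenario B: compute each state from the previous one.
state = 42
history = []
for _ in range(10):
    history.append(state)
    state = step(state)

# Scenario C: memory passes through the identical sequence of
# states, but each one is loaded from storage instead of computed.
buffer = None
for stored in history:
    buffer = stored  # load timestep into memory, overwriting the last
```

Either way, memory holds the same states in the same order; C merely skips the computation.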


Scenario D is just C with a different data format.  


Scenario E is just D with a different encryption.


Now here I am confused. Scenario E amounts to repeatedly writing random bytes to memory. This cannot possibly have any moral significance!  Yet D and E are indistinguishable to any practical algorithm. (By definition, secure encryption produces ciphertext that "looks random" to any adversary who doesn't know the key.)
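To illustrate the point about E, here's a small Python sketch of one-time-pad encryption (the "brain state" string is of course just a placeholder). XORing with a uniformly random pad yields uniformly random bytes, and once the pad is discarded, the ciphertext is consistent with every plaintext of the same length:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (OTP encryption and decryption)."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

state = b"simulated brain state, timestep 42"  # placeholder timestep
pad = os.urandom(len(state))                   # uniformly random pad
ciphertext = xor_bytes(state, pad)             # what we write to memory

# With the pad, decryption is exact.
assert xor_bytes(ciphertext, pad) == state

# Without the pad, the ciphertext carries no information: for ANY
# candidate plaintext of the same length, some pad decrypts to it.
candidate = b"another possible timestep".ljust(len(state), b"!")
pad_prime = xor_bytes(ciphertext, candidate)
assert xor_bytes(ciphertext, pad_prime) == candidate
```

Since every equal-length plaintext is equally consistent with the stored bytes, nothing about the torture survives in scenario E once the pad is gone.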


Either the torture in case A is actually not immoral, or some pair of adjacent scenarios is morally distinct. But neither of those options seems appealing.  I don't see a simple, clean way to resolve the paradox here. Thoughts?


As an aside: scenarios C, D, and E aren't as far beyond current technology as you might expect.  Wikipedia tells me that the brain has ~120 trillion synapses.  Most of the storage cost will be the per-timestep data, not the underlying topology. If we need one byte per synapse per timestep, that's 120 TB per timestep. If we have a timestep every millisecond, that's 120 PB per second. That's a lot of data, but it's not unthinkably beyond what's commercially available today, so this isn't a Chinese-Room case where the premise can't possibly be physically realized.
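As a sanity check on the arithmetic, using the same assumed figures (~120 trillion synapses, one byte per synapse, one-millisecond timesteps):

```python
synapses = 120e12             # ~1.2 x 10^14 synapses (Wikipedia figure)
bytes_per_synapse = 1         # assumption: one byte of state per synapse
timesteps_per_second = 1_000  # one timestep per millisecond

bytes_per_timestep = synapses * bytes_per_synapse
bytes_per_second = bytes_per_timestep * timesteps_per_second

print(bytes_per_timestep / 1e12)  # 120.0 -> 120 TB per timestep
print(bytes_per_second / 1e15)    # 120.0 -> 120 PB per second
```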


What would defuse unfriendly AI?

3 asr 10 June 2011 07:27AM

It seems to be a widely held belief around here that unfriendly artificial general intelligence is dangerous, and that (provably) friendly artificial general intelligence is the soundest counter to it.

But I'd like to see some analysis of alternatives.  Here are some possible technical developments. Would any of these defuse the threat? How much would they help?

  • A tight lower bound on the complexity of SAT instances. Suppose that P != NP, and that researchers develop an algorithm for solving instances of Boolean satisfiability that is optimal in terms of asymptotic complexity, but far from efficient in practice.
  • The opposite: a practical, say, quartic-time SAT algorithm, again with a proof that the algorithm is optimal.
  • High-quality automated theorem proving technology, that's not self-modifying except in very narrow ways.
  • Other special-purpose 'AI', such as high-quality natural-language processing algorithms that aren't self-modifying or self-aware. For example, suppose we were able to do language-to-language translation as well as bilingual but not-very-smart humans.
  • Robust tools for proving security properties of complex programs: "This program can only produce output in the following format or with the following properties, and cannot disable or tamper with the reference monitor or operating system."


Are there other advances in computer science that might show up within the next twenty years that would make friendly AI much less interesting?

Would anything on this list be dangerous?  Obviously, efficient algorithms for NP-complete problems would be very disruptive. Nearly all of modern cryptography would become irrelevant, for instance.