Comment author: Kyre 12 August 2016 04:51:24AM 0 points

Will second "Good and Real" as worth reading (haven't read any of the others).

Comment author: rmoehn 19 July 2016 02:14:59AM 1 point

Not much going on as far as I know. What I know is the following:

  • Naozumi Mitani has taught a course on Bostrom's Superintelligence and is "broadly pursuing the possible influence of AI on the future lives of humanity". He's an associate professor of philosophy at Shinshu University (in Nagano).
  • The Center for Applied Philosophy and Ethics at Kyoto University is also somehow interested in AI impacts.
  • My supervisor is gradually getting interested, too. This is partly my influence, but also his own reading. For example, he found the Safely Interruptible Agents and Concrete Problems in AI Safety papers independently of me through Japanese websites. He's giving me chances to make presentations about AI safety for my fellow students and hopefully also for other professors.

Other than that I know of nobody, and a quick web search didn't turn up more. One problem here is that most students don't understand much English, so most of the AI safety literature is lost on them. The professors do know English, but maybe they're usually not inclined or able to change their research focus.

It's a good sign that my supervisor finds AI safety articles through Japanese websites, though.

Comment author: Kyre 19 July 2016 04:01:21AM 2 points

Maybe translating AI safety literature into Japanese would be a high-value use of your time?

Comment author: ChristianKl 29 June 2016 07:36:00PM 1 point

> Waiting long enough has yielded evidence of absence of risk.

Just like the turkey had a lot of evidence for humans being nice to him, until the day before Thanksgiving.

GMO has been around for ~10-20 years now.

By the standard that 20 years with a new technology should be enough to see its problems, various technologies, from lead pipes to cigarettes to asbestos, were also "proven" to be safe.

Without labeling of products it's also difficult to actually gather the information. I think it's a bad general argument to say that people shouldn't know whether they are ingesting X because X isn't proven to do anything yet.

Comment author: Kyre 30 June 2016 05:06:42AM 3 points

That's true, 20 years wouldn't necessarily bring to light a delayed effect.

However the GMO case is interesting because we have in effect a massive scale natural experiment, where hundreds of millions of people on one continent have eaten lots of GMO food while hundreds of millions on another continent have eaten very little, over a period of 10-15 years. There is also a highly motivated group of people who bring to the public attention even the smallest evidence of harm from GMOs.

While I don't rule out a harmful long-term effect, GMOs are a long way down on my list of things to worry about, and dropping further over time.

Comment author: woodchopper 24 April 2016 05:38:30PM 0 points

Can you elaborate on the concept of a connection through "moment-to-moment identity"? Would for example "mind uploading" break such a thing?

Comment author: Kyre 26 April 2016 05:42:13AM 0 points

Heh, that was really just me trying to come up with a justification for shoe-horning a theory of identity into a graph formalism so that König's Lemma applied :-)

If I were to try to make a more serious argument it would go something like this.

Defining identity, i.e. whether two entities are 'the same person', is hard. People have different intuitions. But most people would say that 'your mind now' and 'your mind a few moments later' do constitute the same person. So we can define a directed graph with vertices as mind states ('mind states' would probably have been a better term than 'observer moments'), with outgoing edges leading to mind states a few moments later.

That is kind of what I meant by "moment-by-moment" identity. By itself it is a local but not a global definition of identity. The transitive closure of that relation gives you a global definition of identity. I haven't thought about whether it's a good one.

In the ordinary course of events these graphs aren't very interesting; they're just chains coming to a halt upon death. But if you were to clone a mind-state and put it into two different environments, then that would give you a vertex with out-degree greater than one.

So mind-uploading would not break such a thing, and in fact without being able to clone a mind-state, the whole graph-based model is not very interesting.

Also, you could have two mind states that lead to the same successor mind state - for example, where two mind states differ only in a few memories, which are then forgotten. The possibility of splitting and merging gives you a general (directed) graph-structured identity.
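The graph model described above can be sketched in a few lines of code. Everything here - the state names, the chain/split/merge shapes, and the reachability-based definition of "same person" - is illustrative, not from the original comment:

```python
from collections import defaultdict, deque

# Vertices are mind states; a directed edge connects a state to a
# state it becomes a few moments later.
edges = defaultdict(set)

def link(a, b):
    edges[a].add(b)

# An ordinary life: a simple chain of mind states.
link("m0", "m1"); link("m1", "m2")
# A split: m2 is cloned into two environments (out-degree 2).
link("m2", "bio3"); link("m2", "sim3")
# A merge: two states differing only in soon-forgotten memories
# converge on the same successor (in-degree 2).
link("bio3", "m4"); link("sim3", "m4")

def reachable(start):
    """Transitive closure from one state: every state it can become."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def same_person(a, b):
    """Global identity: a and b are connected through the
    moment-to-moment relation in one direction or the other."""
    return b in reachable(a) or a in reachable(b)
```

One consequence of this directed reading: the two post-split branches (`bio3` and `sim3`) each count as the same person as `m2`, but not as each other, which illustrates the caveat above that it's unclear whether the transitive closure is a *good* global definition.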

(On a side-note, I think people generally treat splitting and merging of mind states in a way that is far too symmetrical. Splitting seems far easier - trivial once you can digitize a mind-state. Merging would be like a complex software version-control problem, and you'd need to apply selective amnesia very carefully to achieve it.)

So, if we say "immortality" means having an identity graph with an infinite number of mind-states, all connected through the "moment-by-moment identity" relation (stay with me here), and mind states have only a finite number of successor states, then there must be at least one infinite path, and therefore "eternal existence in linear time".

Rather contrived, I know.

Comment author: Kyre 20 April 2016 04:46:08AM 1 point

If we take "immortality" to mean "infinitely many distinct observer moments that are connected to me through moment-to-moment identity", then yes, by König's Lemma.

(Every infinite, connected graph in which every vertex has finite degree contains an infinite path.)
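For reference, the lemma and the way the argument applies it can be written out; the statement is standard, and the identity-graph reading follows the reasoning above:

```latex
% König's Lemma, in the form the argument uses.
\begin{lemma}[K\"onig]
Every infinite, connected graph $G$ in which each vertex has finite
degree contains a ray, i.e.\ an infinite simple path
$v_0 v_1 v_2 \dotsb$.
\end{lemma}

% Applied to the identity graph: take $v_0$ to be the current mind
% state. If the set of mind states connected to $v_0$ through the
% moment-to-moment relation is infinite, and every state has only
% finitely many successors, the lemma yields an infinite path of
% successively connected mind states -- ``eternal existence in
% linear time''.
```

The connectivity condition matters: an infinite graph of isolated finite chains has finite degree everywhere but no infinite path, which is why the definition above requires all the observer moments to be connected to "me".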

(edit: hmmm, does many-worlds give you infinite-branching into distinct observer moments ?)

Comment author: Kyre 19 February 2016 05:09:33AM 5 points

Procedural universes have seen a real resurgence since around 2014, with e.g. Elite Dangerous, No Man's Sky, and quite a few others that have popped up since.

I love a beautiful procedural world, but I think things will get more interesting when games appear with procedural plot structures that are cohesive and reactive.

Then multiplayer versions will appear that weave all player actions into the plot, and those games will suck people in and never let go.

Comment author: gjm 07 January 2016 01:44:12PM 9 points

> You still have experiences while you are asleep

During some periods of sleep. So far as I am aware, in deep sleep there's no reason to think you are having any experiences at all.

Anyway, for those who don't object to thought experiments: imagine that there's some machine that completely suspends all your brain activity for five minutes, after which it continues from exactly its previous state. Are you the same person after as before? If you answer yes to this -- which I bet almost everyone does -- then the implications are the same as those you'd get from sleep involving a complete cessation of consciousness.

Comment author: Kyre 12 January 2016 07:47:48AM 0 points

For 5 minutes' suspension versus dreamless deep sleep - almost exactly the same person. For 3 hours of dreamless deep sleep I'm not so sure. I think my brain does something to change state while I'm deep asleep, even if I don't consciously experience or remember anything. Have you ever woken up feeling different about something, or with a solution to a problem you were thinking about as you dropped off? If that's not all due to dreaming, then you must be evolving at least slightly while completely unconscious.

Comment author: Kyre 07 January 2016 05:09:29PM 2 points

> Would a slow cell by cell, or thought by thought / byte by byte, transfer of my mind to another medium: one at a time every new neural action potential is received by a parallel processing medium which takes over? I want to say the resulting transfer would be the same consciousness as is typing this but then what if the same slow process were done to make a copy and not a transfer? Once a consciousness is virtual, is every transfer from one medium or location to another not essentially a copy and therefore representing a death of the originating version?

I would follow this line of questioning. For example, say someone does an incremental copy process to you, but the consciousness generated does not know whether or not the original biological consciousness has been destroyed, and has to choose which one to keep. If it chooses the biological one and the biology has been destroyed, bad luck, you are definitely gone. What does your consciousness, running either just on the silicon or identically on the silicon and in the biology, choose?

Let's say you are informed that there is a 1% chance that the biological version has been destroyed. Well, you're almost certainly fine then: you keep the biological version, the silicon version is destroyed, and you live happily ever after until you become senile and die.

On the other hand, say you are informed that the biological version has definitely been destroyed. On your current theory, this means that the consciousness realises that it has been mistaken about its identity, and is actually only a few minutes old. It's sad that the progenitor person is gone, but it is not suicidal, so it chooses the silicon version.

At what point on the 1%-to-100% slider would your consciousness choose the silicon version?

(Hearing the thought-experiment of incremental transfer (or alternatively duplication) was one of the things that changed my mind from some sort of continuity-identity theory to pattern-identity. I remember hearing an interview with Marvin Minsky where he described an incremental transfer on a radio program.)

Comment author: Kyre 14 December 2015 05:06:16AM 0 points

Not sure if it's a scientific or engineering achievement, but this Nature letter stuck in my mind:

An aqueous, polymer-based redox-flow battery using non-corrosive, safe, and low-cost materials

Comment author: Tem42 06 December 2015 04:18:29AM 1 point

That would buy you some time.

My thought was that if a simulation that centered around a single individual had a simulation running within it, the simulation would only need to be convincing enough to appear real to that one person. Even if the nested simulation runs a third level simulation within it, or if the one individual runs two simulations, aren't you still basically exploring the idea space of that one individual? That is, me running a simulation and experiencing it through virtual reality is limited in cognitive/sensory scope and fidelity to the qualia that I can experience and the mental processes that I can cope with... which may still be very impressive from my point of view, but the computational power required to present the simulation can't be much more complex than the computational power required to render my brain states in the base simulation. I may simulate a universe with very different rules, but these rules are by definition consistent with a full rendering of my concept space; I may experience new sensory inputs (if I use VR), but I won't be experiencing new senses.... and what I experience through VR replaces, rather than adds to, what I would have experienced in the base simulation.

Even in the worst case scenario that I build 1000+ simulations, they only have to run for the time that I check on them. The more time I spend programming them and checking that they are rendering what they should, the less time I have to do additional simulations. This seems at worst an arithmetic progression.

Of course, if I were specifically trying to crash the simulation that I was in, I might come up with some physical laws that would eat up a lot of processing power to calculate for even one person's local space, but between the limitations of computing as they exist in the base simulation, the difficulty in confirming that these laws have been properly executed in all of their fully-complex glory, and the fact that if it worked, I would never know, I'm not sure that that is a significant risk.

Comment author: Kyre 07 December 2015 05:36:51AM 1 point

Oh, I think I see what you mean. No matter how many or how detailed the simulations you run, if your purpose is to learn something from watching them, then ultimately you are limited by your own ability to observe and process what you see.

Whoever is simulating you only has to run the simulations that you launch to the level of fidelity such that you can't tell if they've taken shortcuts. The deeper the nested simulation people are, the harder it is for you to pay attention to them all, and the coarser their simulations can be.

If you are running simulations to answer psychological questions, that should work. And if you are running simulations to answer physics questions ... why would you fill them with conscious people?

> Of course, if I were specifically trying to crash the simulation that I was in, I might come up with some physical laws that would eat up a lot of processing power to calculate for even one person's local space, but between the limitations of computing as they exist in the base simulation, the difficulty in confirming that these laws have been properly executed in all of their fully-complex glory

I was going to say that if you want to be a pain, you could launch some NP-hard problems whose solutions you can manually verify with pencil and paper ... except your simulators control your random-number generators.
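The pencil-and-paper idea rests on the defining asymmetry of NP problems: checking a proposed certificate is cheap, while finding one can be made arbitrarily expensive for whoever is simulating you. A toy sketch, with a made-up subset-sum instance (the numbers are purely illustrative):

```python
def verify_subset_sum(numbers, target, certificate):
    """Check a claimed solution with simple arithmetic - easy enough
    to redo by hand, unlike *searching* for the subset."""
    return all(x in numbers for x in certificate) and sum(certificate) == target

numbers = [267, 961, 1153, 1000, 1922, 493]
target = 2381

# A certificate the (possibly shortcut-taking) simulation hands you:
cert = [267, 961, 1153]
print(verify_subset_sum(numbers, target, cert))  # True: 267 + 961 + 1153 == 2381
```

Of course, as the comment notes, this only catches cheating if the instance was genuinely hard to begin with, which is exactly what rigged random-number generators would prevent.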
