Comment author: Alexandros 17 January 2016 01:48:28AM 3 points

Reminds me of the motto "Strong Opinions, Weakly Held". There's no point in having a blurry opinion, or in not expressing what you believe to be the most likely candidate for a good way forward, even if it's more likely by only a small margin. By expressing (and/or acting on) a clearly stated, falsifiable opinion, you expose it to criticism, refutation, improvement, etc. And if you hold it weakly, then you will be open to reconsidering. Refusing to make up your mind, kind of oscillating between a few options, perhaps waiting to see which way the wind blows, has its advantages, but especially when it comes to getting things done it is most often a clear loser. Despite this, our brains seem to prefer it instinctively, maybe due to some ancestral-environment echo about being proven wrong in the eyes of the tribe?

Comment author: passive_fist 30 November 2015 08:13:39PM 0 points

The route to AI that you're suggesting is a plausible one; people like Nick Bostrom have discussed scenarios like this at length: scenarios where we gradually shift our 'computational substrate' to non-biological hardware over several generations. But that's not necessarily what uploading is! As I mentioned, uploading is the transfer of a consciousness from one specific piece of hardware to another. The title and wording of your post imply that you are talking about uploading, but our discussion indicates you are actually talking about building an AI, which is an entirely different concept; anyone confused about this distinction would do well to understand it clearly before discussing either.

Comment author: Alexandros 30 November 2015 11:46:12PM 0 points

You appear to be arguing about definitions. I'm not interested in going down that rabbit hole.

Comment author: passive_fist 30 November 2015 05:48:47AM 0 points

The whole idea of uploading concerns human consciousness. Specifically, transferring a human consciousness to a non-biological context. If you're not talking about human consciousness, then you're just talking about building an AI.

Comment author: Alexandros 30 November 2015 05:55:54AM 1 point

Which in turn depends on what you mean by "artificial".

Comment author: passive_fist 30 November 2015 03:48:21AM 0 points

> There are still many intermediate steps. What does it mean "to be conscious of a sensory input"? Are we talking system 1 or system 2?

The system 1/system 2 distinction is only tangentially related here.

> If the brain is composed of modules, which it likely is, what if some of them are digital and able to move to where the information is and others are not?

It's irrelevant whether the brain is 'composed of modules' or not. If what you mean is whether it is possible for consciousness to be distributed, well, that's a good question. If it is possible, then you could imagine being 'spread out' over a very large computer network, possibly many light-years across. But the situation becomes tricky: if, say, your 'leg' were in one star system and your 'eye' in another, a stimulus from your eye could not cause a reaction from your leg in less than several years without violating the speed-of-light limit and causality. So either you cannot be 'spread out', or your perception of time slows down so extremely that several years seem instantaneous (just as the fraction of a second required to move your human leg seems instantaneous now).
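
To put rough numbers on that constraint, here is a minimal sketch (my own illustration; the distances are hypothetical examples, not from the comment above):

```python
# Minimal sketch of the light-speed lower bound on a distributed mind's
# reaction time. All distances are hypothetical examples.

C = 299_792_458          # speed of light, m/s
LIGHT_YEAR = 9.4607e15   # metres
AU = 1.496e11            # metres

def min_reaction_time(distance_m):
    """A signal must physically traverse the distance, so t >= distance / c."""
    return distance_m / C

print(min_reaction_time(1.0))                        # ~3.3e-9 s: a metre-wide biological brain
print(min_reaction_time(0.5 * AU) / 60)              # ~4 minutes: an interplanetary mind
print(min_reaction_time(4 * LIGHT_YEAR) / 3.156e7)   # ~4 years: 'eye' and 'leg' 4 light-years apart
```

By this arithmetic, an interstellar mind whose years-long reaction felt like a biological fraction of a second would need its perception of time slowed by many orders of magnitude.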

Comment author: Alexandros 30 November 2015 05:35:45AM 1 point

I don't use the word 'consciousness', as it's a complex concept that isn't really necessary in this context. I approach a mind as an information-processing system, and information-processing systems can most certainly be distributed. What that means for consciousness depends on what you mean by consciousness, I suppose, but I would not like to start that conversation.

Comment author: passive_fist 30 November 2015 03:28:58AM 0 points

> what does it mean to "travel" when you can receive sensory inputs from any point in the network?

To shorten the time it takes to become conscious of a sensory input. If the sensor is at point A and you are at distance x from it, you need at least x/c time to become aware of an input from that sensor.

The whole point of travel is to have low-latency, high-bandwidth access to information that exists at some point in the universe.

> but that each step is small enough to make sense for a non-adventurous person

It still seems to me that the step from being tied to a specific piece of hardware - whether an entirely biological brain or an enhanced one - to being pure information capable of moving from hardware to hardware is a pretty big one, regardless of how it is performed. It's the very essence of digitizing something: a physical book is information tied to hardware; uploading consists of scanning the book.

Comment author: Alexandros 30 November 2015 03:40:23AM 1 point

There are still many intermediate steps. What does it mean "to be conscious of a sensory input"? Are we talking system 1 or system 2? If the brain is composed of modules, which it likely is, what if some of them are digital and able to move to where the information is, and others are not? What if the biological part's responses can be modelled well enough to be predicted digitally 99.9% of the time, so that a remote near-copy can be almost autonomous by means of optimistic concurrency, correcting course only when the verdict comes back different from the prediction? The notion of the brain as a single indivisible unit that "is aware of an input" quickly fades away when the possibilities of software are taken into account, even when only part of you is digital.
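
For what it's worth, a minimal sketch of how such an optimistic-concurrency arrangement might work. The function names, the toy "verdicts", and the 99.9% prediction accuracy are illustrative assumptions drawn from the paragraph above, not an established design:

```python
# Hypothetical sketch: a remote near-copy runs ahead of the biological
# part via optimistic concurrency, rolling back only on mis-predictions.

import random

def predict_biological(stimulus):
    """Fast local model of the biological part's verdict."""
    return "approve" if stimulus % 2 == 0 else "reject"

def query_biological(stimulus):
    """Slow, authoritative round trip to the biological part.
    Disagrees with the local model ~0.1% of the time."""
    verdict = predict_biological(stimulus)
    if random.random() < 0.001:
        return "reject" if verdict == "approve" else "approve"
    return verdict

def remote_step(log, stimulus):
    predicted = predict_biological(stimulus)
    log.append(predicted)                 # act on the prediction immediately
    actual = query_biological(stimulus)   # the real verdict arrives much later
    if actual != predicted:               # rare mismatch: correct course
        log[-1] = actual
    return log

log = []
for s in range(1000):
    remote_step(log, s)                   # autonomous ~99.9% of the time
```

The point the sketch tries to capture is that the remote copy only defers to the biological part when a prediction turns out wrong, so in the common case it acts autonomously.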

Comment author: passive_fist 30 November 2015 12:21:06AM 0 points

It seems like the problem of "the disruptive nature of the process from biological to digital being" still exists, except in your scenario it's only being pushed farther down the road.

You could imagine a radical outside-in modification of the human body, going from people with only their limbs replaced, to others with everything but their brain replaced, to others still with most of their brain except their cerebral cortex replaced, and even their cerebral cortex vastly modified and extended. But the seat of consciousness would still be their brain, and a person in such a state would not be considered an upload. They are not 50% uploaded or 5% uploaded or even 1% uploaded. They are zero percent uploaded, because their consciousness is still tied to some particular piece of hardware, exactly the way it was before. A true upload would not have their consciousness tied to any particular hardware - they would exist as information and could travel around at will.

A good 'test' for uploading might be the speed-of-light test: if your consciousness is capable of travelling at the speed of light, you are uploaded; otherwise you aren't. The great thing about this test is that nature provides a very clear boundary between pure information and matter. Matter can carry information but necessarily travels slower than light, whereas pure information can and does travel at the speed of light.

Sure, it might be possible to have a 'half-uploaded' state where some of your consciousness is still tied to particular hardware and some of it can move freely. But we don't yet know if such a state is possible. Actually, we don't even know if uploading itself is possible. It could be that consciousness is always doomed to be tied to some hardware, never able to move at the speed of light.

Comment author: Alexandros 30 November 2015 03:20:04AM 1 point

Surely, at the point where your entire sensory input comes from the digital world, you are somewhat uploaded, even if part of the processing happens in biological components. What does it mean to "travel" when you can receive sensory inputs from any point in the network? There are several Rubicons to be crossed, and transitioning from "has a tiny biological part" to "has no biological part" is another, but it's definitely smaller than "one day an ape, the next day software". What's more, I'm not arguing that there are no disruptive steps, but that each step is small enough to make sense for a non-adventurous person, as a step increase in convenience. It's the Ship of Theseus of mind uploading.

Comment author: Alexandros 12 April 2015 09:34:09AM 8 points

This whole conversation sounds to me like people arguing over whether width or height is the more important factor in the area of a rectangle, or perhaps over what percentage of the total each is responsible for.

It seems we humans are desperate to attribute everything to a single cause, or, if something has multiple causes, to allocate x% of the causality to each factor. However, success quite often has multiple contributing factors and exhibits "a chain is only as strong as its weakest link" behaviour. When phrased in terms of the contributions width and height make to the area of a rectangle, a lot of the conversation sounds like a category error. A lot of the metaphors we try to apply simply do not make sense.
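
A toy illustration of the category error (my own example, with made-up numbers):

```python
# With multiplicative factors, "what percentage of the area is width
# responsible for?" has no sensible answer; with weakest-link factors,
# only the minimum matters at all.

def area(width, height):
    return width * height   # multiplicative: both factors jointly necessary

def chain_strength(links):
    return min(links)       # weakest link: improving strong links changes nothing

print(area(2, 10), area(4, 10), area(2, 20))   # 20 40 40: doubling either factor doubles the area
print(chain_strength([3, 9, 9]))               # 3
print(chain_strength([3, 100, 100]))           # 3: still limited by the weakest link
```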

Comment author: Alexandros 13 January 2015 06:22:01PM 4 points

The truly insidious effects appear when the content of the stories changes the reward without going through the standard quality-evaluation function.

For instance, maybe the AI figures out that the order of the stories affects the rewards. Or perhaps it finds that stories creating a climate of joy/fear on campus lead to overall higher/lower evaluations for that period. Then the AI may be motivated to "take a hit", pushing through some fear-mongering so as to raise its evaluations for the following period. Perhaps it finds that causing strife in the student union, or racial conflict, or trouble with the university faculty affects its rewards one way or another. Perhaps, if it's unhappy with a certain editor, it can slip through errors bad enough to get the editor fired, hopefully to be replaced by a more rewarding one.

etc etc.

Comment author: Alexandros 08 October 2014 03:32:48AM 12 points

Comment author: adamzerner 15 September 2014 05:21:14AM 0 points

Thanks for the encouragement! Would you mind offering your opinion on a few things though?

  1. How many people would a complete overhaul take, and how long would it take (roughly)?
  2. Why are the site owners reluctant to change?
  3. What do you think of my rough cost-benefit argument? The things I said are my intuition, but I could easily be overlooking certain things, and I don't understand it well enough to be too confident in the intuition. So what do you think? (you seem to share the belief in the value of the benefits, but don't seem to think they outweigh the costs)

Also, I don't want to get anyone's hopes up about my contributions. I'm still learning to code; I don't know how good I'll be in 13 weeks when I finish my bootcamp, and I can't tell how long it'll be before I'm capable enough to contribute to something like this.

Comment author: Alexandros 16 September 2014 09:25:55PM 0 points

  1. I don't know, I haven't done the effort estimation. It just looks like more than I'd be willing to put in.
  2. One hypothesis is that LessWrong.com is a low priority item to them, but they like having it around, so they are averse to putting in the required amount of thought to evaluate a change, and inclined to leave things as they are.
  3. I think it is unlikely it will have as much benefit as you expect, and that the pain will be bigger than you expect. However, if you add the fact that your drive may help you learn to program, then the ROI tips the other way massively.

By the way, an alternative explanation for the fact that so many developers are here but so few (or none) actually contribute to LW code is that they're busy making lots of money or working on other things they find exciting. This is good news for you, because making the changes may be easier than I originally estimated, as long as you are determined enough.
