Comment author: Juno_Watt 24 August 2013 06:49:51PM *  4 points [-]

This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biological.

Why would that be possible? Neurons have to process biochemicals. A full replacement would have to as well. How could it do that without being at least partly biological?

It might be the case that an adequate replacement -- not a full replacement -- could be non-biological. But it might not.

Comment author: Furslid 24 August 2013 09:42:40PM 1 point [-]

It's a thought experiment. It's not meant to be a practical path to artificial consciousness or even brain emulation. It's a conceptually possible scenario that raises interesting questions.

Comment author: ChrisHallquist 24 August 2013 09:08:43PM *  5 points [-]

Very sure. The biological view just seems to be a tacked on requirement to reject emulations by definition.

Note that I specifically said in the OP that I'm not much concerned about the biological view being right, but about some third possibility nobody's thought about yet.

Anyone who would hold the biological view should answer the questions in this thought experiment.

A new technology is created to extend the life of the human brain. If any brain cell dies it is immediately replaced with a cybernetic replacement. This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biological. Over time the subject's whole brain is replaced, cell by cell. Consider the resulting brain. Either it perfectly emulates a human mind or it doesn't. If it doesn't, then what is there to the human mind besides the interactions of brain cells? Either it is conscious or it isn't. If it isn't then how was consciousness lost and at what point in the process?

This is similar to an argument Chalmers gives. My worry here is that it seems like brain damage can do weird, non-intuitive things to a person's state of consciousness, so one-by-one replacement of neurons might do similarly weird things, perhaps slowly causing you to lose consciousness without realizing what was happening.

Comment author: Furslid 24 August 2013 09:37:18PM *  1 point [-]

That is probably the best answer. It has the weird aspect of putting consciousness on a continuum, and one that isn't easy to quantify. If someone with 50% cyber brain cells is 50% conscious, but their behavior is the same as that of a 100% biological, 100% conscious brain, it's a little strange.

Also, it means that consciousness isn't a binary variable. For this answer to make sense, consciousness must be a continuum. That is an important point to make regardless of the definition we use.

Comment author: Furslid 24 August 2013 06:22:02PM *  3 points [-]

Very sure. The biological view just seems to be a tacked-on requirement to reject emulations by definition. Anyone who would hold the biological view should answer the questions in this thought experiment.

A new technology is created to extend the life of the human brain. If any brain cell dies it is immediately replaced with a cybernetic replacement. This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biological. Over time the subject's whole brain is replaced, cell by cell. Consider the resulting brain. Either it perfectly emulates a human mind or it doesn't. If it doesn't, then what is there to the human mind besides the interactions of brain cells? Either it is conscious or it isn't. If it isn't then how was consciousness lost and at what point in the process?

Comment author: Furslid 26 July 2013 11:57:14PM 5 points [-]

Why are we talking about jobs rather than man-hours worked? Automation reduced man-hours worked. We went from much longer work weeks to 40 hour work weeks as well as raising standards of living.

AI will reduce work time further. If someone can use AI to produce as much in 30 hours as they did in 40, they could choose to work anywhere from 30 to 40 hours and be better off. Many people would choose to work less as they compare the marginal values of free time and extra pay.
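The tradeoff can be sketched with round numbers (the figures below are purely illustrative assumptions, not data from the comment):

```python
# Labor/leisure tradeoff after a productivity gain, with assumed numbers.
# Premise from the comment: AI lets 30 hours produce what 40 hours used to.
old_hours = 40
productivity_gain = 40 / 30  # output per hour relative to before

for hours in (30, 35, 40):
    output = hours * productivity_gain   # output measured in "old hours" of work
    freed_time = old_hours - hours       # extra leisure relative to the old week
    print(f"work {hours}h -> output worth {output:.1f} old hours, "
          f"{freed_time}h more free time")
```

Every choice between 30 and 40 hours leaves the worker at least as well off as before: at 30 hours output is unchanged but leisure is up, and at 40 hours output is up by a third.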

Why are we seeing long term unemployment instead of shorter work weeks now? Is this inevitable or is there some structural or institutional problem causing it?

Comment author: Stuart_Armstrong 08 July 2013 07:21:37AM 2 points [-]

Contrast fishing (an international "commons") with forestry (a series of national commons). Many countries have successful forestry programs that preserve their forests quite decently; but the devastation caused by overfishing is extreme. These two industries are not fundamentally different, but the fact that one requires international cooperation and the other doesn't seems to make all the difference. You could also glance at the successful international pollution reductions (e.g. CFCs and acid rain). A singleton should be able to do better than the painstakingly negotiated treaties of today!

On specifics, the WHO seems to have a pretty decent track record on pandemics (not nearly as good as it should be, much better than it could be). I'm not all that knowledgeable about the various rules governing fissile materials, but they seem to be working acceptably anywhere they can be enforced. And, of course, regulations of synthetic biology and AI are essentially impossible without a singleton or extreme global coordination.

Comment author: Furslid 08 July 2013 07:25:03PM *  3 points [-]

I don't think that's the relevant difference between forestry and fishing. Forestry can be easily parceled out by plot in a way that fishing can't. Forests can be managed by giving one logging concern responsibility for a specific plot and holding them responsible for any overlogging in that area and for any mandated replanting.

Fishing has to be managed by enforcing quotas, which is a much more difficult problem even for a single government. I haven't done research on fishing, but do we see fishing being managed well in areas that are under the jurisdiction of one government, or of governments with good cooperation (like the Great Lakes)? Or for species whose habitat is within the coastal waters of one government?

Comment author: Furslid 06 July 2013 08:29:19PM 4 points [-]

Why is it legitimate to assume that a singleton would be effective at solving existential risks? A one world government would have all the same internal problems as current governments. The only problems that scaling up would automatically eliminate are those of conflicts between different states, and these would likely be transformed into conflicts between interest groups in one state. This is not a reduction to a solved problem.

There are wars of secession and revolution now. There are also violent conflicts among ethnic and religious groups within one state. There is terrorism. Why would a one world government ruling over a more diverse populace than any current government not have these problems? People won't automatically accept the singleton any more than they accept the current governments.

Even with unified powers, governments regularly mismanage crises. Current governments (even democratic first world governments) have problems dealing with such things as predictable weather and earthquakes along known fault lines. Why would a one world government be better able to handle much less predictable crises, like a pandemic?

In response to comment by Furslid on Fermi Estimates
Comment author: Qiaochu_Yuan 11 April 2013 07:44:15PM *  5 points [-]

I think you're conflating "rationalist" and "intellectual." I agree that there is a stereotype that intellectuals only listen to Great Works like Bach or Mozart, but I'm curious where the OP picked up that this stereotype also ought to apply to LW-style rationalists. I mean, Eliezer takes pains in the Sequences to make anime references specifically to avoid this kind of thing.

Comment author: Furslid 13 April 2013 10:15:54PM 1 point [-]

I'm just pointing out the way such a bias comes into being. I know I don't listen to classical, and although I'd expect a slightly higher proportion of classical listeners here than in the general population, I wouldn't guess it would be a majority or significant plurality.

If I had to guess, I'd guess at varied musical tastes, probably trending towards more niche genres, and away from broad-spectrum pop, relative to the general population.

Comment author: Qiaochu_Yuan 09 April 2013 05:17:00AM 2 points [-]

I thought rationalists just sort of listened only to Great Works like Bach or Mozart

Why?

Comment author: Furslid 11 April 2013 06:24:50PM 2 points [-]

Because of the images of different musical genres in our culture. There is an association of classical music and being academic or upper class. In popular media, liking classical music is a cheap signal for these character types. This naturally triggers confirmation biases, as we view the rationalist listening to Bach as typical, and the rationalist listening to The Rolling Stones as atypical. People also use musical preference to signal what type of person they are. If someone wants to be seen as a rationalist, they often mention their love of Bach and don't mention genres with a different image, except to disparage them.

In response to Fermi Estimates
Comment author: lukeprog 06 April 2013 07:53:14PM 3 points [-]

Write down your own Fermi estimation attempts here. One Fermi estimate per comment, please!

In response to comment by lukeprog on Fermi Estimates
Comment author: Furslid 06 April 2013 11:26:29PM 0 points [-]

Out of the price of a new car, how much goes to buying raw materials? How much to capital owners? How much to labor?
