All of SteveG's Comments + Replies

SteveG00

Would uploads avoid self-improvement? If we are going to try to address this question, we should first consider the plausibility and importance of the whole upload concept.

Given the power and relatively young age of some Silicon Valley executives who seem to see uploading as part of their future, we might want to check to see whether the pursuit of uploading would have any side-effects.

If we believe that uploads are malleable and improvable, then the technology to create uploads would also permit the creation of more powerful minds, with all the consequences.

SteveG00

Uploads, and those creating a WBE-like entity as progeny, most likely would prefer to add improvements to a greater or lesser extent, rather than aim for complete fidelity.

Some people may argue that WBEs should lead as natural an existence as possible, one very much like people.

On the assumption that these people value their uploads or progeny, however, some aspects of life experience would be edited out. For example, what would motivate one of these creators to pass their WBEs through an unpleasant end-of-life experience, like vascular dementia?

The emulated lives of uploads and progeny would, to a greater or lesser extent, be edited. We could try to reason more about that.

SteveG00

Suppose that emulations will be created to study how the brains of flesh-and-blood people work in general, or to study and forecast how a particular, living person will react to stimulus.

This is a reasonable application of high-fidelity whole-brain emulation. To use such emulations to forecast behavior, though, the emulation would have to be "run" on a multi-dimensional distribution of possible future sets of environmental stimuli. The variation in these distributions grows combinatorially, so even tens of thousands of runs would only provide some information about what the person is likely to do next.

Such WBEs would be only one tool in a toolbox to predict human behavior. However, they would be useful for that purpose. Your WBE could be fed many possible future lives, allowing you to make better choices about your future in the physical world, if using WBEs in that manner was considered ethical.

People on this site generally seem to agree, though, that using a high-fidelity WBE as a guinea pig to test out life scenarios is ethically problematic. If these life scenarios were biased in favor of delivering positive outcomes to the WBEs, maybe we would not have as much of a problem with that.

Perhaps the interaction of two WBEs could be observed over many scenarios, allowing people to better choose companions. WBEs could end up being used for this purpose, ethical or not. Again, though, I suspect that more data about people's reactions could be gained if modified WBEs were used in some of the tests. It's worth exploring, but high-performance neuromorphic or algorithmic minds would still be the better choice for actually controlling physical conditions.
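
A quick back-of-the-envelope sketch of that combinatorial growth (the branching factor, horizon, and run budget below are made-up illustration numbers, not estimates from the comment):

```python
# Illustration: if each simulated step offers k plausible stimulus
# variations and we emulate n steps ahead, the number of distinct
# trajectories is k**n. Any fixed budget of emulation runs samples a
# vanishing fraction of that space. All three numbers are assumptions.
k = 5            # stimulus variations per step (assumed)
n = 30           # steps forecast ahead (assumed)
budget = 10_000  # emulation runs we can afford (assumed)

trajectories = k ** n
coverage = budget / trajectories
print(f"{trajectories:.3e} possible trajectories")
print(f"a {budget:,}-run budget covers {coverage:.2e} of them")
```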

SteveG20

There is a tremendous amount of good material in here, thanks...

The thing that I would like to see added is a perspective on how changeable, or malleable, WBEs would be, once created.

One of the main reasons I am challenging WBEs is because I think that brain emulations would be very easy to change, alter and improve along the axes of defined performance metrics.

If they will be highly malleable, those who wish to use them to generate productivity would instead use improved (neuromorphic) versions. Additionally, a WBE which had some control of its own make-... (read more)

SteveG00

For the next few years and possibly decades, the development of brain emulation technology will occur alongside the development of neuromorphic technology.

Some teams will be primarily focused on achieving extremely accurate renditions of sections of actual brain tissue, as well as increasingly accurate neural maps which are sometimes based on high-throughput scans of actual brain tissue. These teams will wish to base their work on individual neurons and glia that are very much like actual cells.

However, Henry Markram, director of Europe's Human Brain Ini... (read more)

SteveG00

Engineers attempting to improve either a WBE or a piece of neuromorphic tissue would have considerable advantages that are unavailable to medical teams working with actual brains and nerves.

Medical teams who work to repair spinal injuries are able to stimulate nerve fibers and trace the nerves into the brain. However, a vast set of experimental tools would be available to WBE or Neuromorphic Engineers.

These engineers would be able to write programs which cause any specific neuron or group of neurons to fire at any time. They would be able to select the fi... (read more)
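
To make the kind of experiment this describes concrete, here is a minimal toy sketch: a leaky-integrator network in which the experimenter can force any chosen neuron to fire on any tick. The model, parameters, and names are all hypothetical illustration, not a claim about how a real WBE would be implemented:

```python
import numpy as np

# Toy sketch: a tiny leaky integrator network where any neuron can be
# forced to fire at any timestep -- the kind of direct control that is
# unavailable to medical teams working with living tissue.
# All parameters below are illustrative assumptions.
rng = np.random.default_rng(0)
n = 50                               # number of model neurons (assumed)
w = rng.normal(0, 0.1, size=(n, n))  # random synaptic weights (assumed)
v = np.zeros(n)                      # membrane potentials
threshold, leak = 1.0, 0.9

def step(forced=()):
    """Advance one tick; `forced` lists neuron indices made to fire."""
    global v
    fired = v >= threshold
    fired[list(forced)] = True       # experimenter override
    v = leak * v + w @ fired.astype(float)
    v[fired] = 0.0                   # reset neurons that fired
    return np.flatnonzero(fired)

# Force neuron 7 to fire on every tick and watch downstream activity.
for t in range(10):
    print(t, step(forced=[7]))
```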

SteveG00

Apparently, an advantage of creating a thread with a controversial and heterodox first entry is that for a time you get to write all of it yourself! :) That's OK, because I have a fair amount of brain dumping to do on this subject.

SteveG00

Brain grafts are a very difficult idea in actual brain tissue today.

One of the key reasons, however, will begin to become a non-factor: tissue rejection. We can now grow neurons that have the same genetic code as yours or mine in the lab (I actually did this). A method is to turn induced pluripotent stem cells (iPSCs), which may have been created from your own skin, into nerve cells.

I grew a small plate of such cells. I did not try to distinguish which among them were neurons, and which were glia. I am not sure how far along we are toward growing a complete neural column, or a section of brain.

Assuming the neurons were grown, however, installation would still be very difficult. I am not willing to say impossible, but we have some challenges. The balance and configuration of the glia would be difficult to control. Blood flow through both large vessels and capillaries would have to be restored to the added section.

Another issue, importantly perhaps, is that neurons in the brain have long axons. The "white matter" of the brain contains portions of these axons that string from brain region to brain region. It is a tangled net. Replacement neurons might have to be literally "woven in" to this net. Advantageously, the axons that are already there are sometimes bundled, but they are stuck together. It is not like stripping a large wire and seeing many filaments pop out.

Physically "weaving" new neurons into the brain is a lot more challenging than weaving them into a WBE or a piece of neuromorphic tissue. At any given point in the early history of neuromorphic engineering, there will be a greater or lesser understanding of the relationship between structure and function. However, using WBEs and neuromorphic tissue in experiments to try to elicit function from structure will be very inexpensive. Tens of thousands, or even billions, of experiments could be run with a single set of macros. For this reason, I forecast, with considerable but not complete certainty, that the exi...

SteveG00

The future of AI will come out very differently if sections of neural tissue cannot be made to function usefully, separately from a WBE.

Similarly, the future of AI will come out very differently if removing parts of the brain from an emulation causes the brain to become non-functional.

We know from studies of stroke and other forms of brain damage that brain function does not immediately degrade if a small section of brain is injured. Therefore, removing sections from a WBE might reduce the functionality of the WBE, but would not diminish it entirely.... (read more)
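
By analogy (and only as an analogy; an artificial network is not a WBE), here is a minimal sketch of the kind of ablation experiment this suggests: silence a fraction of units in a small trained model and measure how gracefully performance degrades. The model, data, and numbers are all assumptions for illustration:

```python
import numpy as np

# Analogy sketch: "lesion" random units of a tiny random-feature
# regressor and measure degradation. Graceful decline here loosely
# mirrors the stroke observation; it is an illustration, not evidence.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10)                  # synthetic target (assumed)

H = np.tanh(X @ rng.normal(size=(10, 500)))  # 500 hidden "neurons"
w, *_ = np.linalg.lstsq(H, y, rcond=None)    # fit readout weights

def error(frac_removed: float) -> float:
    """Mean-squared error after silencing a random fraction of units."""
    mask = rng.random(500) >= frac_removed
    pred = (H * mask) @ w
    return float(np.mean((pred - y) ** 2))

for f in (0.0, 0.05, 0.1, 0.25, 0.5):
    print(f"removed {f:4.0%}: MSE = {error(f):.4f}")
```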

SteveG00

If we are able to conclude that alteration or removal of part of the WBE would be desirable for the purposes of the emulation's controllers, then we should conclude that WBE technology in a sense flows into neuromorphic technology, and is not separate from it in a fundamental way.

SteveG00

Even without extending the definition of neuromorphic, a WBE with a high-speed link to algorithms is clearly neuromorphic once significant portions of the neural simulation components are altered or removed.

SteveG00

A human WBE could have a very high-speed link, either with conventional computers running algorithms which the WBE triggers regularly, or with other WBEs.

If these links were sufficiently fast and robust, then we would do best to analyze the cognitive capacity of the system of the WBE and the links taken together, rather than thinking of them as separate units.

At a certain point, linking a WBE to many other software tools creates an enhanced system which is very different from a human mind. Whether we call the combined system neuromorphic or just highly enhanced is a question of definitions. However, the combined system could develop to the point where it is very different from an ordinary person, or a team of people who can call on a powerful computer to calculate a result.

SteveG00

Today, people are able to input data into calculating machines through speech and gestures, including drawing and typing.

Additionally, machines can gather biomarker data produced by the person. We can also issue a simple command to transmit a large block of previously prepared data.

These input mechanisms have certain potential disadvantages:

-They are somewhat inaccurate
-They are slow. (Although triggering a larger file transmission makes up for the speed deficit, under many circumstances.)

We can receive more information through sensory input than we can transmit out.
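
For rough scale, here is that asymmetry as arithmetic. The figures below are commonly cited order-of-magnitude estimates, assumed here for illustration; they are not measurements from this thread:

```python
# Order-of-magnitude sketch of the input/output asymmetry the comment
# points at. All figures are rough literature-style estimates (assumed).
speech_bps = 40            # ~information rate of speech, bits/s
typing_bps = 25            # ~skilled typing, bits/s
vision_bps = 1_000_000     # ~estimated optic-nerve throughput, bits/s

print(f"vision in  : ~{vision_bps:>9,} bits/s")
print(f"speech out : ~{speech_bps:>9,} bits/s "
      f"({vision_bps // speech_bps:,}x less)")
print(f"typing out : ~{typing_bps:>9,} bits/s "
      f"({vision_bps // typing_bps:,}x less)")
```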

SteveG00

Additionally, at what point does such a combination cease to be more like a human mind-computer interface and instead require re-classification as a neuromorphic or otherwise novel entity?

SteveG00

Unless, prior to the emergence of neuromorphic AI, forms of AI that do not include neurologically-inspired elements become more dominant.

SteveG00

So, if we can establish that progress in emulation technology will quickly result in functional, malleable products, then for the most part future productivity will be generated by purpose-built neuromorphic computing resources rather than by human-like WBEs.

SteveG00

These characteristics are more available to "The Boss" if "The Boss" considerably alters a malleable emulation.

Such an altered emulation is now neuromorphic.

Thus: if one or more "Bosses" is constructing a workforce, these "Bosses" will prefer neuromorphic components over whole-brain emulations.

Thus, if emulations are sufficiently malleable, there is no economy of whole-brain emulations: There is an economy of neuromorphic computing resources.

SteveG00

If the technology is available, "The Boss" will prefer that its work force have high-speed connections to other computing resources. "The Boss" will also prefer that its work force have high-speed connections to whatever sensory input is relevant to the task.

SteveG00

"The Boss" can get more done if it can create new workers, and turn them on and off at will, without ethical or regulatory constraints.

If the technology is available, "The Boss" will prefer to employ cognitive capacity which has no personhood, and to which it has no ethical obligation.

SteveG00

If the emulation is controlled by "The Boss," what incentives does "The Boss" have?

-to increase the emulation's throughput and efficiency
-to increase the emulation's focus on the task that generates value
-to avoid activities by regulators, protesters or other outsiders which could cause work stoppages.

SteveG00

AABoyles also begins to address another important and much-discussed question:

Can the emulation interface with:

-Sensory inputs unavailable to the human brain?

-Reasoning, calculation, memory modules and other minds in more direct ways, rather than inputting data into computers and observing the outputs of computers and sensory devices as we do today?

SteveG00

Just laying some more groundwork... One distinction the discussion requires:

Who is in control of the components and the environment of the emulation?

Possibilities:

-An outside entity, attempting to gain economic or other value by using the emulation to complete information processing tasks. (I'll call this "The Boss.")

-The environment was established to maintain the emulation, which is not "given a job," but was created for scientific observation by outsiders.

-The emulation is not given a job, but the environment was created by outsiders as... (read more)

SteveG00

Obviously, replacing neural components with others could create an emulation which diverges from the human mind, becoming more and more neuromorphic.

SteveG00

Seemingly, controlling sensory inputs and the blood supply would permit a vast degree of control over the activity of the brain.

SteveG00

Another aspect of malleability: How much can the structure and activity of the brain be influenced using means that we presently consider external or environmental? These influences would include sensory inputs and inputs through the blood.

SteveG00

One aspect of malleability: At a specific point in the forecast timeline, how easy or difficult is it to create an emulation with a replacement subsystem or component that is functional, but functions differently? Does the emulation continue to work if these sub-systems are replaced or altered to a lesser or greater degree?

SteveG00

One aspect of what I call "fidelity" is the degree to which the emulation incorporates various aspects of neurophysiology.

For example, the emulation might or might not incorporate:

-Good models of fluid flow within the brain, and between the brain, the blood and the cerebrospinal fluid.

-Good models of the components of the blood itself, and how these components would influence brain activity.

SteveG00

Future timelines could assume that a great deal of additional knowledge about structure and function of components of the brain is developed before functional WBEs are developed.

Or, perhaps scanning technology improves rapidly, allowing for higher and higher levels of fidelity, but our knowledge of how the brain actually works does not advance as rapidly.

SteveG00

We could imagine a timeline where extremely high fidelity emulations that function are created without functional, low-fidelity emulations being created first.

Or, we could imagine a stepwise process where lower-fidelity emulations that function are created first, then these are "improved" to the point that they represent the workings of the human mind more and more accurately.

SteveG00

Another distinction:

Many people have worked to reason about the level of "fidelity" of WBEs. That is to say, how near is a WBE to being an accurate representation of a human brain: what does it leave in, and what does it leave out?

SteveG00

In order to analyze the future of brain emulations further, I want to begin to add some distinctions:

The level of "malleability" of an emulation represents the degree to which its progress through time can be influenced by specific attempts to change it or control its environment.

The precision of this distinction needs to be increased, and people can comment about that here.

SteveG00
[This comment is no longer endorsed by its author]
SteveG00

I wish to see whether we can show that human whole-brain emulations will be essentially neuromorphic in a great many ways.

Almost as soon as they exist, something more effective and productive will become available.

SteveG00

The hypothesis is that human Whole-Brain Emulation will not be a recognizable stage in the development of AGI that lasts for any significant amount of time. Also, an "algorithmic economy" of human whole-brain emulations is highly unlikely to be anything but science fiction.

The goal is to examine whether there are some fundamental flaws in the nature of this forecast.

I will lay out the case after more opinions and reading material are available to us...

SteveG00

I am keen to explore WBEs of other animals, but let's focus on humans and their plausible successors for the moment.

There are complications well worth considering, of course... an animal mind could be economically productive, and a human-animal chimera emulation might seem to be one plausible successor... evidence for that forecast could also be developed...

So many questions...

SteveG00

Technology which can predict whether an action would be approved by a person or by an organization is:

-Practical to create, first applied to test cases, then to limited circumstances, then in more general cases.

-For the test cases and for the limited circumstances, it can be created using some existing machine learning technology without deploying full-scale natural language processing.

-Approval/disapproval is a binary value, and appropriate machine learning approaches would include logistic regression or random-forest and decision-tree methods. We create a model using... (read more)
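
A minimal sketch of that binary approval classifier, assuming hand-built feature vectors for actions and a small labeled set. The features, labeling rule, and data here are all hypothetical stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: each candidate action is encoded as a small
# feature vector (cost, reversibility, legality flag, ...) and labeled
# 1 = approved, 0 = disapproved by a human reviewer.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                # stand-in action features
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)  # stand-in approval rule

clf = LogisticRegression().fit(X, y)

candidate = rng.normal(size=(1, 4))          # a new proposed action
p_approve = clf.predict_proba(candidate)[0, 1]
print(f"estimated approval probability: {p_approve:.2f}")
```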

SteveG00

In addition to determining whether an action would be approved using a priori reasoning, an approval-directed AI could also reference a large database of past actions which have either been approved or disapproved.

Alternatively, in advance of ever making any real-world decision, the approval-directed AI could generate example scenarios and propose actions to people deemed effective moral reasoners many thousands of times. Their responses would greatly assist the system in constructing a model of whether an action is approvable, and by whom.

A lot of approval data could be created fairly readily. The AI can train on this data.
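
A sketch of that scenario-generation loop, with the human raters stubbed out as an oracle function. Everything here (the generator, the oracle rule, the action menu) is an assumed illustration of the workflow, not a real system:

```python
import random

# Stub standing in for "effective moral reasoners" rating proposals.
def human_oracle(scenario: str, action: str) -> int:
    return int("harm" not in action)       # toy approval rule (assumed)

def generate_scenario(i: int) -> str:
    return f"scenario-{i}"                 # placeholder generator

dataset = []
for i in range(10_000):                    # "many thousands of times"
    scenario = generate_scenario(i)
    action = random.choice(["assist", "wait", "harmful shortcut"])
    label = human_oracle(scenario, action)
    dataset.append((scenario, action, label))

approved = sum(label for *_, label in dataset)
print(f"{approved}/{len(dataset)} proposals approved; this labeled set "
      "would then train an approval model like the one sketched above.")
```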

SteveG-20

Paul, I think you're headed in a good direction here.

On the subject of approval-directed behavior:

One broad reason people and governments disapprove of behaviors is that they break the law or violate ethical norms that supplement laws. A lot of AGI disaster scenarios seem to incorporate some law-breaking pretty early on.

Putting aside an advanced AI that can start working on changing the law, shouldn't an approval-directed AI, among other things, constantly check whether its actions are legal before taking them?

The law by itself is not a complete... (read more)
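
As a control-flow sketch of that check, layered under the broader approval model: the `is_legal` predicate below is a hypothetical placeholder (building it is of course the hard part), and the threshold is an assumption:

```python
# Pattern sketch: legality as a hard pre-action gate, applied before
# the softer approval-probability test. `is_legal` is a placeholder.
def is_legal(action: str) -> bool:
    return action != "harmful shortcut"    # stand-in rule (assumed)

def approval_directed_act(action: str, p_approve: float,
                          threshold: float = 0.9) -> str:
    if not is_legal(action):
        return f"refused {action!r}: fails legality gate"
    if p_approve < threshold:
        return f"refused {action!r}: approval {p_approve:.2f} too low"
    return f"executing {action!r}"

print(approval_directed_act("assist", p_approve=0.95))
print(approval_directed_act("harmful shortcut", p_approve=0.99))
```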

SteveG10

That's pretty cool. Could you explain to me how it does not cause us to kill people who have expansive wants, in order to reduce the progress toward entropy which they cause?

I guess in your framework the goal of Superintelligence is to "Postpone the Heat Death of the Universe" to paraphrase an old play?

0yates9
I think it might drive toward killing off those who have expensive wants and do not occupy a special role in the network somehow. Maybe a powerful individual which is extremely wasteful, and which is actively causing ecosystem collapse by breaking the network, should be killed to ensure the whole civilisation can survive. I think the basic desire of a Superintelligence would be identity and maintaining that identity; in this sense "Postpone the Heat Death of the Universe," or even reversing it, would definitely be its ultimate goal. Perhaps it would even want to become the universe. (Sorry for the long delay in replying; I don't get notifications.)
SteveG00

The picture of Superintelligence as having and allowing a single value system is a Yudkowsky/Bostrom construct. They go down this road because they anticipate disaster along other roads.

Meanwhile, people invariably will want things that get in the way of other people's wants.

With or without AGI, some goods will be scarce. Government and commerce will still have to distribute these goods among people.

For example, some people will wish to have as many children or other progeny as they can afford, and AI and medical technology will make it easier for peo... (read more)

SteveG20

Having a social contract with your progenitors seems to have some intergenerational survival value. I would offer that this social contract may even rank as an instrumental motivation, but I would not count on another intelligence to evolve it by itself.

Typically, while some progenitors value their descendants enough to invest resources in them, progenitors will not wish to create offspring who want to kill them or technology which greatly assists their descendants in acting against the progenitor's goals. (There are exceptions in the animal world, thoug... (read more)

SteveG20

Stunting, tripwires, and designing limitations on the AI's goals and behaviors may be very powerful tools.

We are having a hard time judging how powerful they are because we do not have the actual schemes for doing so in front of us to judge.

Until engineering specifications for these approaches start to be available, the jury will still be out.

We certainly can imagine creating a powerful but stripped-down AGI component without all possible functionality. We can also conceive of ways to test it.

Just to get the ball rolling, consider running it one hundred t... (read more)

SteveG20

This perfect utility function is an imaginary, impossible construction. It would be mistaken from the moment it is created.

This intelligence is invariably going to get caught up in the process of allocating certain scarce resources among billions of people. Some of their wants are orthogonal.

There is no doing that perfectly, only well enough.

People satisfice, and so would an intelligent machine.
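
A toy sketch of the distinction (the utilities and aspiration level are made up): a maximizer scans every option for the argmax; a satisficer, in Simon's sense, takes the first option that clears an aspiration level.

```python
import random

random.seed(0)
options = [random.random() for _ in range(1_000_000)]  # toy utilities

# Maximizer: examine everything, take the argmax.
best = max(options)

# Satisficer (Simon): take the first option above an aspiration level.
ASPIRATION = 0.95                    # "well enough" threshold (assumed)
first_good = next(x for x in options if x >= ASPIRATION)

print(f"maximizer picked {best:.6f} after scanning all "
      f"{len(options):,} options")
print(f"satisficer picked {first_good:.6f} after scanning "
      f"{options.index(first_good) + 1:,} options")
```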

0TRIZ-Ingenieur
I fully agree. Resource limitation is a core principle of every purposeful entity. Matter, energy and time never allow maximization. For any project, constraints culminate down to: within a fixed time and fiscal budget, the outcome must be of sufficiently high value to enough customers to get ROI and make profits soon. A maximizing AGI would never stop optimizing and simulating. No one would pay the electricity bill for such an indecisive maximizer. Satisficing and heuristics should be our focus. Gerd Gigerenzer (Max Planck/Berlin) published his excellent book Risk Savvy in English this year. Using the example of portfolio optimization, he explained simple rules for dealing with uncertainty:

* For a complex, diffuse problem with many unknowns and many options: use simple heuristics.
* For a simple, well-defined problem with known constraints: use a complex model.

The recent banking crisis gives proof: complex evaluation models failed to predict the upcoming crisis. Gigerenzer is currently developing simple heuristic rules together with the Bank of England. For the complex, not well-defined control problem, we should not try to find a complex utility function. With the advent of AGI we might have only one try.
2Sebastian_Hagen
How do you know? It's a strong claim, and I don't see why the math would necessarily work out that way. Once you aggregate preferences fully, there might still be one best solution, and then it would make sense to take it. Obviously you do need a tie-breaking method for when there's more than one, but that's just an optimization detail of an optimizer; it doesn't turn you into a satisficer instead.
SteveG00

So we are considering a small team with some computers claiming superior understanding of what the best set of property rights is for the world?

Even if they are generally correct in their understanding, by disrespecting norms and laws regarding property, they are putting themselves in the middle of a billion previously negotiated human-to-human disputes and ambitions, small and large, in an instant. Yes, that is foolish of them.

Human systems like those which set property rights either change over the course of years, or typically the change is associated ... (read more)

0Sebastian_Hagen
No. That would be worked out by the FAI itself, as part of calculating all of the implications of its value systems, most likely using something like CEV to look at humanity in general and extrapolating their preferences. The programmers wouldn't need to, and indeed probably couldn't, understand all of the tradeoffs involved. There are large costs to that. People will die and suffer in the meantime. Parts of humanity's cosmic endowment will slip out of reach due to the inflation of the universe, because you weren't willing to grab the local resources needed to build probe launchers to get to them in time. Other parts will remain reachable, but will have decreased in negentropy due to stars having continued to burn for longer than they needed to. If you can fix these things earlier, there's a strong reason to do so.
SteveG00

I hear you.

The issue THEN, though, is not just deterring and controlling an early AGI. The issue becomes how a population of citizens (or an elite) control a government that has an early AGI available to it.

That is a very interesting issue!

0TRIZ-Ingenieur
A major intelligence agency recently announced it would replace human administrators with "software". Their job is infrastructure profusion. Government was removed from the controlling post in 2001 at the latest. Competing agencies know that the current development points directly towards AGI that disrespects human property rights; they have to strive for similar technology.
SteveG30

Just to go a bit further with Pinker, as an exercise try for once to imagine a Nurturing AGI. What would it act like? How would it be designed?

SteveG10

On infrastructure profusion:

What idiot is going to give an AGI a goal which completely disrespects human property rights from the moment it is built?

Meanwhile, an AGI that figured out property rights from the internet would have some idea that if it ignored property rights, people would want to turn it off. If it has goals which were not possible to achieve once turned off, then it would respect property rights for a very long time as an instrumental goal.

And I do believe we should be able to turn off an off-the-grid AGI running on a limited amount of com... (read more)

2the-citizen
Wouldn't most AGI goals disregard property rights unless it was explicitly built in? And if it was built in, wouldn't an AGI just create a situation (e.g. progressive blackmail or deception or something) where we wanted to sell it the universe for a dollar?
4Sebastian_Hagen
It would be someone with higher values than that, and this does not require any idiocy. There are many things wrong with the property allocation in this world, and they'll likely get exaggerated in the presence of higher technology. You'd need a very specific kind of humility to refuse to step over that boundary in particular. Not necessarily "a very long time" on human timescales. It may respect these laws for a large part of its development, and then strike once it has amassed sufficient capability to have a good chance at overpowering human resistance (which may happen quite quickly in a fast takeoff scenario). See Chapter 6, "An AI takeover scenario".
4Lumifer
A government :-P
SteveG30

On the Pinker excerpt:

He is part way to a legitimate point.

The distinction is not between male and female. Instead, the issue is whether to design a mind around the pursuit of a mathematically optimal single objective.

Pinker is right that single-mindedly pursuing a single, narrow objective would be psychotic for a person.

Meanwhile, Omohundro points out that the amount of computing time required to use a computerized optimization method to make decisions explodes as more knowledge about the real world is built into the optimization.

Herbert Simon, meanwhil... (read more)

4Sebastian_Hagen
Why? As you say, humans don't. But human minds are weird, overcomplicated, messy things shaped by natural selection. If you write a mind from scratch, while understanding what you're doing, there's no particular reason you can't just give it a single utility function and have that work well. It's one of the things that makes AIs different from naturally evolved minds.
SteveG10

This is an entire direction of research which deserves vastly more than a single throwaway line in one blog. There should be a whole thread just about this, then a proposal, then a research team on it.

SteveG60

I hear you and kind of agree. On the other hand, when a 3-year-old lies, sometimes they manage to pull it off.
