All of JamesAndrix's Comments + Replies

Anecdote: I think I've had better responses summarizing LW articles in a few paragraphs without linking, than linking to them with short explanations.

It does take a lot to cross those inferential distances, but I don't think quite that much.

To be fair, my discussions may not cover a whole sequence; I have the opportunity to pick out what is needed in a particular instance.

Sucks less sucks less.

1Decius
What's the adjectival form of suck?

One trouble is that that is essentially tacking mind enslavement onto the WBE proposition. Nobody wants that. Uploads wouldn't volunteer. Even if a customer paid enough of a premium for an employee with loyalty modifications, that only rolls us back to relying on the good intent of the customer.

This comes down to the exact same arms race between friendly and 'just do it', with extra ethical and reverse-engineering hurdles. (I think we're pretty much stuck with testing and filtering based on behavior, and some modifications will only be testable after uploading is available.)

Mind you I'm not saying don't do work on this, I'm saying not much work will be done on it.

0JoshuaFox
Yes, creating WBEs or any other AIs that may have personhood brings up a range of ethical issues on top of preventing human extinction.

I think we're going to get WBEs before AGI.

If we viewed this as a form of heuristic AI, it follows from your argument that we should look for ways to ensure friendliness of WBEs. (Ignoring the ethical issues here.)

Now, maybe this is because most real approaches would consider ethical issues, but it seems like figuring out how to modify a human brain so that it doesn't act against your interests, even if it is powerful, and without hampering its intellect, is a big 'intractable' problem.

I suspect no one is working on it and no one is going to, even though we... (read more)

0JoshuaFox
Right, SI's basic idea is correct. However, given that WBEs will in any case be developed (and we can mention IA as well), I'd like to see more consideration of how to keep brain-based AIs as safe as possible before they enter their Intelligence Explosion -- even though we understand that after an Explosion, there is little you can do.

Moody set it as a condition for being able to speak as an equal.

There is some time resolution.

Albus said heavily, "A person who looked like Madam McJorgenson told us that a single Legilimens had lightly touched Miss Granger's mind some months ago. That is from January, Harry, when I communicated with Miss Granger about the matter of a certain Dementor. That was expected; but what I did not expect was the rest of what Sophie found."

"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

http://en.wikipedia.org/wiki/Clarke%27s_three_laws

That would make (human[s] + predictor) into an optimization process that was powerful beyond the human[s]'s ability to steer. You might see a nice-looking prediction, but you won't understand the value of the details, or the value of the means used to achieve it. (Which would be called trade-offs in a goal-directed mind, but nothing weighs them here.)

It also won't be reliable to look for models in which you are predicted to not hit the Emergency Regret Button, as that may just find models in which your regret evaluator is modified.
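To make that second worry concrete, here is a toy sketch of the selection effect I mean. Everything in it is made up for illustration (the plan attributes, the thresholds, the names); it is not anyone's actual design.

```python
import random

random.seed(0)

def sample_plan():
    # Hypothetical attributes: how good the outcome really is, and whether
    # the plan happens to disable whatever would have registered regret.
    return {
        "desirability": random.gauss(0, 1),
        "tampers_with_evaluator": random.random() < 0.05,
    }

def predicted_button_press(plan):
    # If the regret evaluator is tampered with, regret is never registered,
    # no matter how bad the plan really is.
    if plan["tampers_with_evaluator"]:
        return False
    return plan["desirability"] < 0.5  # mediocre outcomes trigger regret

plans = [sample_plan() for _ in range(100_000)]
kept = [p for p in plans if not predicted_button_press(p)]

before = sum(p["tampers_with_evaluator"] for p in plans) / len(plans)
after = sum(p["tampers_with_evaluator"] for p in kept) / len(kept)
print(f"tampering plans, unfiltered:        {before:.1%}")
print(f"tampering plans, among 'no regret': {after:.1%}")
```

The point is only the selection effect: conditioning on the predicted absence of regret is not the same as conditioning on there being nothing to regret.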

0John_Maxwell
Is a human equipped with Google an optimization process powerful beyond the human's ability to steer?

For example, a hat and a cloak may be a uniform in a secret society, to be worn in special circumstances.

I much like the idea of this being a standard spell, as that provides further cover for your identity.

The Guy Fawkes mask is the modern equivalent.

Almost any human existential risk is also a paperclip risk.

Foundations of Neuroeconomic Analysis

Without getting into the legal or moral issues involved, there is a """library""" 'assigned to the island state of Niue', it's pretty damned good, and that's all I have to say about that.

1lukeprog
Gah! It wasn't there when I was looking many months ago.

and secondly, a medievalesque public school is such a stereotypically British environment that one expects the language to match.

During the Revolution, Salem witches were considerably more adept at battle magic than those taught at the institution that had been sucking magical knowledge out of the world for the previous 600 years. They also had the advantage of being able to train in the open since most Puritans were self-obliviating.

It wasn't until the 1890's that the school returned fully to Ministry control after the retirement of Headmaster Teetonka... (read more)

So does Dumbledore know that Snape is putting the Sorcerer's Stone back into Gringotts?

by the time year one ends, she and Harry will be participating side by side against serious, life threatening issues.

Absolutely not.

Draco will be in between them.

4Raemon
I'm pretty positive Draco and Hermione will be flanking him.
4pedanterrific
Dramiorry: OT3 for lyfe.

A cheap talk round would favor CliqueBots.

That O only took off once other variants were eliminated suggests a rock-paper-scissors relationship. But I suspect O only lost early on because parts of its ruleset are too vulnerable to the wider bot population. So which rules was O following most when it lost against early opponents, and which rules did it use to beat I and C4?

100wedrifid

That O only took off once other variants were eliminated suggests a rock-paper-scissors relationship. But I suspect O only lost early on because parts of its ruleset are too vulnerable to the wider bot population. So which rules was O following most when it lost against early opponents, and which rules did it use to beat I and C4?

O is tit-for-tat with 3 defections at the end (TFT3D). It has extra rules to help with weird players and randoms. Those don't matter much once the weird guys are gone and it becomes a game between the tit-for-tats so ignore t... (read more)
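For anyone who wants the core spelled out, here is a minimal sketch of tit-for-tat with three end-game defections. The interface (both histories plus rounds remaining) is a guess for illustration, and the extra rules for randoms and weird players are omitted; this is not the actual tournament entry.

```python
def tft3d(my_history, their_history, rounds_remaining):
    """Tit-for-tat, except defect unconditionally in the last three rounds.

    'C' = cooperate, 'D' = defect. The real entry also had rules for
    handling random and otherwise strange opponents; those are left out.
    """
    if rounds_remaining <= 3:
        return "D"               # scheduled end-game defections
    if not their_history:
        return "C"               # open with cooperation
    return their_history[-1]     # otherwise mirror the opponent's last move
```

Against plain tit-for-tat, those scheduled defections pick up a small edge at the very end, which is the kind of margin that matters once only the tit-for-tat variants are left.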

Is there an easy way to change the logo/name?

It would be good to have a more generic default name and header; as this takes off, there will be half-finished sites turning up in Google.

0atucker
Good thing to consider, though I'm not sure if this is actually going to happen. I'd guess that you need to do some configuration on your home router in order for your local site to show up to the rest of the internet. Does Google crawl fast enough to catch people's personal computers' IPs? I'd be surprised if they did. I hope to see a lot of forks of LW on github, but half-finished code on the internet doesn't look that bad.

I will try to get a torrent up shortly (never created a torrent before)

--Posted from the lesswrong VM

Edit: am I doing this right? Will seed with fiber.

You should all attribute this event to my wishing for it earlier today.

5Joey_Goldman
Love some good post hoc ergo propter hoc.

Please paraphrase the conclusion in the introduction. This should be something more like an abstract, so I can get an answer with minimal digging.

The opposite end of this spectrum has network news teasers. "Will your children's hyperbolic discounting affect your retirement? Find out at 11."

When I saw that, I thought it was going to be an example of a nonsensical question, like "When did you stop beating your wife?".

I get writer's block, or can't get past a simple explanation of an idea, unless I'm conversing online (usually some form of debate), in which case I can write pages and pages with no special effort.

I generally go with cross-domain optimization power. http://wiki.lesswrong.com/wiki/Optimization_process Note that an optimization target is not the same thing as a goal, and the process doesn't need to exist within obvious boundaries. Evolution is goalless and disembodied.

If an algorithm is smart because a programmer has encoded everything that needs to be known to solve a problem, great. That probably reduces potential for error, especially in well-defined environments. This is not what's going on in translation programs, or even the voting system here. (ba... (read more)

Making it more accurate is not the same as making it more intelligent. The question is: how does making something "more intelligent" change the nature of the inaccuracies? In translation especially, there can be a bias without any real inaccuracy.

Goallessness at the level of the program is not what makes translators safe. They are safe because neither they nor any component is intelligent.

3asr
Most professional computer scientists and programmers I know routinely talk about "smart", "dumb", "intelligent" etc algorithms. In context, a smarter algorithm exploits more properties of the input or the problem. I think this is a reasonable use of language, and it's the one I had in mind. (I am open to using some other definition of algorithmic intelligence, if you care to supply one.) I don't see why making an algorithm smarter or more general would make it dangerous, so long as it stays fundamentally a (non-self-modifying) translation algorithm. There certainly will be biases in a smart algorithm. But dumb algorithms and humans have biases too.

It seems that the narrative of unfriendly AI is only a risk if an AI were to have a true goal function, and many useful advances in artificial intelligence (defined in the broad sense) carry no risk of this kind.

What does it mean for a program to have intelligence if it does not have a goal? (or have components that have goals)

The point of any incremental intelligence increase is to let the program make more choices, and perhaps choices at higher levels of abstraction. Even at low intelligence levels, the AI will only 'do a good job' if the basis of th... (read more)

0Nebu
This is a very interesting question, thanks for making me think about it. (Based on your other comments elsewhere in this thread), it seems like you and I are in agreement that intelligence is about having the capability to make better choices. That is, given two agents with an identical problem and identical resources to work with, the agent that is more intelligent is more likely to make the "better" choice. What does "better" mean here? We need to define some sort of goal and then compare the outcomes of their choices and how closely those outcomes match those goals. I have a couple of disorganized thoughts here:
* The goal is just necessary for us, outsiders, to compare the intelligence of the two agents. The goal is not necessary for the existence of intelligence in the agents if no one's interested in measuring their intelligence.
* Assuming the agents are cooperative, you can temporarily assign subgoals. For example, perhaps you and I would like to know which one of us is smarter. You and I might have many different goals, but we might agree to temporarily take on a similar goal (e.g. win this game of chess, or get the highest number of correct answers on this IQ test, etc.) so that our intelligence can be compared.
* The "assigning" of goals to an intelligence strongly implies to me that goals are orthogonal to intelligence. Intelligence is the capability to fulfil any general goal, and it's possible for someone to be intelligent even if they do not (currently, or ever) have any goals. If we come up with a new trait called Sodadrinkability, which is the capability to drink a given soda, one can say that I possess Sodadrinkability -- that I am capable of drinking a wide range of possible sodas provided to me -- even if I do not currently (or ever) have any sodas to drink.
2asr
Consider automatic document translation. Making the translator more complex and more accurate doesn't imbue it with goals. It might easily be the case that in a few years, we achieve near-human accuracy at automatic document translation without major breakthroughs in any other area of AI research.

I suspect Richard would say that the robot's goal is minimizing its perception of blue. That's the PCT perspective on the behavior of biological systems in such scenarios.

This 'minimization' goal would require a brain that is powerful enough to believe that lasers destroy or discolor what they hit.

If this post were read by blue aliens that thrive on laser energy, they'd wonder why we were so confused as to the purpose of an automatic baby feeder.

1MarkusRamikin
If this post were read by blue aliens that thrive on laser energy, they'd wonder why we were so confused as to the purpose of an automatic baby feeder.
Clever!
4pjeby
From the PCT perspective, the goal of an E. coli bacterium swimming away from toxins and towards food is to keep its perceptions within certain ranges; this doesn't require a brain of any sort at all. What requires a brain is for an outside observer to ascribe goals to a system. For example, we ascribe a thermostat's goal to be to keep the temperature in a certain range. This does not require that the thermostat itself be aware of this goal.
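A minimal sketch of the control-loop picture being described here, with made-up numbers, purely as illustration: the system only acts to push a perception back toward a reference signal, and the "goal" is something we ascribe from outside.

```python
def thermostat_step(perceived_temp, reference=20.0, band=1.0):
    """Act only on the perceived temperature relative to a reference signal."""
    error = reference - perceived_temp
    if error > band:
        return "heat_on"
    if error < -band:
        return "heat_off"
    return "no_change"  # perception is already within the acceptable range

# Nowhere does the loop represent "keep the room at 20 degrees" as a goal;
# it just compares a perception against a reference and reacts.
for temp in (15.0, 18.5, 20.2, 22.5):
    print(temp, thermostat_step(temp))
```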

Hypothesis: Quirrell is positioning Harry to be forced to figure out how to dissolve the wards at Hogwarts. (or at least that's the branch of the Xanatos pileup we're on.)

I have two reasons not to use your system:

One: If you're committed to doing the action if you yourself can find a way to avoid the problems, then as you come to such solutions, your instinct to flinch away will declare the list 'not done yet' and add more problems, and perhaps problems more unsolvable in style, until the list is an adequate defense against doing the thing.

One way to possibly mitigate this is to try not to think of any solutions until the list is done, and perhaps to put some scope restrictions on the allowable conditions. Despite this, there is another problem:

Two: The sun is too big.

2pwno
It's a good exercise in finding your true objections.
5handoflixue
This is my new favourite objection :)
2Normal_Anomaly
I'm afraid I don't get your joke. Does this have anything to do with the system itself, or is it just an example of an insurmountable obstacle?
2Alicorn
How big is too big?

No, not learning. And the 'do nothing else' parts can't be left out.

This shouldn't be a general automatic programming method, just something that goes through the motions of solving this one problem. It should already 'know' whatever principles lead to that solution. The outcome should be obvious to the programmer, and I suspect realistically hand-traceable. My goal is a solid understanding of a toy program exactly one meta-level above Hanoi.

This does seem like something Prolog could do well; if there is already a static program that does this, I'd love to see it.
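To be concrete about "one meta-level above Hanoi", here is a rough sketch of the kind of thing I mean, in Python rather than Prolog and purely as illustration: the program does not solve the puzzle, it emits the source of a program that solves it, and does nothing else. Nothing is learned; the recursive decomposition is already baked in.

```python
# Template for the object-level program; the generator already "knows"
# how the puzzle decomposes recursively.
HANOI_TEMPLATE = '''\
def hanoi(n, source, target, spare):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # clear the way
    print(f"move disk {{n}} from {{source}} to {{target}}")
    hanoi(n - 1, spare, target, source)   # restack on top

hanoi({n}, "A", "C", "B")
'''

def write_hanoi_solver(n, path="hanoi_solver.py"):
    """Emit a standalone Hanoi solver for n disks, and do nothing else."""
    with open(path, "w") as f:
        f.write(HANOI_TEMPLATE.format(n=n))

write_hanoi_solver(3)   # the emitted file, when run, prints the 7 moves
```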

With two differences: CEV tries to correct any mistakes in the initial formulation of the wish (aiming for an attractor), and it doesn't force the designers to specify details like whether making bacteria is ok or not-ok.

It's the difference between painting a painting of a specific scene, and making an auto-focus camera.

I do currently think it is possible to create a powerful cross-domain optimizer that is not a person and will not create persons or unbox itself or look at our universe or tile the universe with anything or make AI that doesn't comply with... (read more)

Minor correction: It may need a hack if it remains unsolved.

There seem to be several orders of magnitude of difference between the two solutions for coloring a ball. You should have better predictions than that for what it can do. Obviously you shouldn't run anything remotely capable of engineering bacteria without a much better theory about what it will do.

I suspect "avoiding changing the world" actually has some human-values baked into it.

This seems to be trying to box an AI with its own goal system, which I think puts it in the tricky-wish category.

1whpearson
See my reply to Vladimir Nesov. Do you count CEV to be in the same category?

I simply must get into the habit of asking for money.

Not doing this is probably my greatest failing.

2Kevin
First lesson of sales is that you have to ask to make the sale.

Well, through seeing red, yes ;-)

Through study, no. I think the knowledge postulated is beyond what we currently have, and must include how the algorithm feels from the inside. (edit: Mary does know through study.)

I definitely welcome the series, though I have not finished it yet, and will need more time to digest it in any case.

If there's a difference in the experience, then there's information about the difference,

The information about the difference is included in Mary's education. That is what was given.

Thus, there's a difference in my state, and thus, something to be surprised about.

Are you surprised all the time? If the change in Mary's mental state is what Mary expected it to be, then there is no surprise.

The word "red" is not equal to red, no matter how precisely you define that word.

How do you know?

If "red" is truly a material subject -- s

... (read more)
2thomblake
This is how this question comes to resemble POAT. Some people read it as a logic puzzle, and say that Mary's knowing what it's like to see red was given in the premise. Others read it as an engineering problem, and think about how human brains actually work.
0orthonormal
By the way, you may not agree with my analysis of qualia (and if so, tell me), but I hope that the way this thread derailed is at least some indication of why I think the question needed dissolving after all. As with several other topics, the answer may be obvious to many, but people tend to disagree about which is the obvious answer (or worse, have a difficult time even figuring out whether their answer agrees or disagrees with someone else's).

No matter how much information is on the menu, it's not going to make you feel full.

"Feeling full" and "seeing red" also jumbles up the question. It is not "would she see red"

In which case, we're using different definitions of what it means to know what something is like. In mine, knowing what something is "like" is not the same as actually experiencing it -- which means there is room to be surprised, no matter how much specificity there is.

But isn't your "knowing what something is like" based on y... (read more)

3pjeby
If there's a difference in the experience, then there's information about the difference, and surprise is thus possible. How, exactly? How will this knowledge be represented? If "red" is truly a material subject -- something that exists only in the form of a certain set of neurons firing (or analogous physical processes) -- then any knowledge "about" this is necessarily separate from the thing itself. The word "red" is not equal to red, no matter how precisely you define that word. (Note: my assumption here is that red is a property of brains, not reality. Human color perception is peculiar to humans, in that it allows us to see "colors" that don't correspond to specific light frequencies. There are other complications to color vision as well.) Any knowledge of red that doesn't include the experience of redness itself is missing information, in the sense that the mental state of the experiencer is different. That's because in any hypothetical state where I'm thinking "that's what red is", my mental state is not "red", but "that's what red is". Thus, there's a difference in my state, and thus, something to be surprised about. Trying to say, "yeah, but you can take that into account" is just writing more statements about red on a piece of paper, or adding more dishes to the menu, because the mental state you're in still contains the label, "this is what I think it would be like", and lacks the portion of that state containing the actual experience of red.

However, materialism does not require us to believe that looking at a menu can make you feel full.

Looking at a menu is a rather pale imitation of the level of knowledge given Mary.

In order for her to know what red actually feels like, she'd need to be able to create the experience -- i.e., have a neural architecture that lets her go, "ah, so it's that neuron that does 'red'... let me go ahead and trigger that."

That is the conclusion you're asserting. I contend that she can know, that there is nothing left for her to be surprised about whe... (read more)

1pjeby
No matter how much information is on the menu, it's not going to make you feel full. You could watch videos of the food being prepared for days, get a complete molecular map of what will happen in your taste buds and digestive system, and still die of hunger before you actually know what the food tastes like. In which case, we're using different definitions of what it means to know what something is like. In mine, knowing what something is "like" is not the same as actually experiencing it -- which means there is room to be surprised, no matter how much specificity there is. This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it. Otherwise, we could become frightened upon merely imagining that a bear was in the room with us. (IOW, at least some portion of our architecture has to be able to represent "this experience is imaginary".) However, none of this matters in the slightest with regard to dissolving Mary's Room. I'm simply pointing out that it isn't necessary to assume perfect knowledge in order to dissolve the paradox. It's just as easily dissolved by assuming imperfect knowledge. And all the evidence we have suggests that the knowledge is -- and possibly must -- be imperfect. But materialism doesn't require that this knowledge be perfectable, since to a true materialist, knowledge itself is not separable from a representation, and that representation is allowed (and likely) to be imperfect in any evolved biological brain.

I think the idea that "what it actually feels like" is knowledge beyond "every physical fact on various levels" is just asserting the conclusion.

I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We've never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we're also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.

I think the Chinese room has... (read more)

8pjeby
Ah, but what conclusion? I'm saying, it doesn't matter whether you assume they're the same or different. Either way, the whole "experiment" is another stupid definitional argument. However, materialism does not require us to believe that looking at a menu can make you feel full. So, there's no reason not to accept the experiment's premise that Mary experiences something new by seeing red. That's not where the error comes from. The error is in assuming that a brain ought to be able to translate knowledge of one kind into another, independent of its physical form. If you buy that implicit premise, then you seem to run into a contradiction. However, since materialism doesn't require this premise, there's no reason to assume it. I don't, so I see no contradiction in the experiment. If you think that you can be "smart enough" then you are positing a different brain architecture than the ones human beings have. But let's assume that Mary isn't human. She's a transhuman, or posthuman, or some sort of alien being. In order for her to know what red actually feels like, she'd need to be able to create the experience -- i.e., have a neural architecture that lets her go, "ah, so it's that neuron that does 'red'... let me go ahead and trigger that." At this point, we've reduced the "experiment" to an absurdity, because now Mary has experienced "red". Neither with a plain human architecture, nor with a super-advanced alien one, do we get a place where there is some mysterious non-material thing left over. Not exactly. It's an intuition pump, drawing on your intuitive sense that the only thing in the room that could "understand" Chinese is the human... and he clearly doesn't, so there must not be any understanding going on. If you replace the room with a computer, then the same intuition pump needn't apply. For that matter, suppose you replace the chinese room with a brain filled with individual computing units... then the same "experiment" "proves" that brains can't po

What is it that she's surprised about?

3pjeby
The difference between knowing what seeing red is supposed to feel like, and what it actually feels like.

From what you quoted I thought you were arguing that there was something for her to be surprised about.

4pjeby
Of course there's something for her to be surprised about. The non-materialists are merely wrong to think this means there's something mysterious or non-physical about that something.

Not being able to make the neurons fire doesn't mean you don't know how it would feel if they did.

I hate this whole scenario for this kind of "This knowledge is a given but wait no it is not." kind of thinking.

Whether or not all the physical knowledge is enough to know qualia is the question and as such it should not be answered in the conclusion of a hypothetical story, and then taken as evidence.

-2Peterdjones
OK. Do you know that? Does Mary?
0orthonormal
It's at least evidence about the way our minds model other minds, and as such it might be helpful to understand where that intuition comes from.
5pjeby
Huh? That sounds confused to me. As I said, I can "know" how it would feel to be betrayed by a friend, without actually experiencing it. And that difference between "knowing" and "experiencing" is what we're talking about here.
-2wedrifid
That does sound fallacious. Fortunately you don't need additional evidence. An even better proposal: You should put the answer in the prologue and then not bother writing a story at all. Because we moved on from that kind of superstition years ago.

There is a definition of terms confusion here between "inherently evil" and "processing data absolutely wrong".

I also get the impression that much of Europe is an extremely secular society that does OK.

There is confusion for individuals transitioning and perhaps specific questions that need to be dealt with by societies that are transitioning. But in general there is already a good tested answer for what religion can be replaced by. Getting that information to the people who may transition is trickier.

Rationalists should also strive to be precise, but you should not try to express precisely what time it was that you stopped beating your wife.

Much of rationality is choosing what to think about. We've seen this before in the form of righting a wrong question, correcting logical fallacies (as above), using one method to reason about probabilities in favor of another, and culling non-productive search paths (which might be the most general form here).

The proper meta-rule is not 'jump past warning signs'. I'm not yet ready to propose a good phrasing of the proper rule.

-1lessdazed
I thoroughly endorse this comment. Just a note relevant for people involved in the discussion on this page regarding upvoting and downvoting. This is a sort of situation in which I might downvote lessdazed's comment below, simply to increase local contrast between the vote totals of responses to the parent (so long as I did not push the score of the below comment into the negatives). This is true even though I (happen to ;-)) agree with the below comment. Downvoting is not a personal thing, and if you take it personally, it is probably because it happens to be so for you and you are projecting your voting behavior onto others. In all discussions of voting I've seen, people have different criteria. Apologies for metaness and thread hijack.

Are the effects of the alien practical joke curable?

This Buffalonian should be able to go in the future, but the more notice the better.

james.andrix@gmail.com

So when is your book on rationality coming out?

3David_Gerard
When he's posted here daily for a couple of years, obviously ;-)

Would it also be moral to genetically engineer a human so that it becomes suicidal as a teenager?

0TobyBartels
It would be immoral to genetically engineer suicidal depression, and it would be immoral to engineer the desire to die in this society, where it cannot easily be fulfilled. But imagine that puberty, instead of leading people to want to have sex, led us (or some of us) to want to die. While this might be as bad as puberty currently is, with new hormones and great confusion, hopefully a competent genetic engineer would actually make it better. No depression here, but looking forward to becoming an adult, with all that this entails. Presumably the engineer even has some purpose in mind, but even if not, I'm sure that society is more than capable of making one up. There must already be a science fiction story out there with this premise, but I don't know one.

Imagine two Universes, both containing intelligent beings simulating the other Universe.

I don't see how that can really happen. I've never heard a non-hierarchical simulation hypothesis.
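The closest I can picture is each side modelling the other, which just regresses. A toy sketch, purely my own illustration, of why I find it hard to see how this bottoms out without something that can compute an actual infinity:

```python
def agent_a(depth):
    if depth == 0:
        return "guess_about_B"   # some approximation has to stand in here
    return f"A models [{agent_b(depth - 1)}]"

def agent_b(depth):
    if depth == 0:
        return "guess_about_A"
    return f"B models [{agent_a(depth - 1)}]"

# Each extra level of fidelity just pushes the guess one layer deeper;
# exact mutual simulation would need the recursion to never bottom out.
print(agent_a(4))
# -> A models [B models [A models [B models [guess_about_A]]]]
```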

3Vladimir_Nesov
Consider an agent that has to simulate itself in order to understand consequences of its own decisions. Of course, there's bound to be some logical uncertainty in this process, but the agent could have an exact definition of itself, and so eventually the ability to see all the facts. For two agents, that's a form of acausal communication (perception). (This is meaningless only in the same sense as the ordinary simulation hypothesis is meaningless.)
3Document
It's one of the implications of a universe that can compute actual infinities; it's been proposed in fiction, but I don't know about beyond that.