All of jmmcd's Comments + Replies

Also "the teacher smiled"? Damn your smugness, teacher!

I'm enjoying these posts.

you do get to decide whether or not to perceive it as a complement or an insult.

compliment

dieties

believed

1So8res
Fixed, thanks.

Harry should trick Voldemort into biting him, and then use his new freedom to bite him back.

From that Future of Life conference: if self-driving cars take over and cut the death rate from car accidents from 32000 to 16000 per year, the makers won't get 16000 thank-you cards -- they'll get 16000 lawsuits.

Yes, that's the point.

(I think sphexish is Dawkins, not Hofstadter.)

0Risto_Saarelma
Hofstadter uses it heavily in Gödel, Escher, Bach in 1979 as the metaphor for things that are unable to Jump Out Of The System. Dawkins only had The Selfish Gene out by then, and The Selfish Gene wasn't really about algorithmic rigidity.

I think it's a bit of a leap to go from NASA being under-funded and unambitious in recent years to "people 50 years from now, in a permanently Earth-bound reality".

-1advancedatheist
Some people, like Keith Henson, argue that we've blown the thermodynamic opportunity to get off planet because we've already squandered the best quality fossil fuels.

Not sure if it's in HPMOR but the symbol for the deadly hallows contains two right triangles.

EDIT err, deathly, I guess. I don't seem to be a trufan.

I'm afraid I won't have time to give you more help. There's a short summary of each sequence under the link at the top of the page, so it won't take you forever to see the relevance.

EDIT: you're wondering elsewhere in the thread why you're not being well received. It's because your post doesn't make contact with what other people have thought on the topic.

0David Scott Krueger (formerly: capybaralet)
I put "enjoy itself" in quotes, because I don't mean it literally. The questions that that sequence addresses according to the summary don't seem relevant to what I am trying to get at. I guess I need to be more precise. I just mean how can we maximize the integral of experience through time (whether we let experience take negative values is a detail). This was one of Tegmark's proposals in that paper, already, except he is writing in terms of a final goal instead of a process, which was the point of my post... "The amount of consciousness in our Universe, which Giulio Tononi has argued corresponds to integrated information"

So how can the universe "enjoy itself" as much as possible before the big crunch (or before and during the heat death)*.

Maybe read the Fun Theory sequence?

-3David Scott Krueger (formerly: capybaralet)
Maybe tell me why I should? My time is valuable.
0Manfred
I dunno if the universe can read, jmmcd. ;P

It might be useful to look at Pareto dominance and related ideas, and the way they are used to define concrete algorithms for multi-objective optimisation, eg NSGA2, which is probably the most used.
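For concreteness, here is a minimal sketch of the Pareto-dominance test that NSGA2 and similar algorithms build on (Python, toy objective vectors, minimisation assumed on every objective; the helper name is mine, not any particular library's API):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation assumed):
    a is no worse on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Toy objective vectors (hypothetical): (cost, error).
points = [(1.0, 5.0), (2.0, 2.0), (3.0, 4.0), (4.0, 1.0)]

# The non-dominated set is the Pareto front; NSGA2 sorts the whole
# population into successive fronts built from this comparison.
front = [p for p in points if not any(dominates(q, p) for q in points if q != p)]
print(front)  # [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0)] -- (3.0, 4.0) is dominated
```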

OP mentions "I used less water in the shower", so is obviously not only looking for extraordinary outcomes. So "saving the world" does indeed sound silly.

Any AI that would do this is unFriendly. The vast majority of uFAIs have goals incompatible with human life but not in any way concerned with it. [...] Therefore there is little to fear in the way of being tortured by an AI.

That makes no sense. The uFAIs most likely to be created are not drawn uniformly from the space of possible uFAIs. You need to argue that none of the uFAIs which are likely to be created will be interested in humans, not that few of all possible uFAIs will.

Off-topic:

I'm not talking about a basic vocabulary, but a vocabulary beyond that of the average, white, English-as-a-first-language adult.

Why white?

0aletheianink
I have no idea why I put that. I was trying to just be very specific, so people wouldn't ask "well, what if they hadn't heard of x" or whatever ... it may be because I'm used to reading about the entitlement of average, white, English-speaking people (specifically men), and just linked that in without thinking. It's irrelevant, so I'll go fix it - thanks.

Golly, that sounds to me as if the people of this age don't go to heaven!

it's unclear to me how the category of "evolutionary restrictions" could apply to rationality techniques. Suggestions?

Not sure if this simple example is what you had in mind, but -- evolution wasn't capable of making us grow nice smooth erasable surfaces on our bodies, together with ink-secreting glands in our index fingers, so we couldn't evolve the excellent rationality technique of writing things down to remember them. So when writing was invented, the inventor was entitled to say "my invention passes the EOC because of the 'evolutionary restrictions' clause".

And more important, its creators want to be sure that it will be very reliable before they switch it on.

can read the statement on its own

I like the principle behind Markdown: if it renders, fine, but if it doesn't, it degrades to perfectly readable plain-text.

A percentage is just fine.

I like the principle, but 5% is "extremely unlikely"? Something that happens on the way to work once every three weeks?

2ChristianKl
It can be a bit scary, but in a lot of domains that's exactly what people mean when they say extremely unlikely. It's extremely unlikely that humans aren't responsible for global warming.

"X as a Y" is an academic idiom. Sounds wrong for the target audience.

Not being able to have any children, or as many as you (later realised you) wanted.

The claim is that it was obvious in advance. The whole reason AI-boxing is interesting is that the AI successes were unexpected, in advance.

the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant

I don't see that it was obvious, given that none of the AI players are actually superintelligent.

3wedrifid
If the finding was that humans pretending to be AIs failed then this would weaken the point. As it happens the reverse is true.

This discussion isn't getting anywhere, so, all the best :)

O.K, demonstrate that the idea of deterrent exists somewhere within their brains.

Evolutionary game theory and punishment of defectors is all the answer you need. You want me to point at a deterrent region, somewhere to the left of Broca's?

You say that science is useful for truths about the universe, whereas morality is useful for truths useful only to those interested in acting morally. It sounds like you agree with Harris that morality is a subcategory of science.

something can be good science without in any way being moral that Sam Harris would recog

... (read more)
1Carinthium
A- Not so. If the human does not consciously or subconsciously care about deterrent, evolutionary reasons are irrelevant.
B- Only if, and this is a big if, you agree with the Eliezer-Harris school of thought which says some things are morally true by definition. Because Harris agrees with him, I was granting him that as his own unique idea of what being moral is. However, at that point I was concerned with demonstrating that morality cannot fit as a subcategory of science.
C- Harris appears to claim that there is a scientific basis for valuing well-being: he explicitly repudiates the hypothesis that there is none, by claiming it comparable to the claim that there is no scientific basis for valuing health.

If you claim that evolutionary reasons are a person's 'true preferences'

No, of course not. It's still wrong to say that deterrent is nowhere in their brains.

Concerning the others:

Scientific inquiry perceives facts which are true and useful except for goals which run directly counter to science. Morality perceives 'facts' which are only useful to those who wish to follow a moral route.

I don't see what "goals which run directly counter to science" could mean. Even if you want to destroy all scientists, are you better off knowing some scienc... (read more)

0Carinthium
A- O.K., demonstrate that the idea of deterrent exists somewhere within their brains.
B- Although it would be as alien as being a paperclip maximiser, say I deliberately want to know as little as possible. That would be a hypothetical goal for which science would not be useful. As for how this counters Harris: Harris claims that some things are moral by definition and claims that proper morality is a subcategory of science. I counterargue that the fundamental differences between the nature of morality and the nature of science are problems with this categorisation. I'm not sure if Harris's health analogy is relevant enough to this part of the argument to put here, but it falls flat because health is relevant to far more potential human goals than morality is. Moral dilemmas in which a person has to choose between two possible moral values are plausibly enough addressed (though I have reservations), so I'll give him a pass on that one. But what about a situation where a person has to choose between acting selfishly and acting selflessly? You can say one is the moral choice by definition, depending on the definition of moral, but saying "It's moral so do it" leads to the question "Why should I do what is moral?" With health people don't actually question it because it tends to support their goals, although there is a similarity Harris and his critics do not appear to realise, in that a person can and might ask "Why should I do what is healthy?" in some circumstances.
C- What I am trying to argue with my psychopath analogy is that something can be good science without in any way being what Sam Harris would recognise as 'moral'. The psychopath in my scenario is using the scientific method in every way except those which he can't by definition, given his goals: he even has a peer review committee! His behaviour is therefore just as scientific as the scientist trying to, say, cure cancer.
D- I was only acting from what I read in his responses to the critics, which was m

I disagree with all your points, but will stick to 4: "Deterrent is nowhere in their brains" is wrong -- read about altruism, game theory, and punishment of defectors, to understand where the desire comes from.

2Carinthium
Evolutionarily it is a REASON why the desire evolved that way, but it is not the same thing as what the person FEELS, on a conscious or subconscious level. If you claim that evolutionary reasons are a person's 'true preferences', then it follows that a proper morality should focus on maximising everyone's relative shares of the gene pool at the expense of, say, animals rather than anything else. EDIT: I'm also curious about your response to all of my arguments.

Nevertheless, moral questions aren't (even potentially) empirical, since they're obviously seeking normative and not factual answers.

You can't go from an is to an ought. Nevertheless, some people go from the "well-being and suffering" idea to ideas like consequentialism and utilitarianism, and from there the only remaining questions are factual. Other people are prepared to see a factual basis for morality in neuroscience and game theory. These are regular topics of discussion on LW. So calling it "obvious" begs the whole question.

control over the lower level OS allows for significant performance gains

Even if you got a 10^6 speedup (you wouldn't), that gain is not compoundable. So it's irrelevant.

access to a comparatively simple OS and tool chain allows the AI to spread to other systems.

Only if those other systems are kind enough to run the O/S you want them to run.

1Gunnar_Zarncke
It may be irrelevant in the end but not in the beginning. I'm not really talking about the runaway phase of some AI but about the hard or non-hard takeoff, and there any factor will weigh heavily. 10^3 will make the difference between years and hours.

The unstated assumption is that a non-negligible proportion of the difficulty in creating a self-optimising AI has to do with the compiler toolchain. I guess most people wouldn't agree with that. For one thing, even if the toolchain is a complicated tower of Babel, why isn't it good enough to just optimise one's source code at the top level? Isn't there a limit to how much you can gain by running on top of a perfect O/S?

(BTW the "tower of Babel" is a nice phrase which gets at the sense of unease associated with these long toolchains, (eg) Python - RPython - LLVM - ??? - electrons.)

2Gunnar_Zarncke
There are lots of reasons why an optimizable OS and tool chain are relevant:
* control over the lower level OS allows for significant performance gains (there have been significant algorithmic gains in process isolation, scheduling and e.g. garbage collection on the OS level, all of which improve run-time).
* access to a comparatively simple OS and tool chain allows the AI to spread to other systems. Writing a low level virus is significantly more 'simple', powerful, effective and possible to hide than spreading via text interface.
* a kind of self-optimizable tool chain is presumably needed within an AI system anyway, and STEPS proposes a way to not only model but actually build this.

Agreed, but I think given the kind-of self-deprecating tone elsewhere, this was intended as a jibe at OP's own superficial knowledge rather than at the transportation systems of developing countries.

Ok, but are we optimising the expected case or the worst case? If the former, then the probability of those things happening with no special steps against them is relevant. To take the easiest example: would postponing the "take over the universe" step for 300 years make a big difference in the expected amount of cosmic commons burned before takeover?

2Baughn
Depends. Would this allow someone else to move outside its defined sphere of influence and build an AI that doesn't wait? If the AI isn't taking over the universe, that might leave the option open that something else will. If it doesn't control humanity, chances are that will be another human-originated AI. If it does control humanity, why are we waiting?

That page mentions "common sense" quite a bit. Meanwhile, this is the latest research in common sense and verbal ability.

I don't think it's useful to think about constructing priors in the abstract. If you think about concrete examples, you see lots of cases where a reasonable prior is easy to find (eg coin-tossing, and the typical breast-cancer diagnostic test example). That must leave some concrete examples where good priors are hard to find. What are they?
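For the record, the breast-cancer example mentioned above is usually run with illustrative numbers something like these (the exact figures vary by telling, and they are not real clinical data); the base rate does all the work of supplying the prior:

```python
# Classic diagnostic-test example, illustrative numbers only (not clinical data).
prior = 0.01        # P(disease): the base rate supplies the prior
sensitivity = 0.80  # P(positive | disease)
false_pos = 0.10    # P(positive | no disease)

p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # ~0.075: even after a positive test, disease is unlikely
```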

To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.

It sounds like status quo bias. If growth was currently 2% higher, should the person then seize on growth-slowing opportunities?

On... (read more)

Status is far older than Hanson's take on it, or than Hanson himself. But the idea of seeing status signalling everywhere, as an explanation for everything -- that is characteristically Hanson. (Obviously, don't take my simplification seriously.)

The idea of talking about seeing status signaling everywhere is characteristically Hanson. I would not be surprised in the least if many smart politicians and socialites throughout history had also observed this but had the good sense not to talk about it in public.

Yes, but the next line mentioned PageRank, which is designed to deal with those types of issues. Lots of inward links doesn't mean much unless the people (or papers, or whatever, depending on the semantics of the graph) linking to you are themselves highly ranked.
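To make that concrete, here is a minimal power-iteration sketch on a made-up four-node graph (a toy illustration of the PageRank idea, not Google's actual implementation): A and B each receive exactly one inward link, but A's comes from a highly ranked node and B's from an obscure one, so A ends up ranked far higher.

```python
def pagerank(links, d=0.85, iters=100):
    """Minimal PageRank by power iteration. links[u] lists the nodes u points to."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = links[u] or nodes  # dangling nodes spread their rank evenly
            for v in out:
                new[v] += d * rank[u] / len(out)
        rank = new
    return rank

# Hypothetical graph: A and B each have exactly one inward link, but A's comes
# from D (which sits in a rank-reinforcing cycle with A), while B's comes from
# the obscure node C, which nothing links to.
links = {"A": ["D"], "B": [], "C": ["B"], "D": ["A"]}
print(pagerank(links))  # A scores several times B's rank despite equal in-degree
```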

3CarlShulman
Yep, a data-driven process could be great, but if what actually gets through the inertia is the simple version, this is an avenue for backfire.

Don't forget that the goal in the Turing Test is not to appear intelligent, but to appear human. If an interrogator asks "what question would you ask in the Turing test?", and the answer is "uh, I don't know", then that is perfectly consistent with the responder being human. A smart interrogator won't jump to a conclusion.

"That which has happened before is less likely to happen again" (a reference to an old Overcoming Bias post I can't locate).

Good point. In fact, that is the type of environment which is required for the No Free Lunch theorems mentioned in the post to even be relevant. A typical interpretation in the evolutionary computing field would be that it's the type of environment where an anti-GA (a genetic algorithm which selects individuals with worse fitness) does better than a GA. There are good reasons to say that such environments can't occur for ... (read more)
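To make the anti-GA idea concrete, here is a toy sketch (my own construction, not from the post, and a (1+1)-style simplification of the population-based case rather than a full No Free Lunch argument): a deceptive "trap" landscape on which a climber that keeps the worse individual ends up sampling the global optimum, while the ordinary climber never does.

```python
import random

def trap(bits):
    """Deceptive 'trap' function (maximisation): fitness rewards zeros,
    but the single global optimum is the all-ones string."""
    ones = sum(bits)
    return len(bits) + 1 if ones == len(bits) else len(bits) - ones

def climb(n=20, steps=2000, anti=False, seed=0):
    """(1+1)-style climber; with anti=True it keeps the WORSE of parent and
    child (the 'anti-GA' idea). Returns the best fitness value ever evaluated."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best_seen = trap(x)
    for _ in range(steps):
        y = list(x)
        y[rng.randrange(n)] ^= 1  # flip one random bit
        best_seen = max(best_seen, trap(y))
        keep_child = trap(y) <= trap(x) if anti else trap(y) >= trap(x)
        if keep_child:
            x = y
    return best_seen

print(climb(anti=False))  # ordinary climber: marches to all-zeros, best seen ~20
print(climb(anti=True))   # anti-climber: drifts upward and samples the optimum (21)
```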

I think you're right that the OP doesn't quite hit the mark, but you got carried away and started almost wilfully misinterpreting. Especially your answers to 4, 5 and 6.

In the Soviet Union religion was marginalized for some 70 years, two generations grew up in the environment of state atheism, yet soon after the restrictions were relaxed, the Church has regained almost all of the lost ground. The situation was similar in the rest of the ex-Warsaw bloc (with less time under mandated atheism), and even in China, where the equilibrium was restored after the Cultural Revolution. The standard argument [bold added] for this happening is "but Communism was basically a religion by another name", what with the various C

... (read more)

I think that interesting results which fail to replicate are almost always better-known than the failure to replicate. I think it's a fundamental problem of science, rather than a special weakness of programmers.

I really like Thinking: Right and Wrong, but if there is a danger that Right be misconstrued as conservative, then how about a variant? This is my only suggestion and it doesn't sound as good, but there must be better ones:

Thinking: Good and Bad

"Thinking: Wrong and Less Wrong".

... but it's a bit of an in-joke. Or an in-not-exactly-joke.

"loosing" is still incorrect.

In a sense, bookies could be interpreted as "money pumping" the public as a whole. But somehow, it turns out that any single individual will rarely be stupid enough to take both sides of the same bet from the same bookie, in spite of the fact that they're apparently irrational enough to be gambling in the first place.

Suggest making the link explicit with something like this: "in spite of the fact that they're apparently irrational enough to be part of that public in the first place."

0ChrisHallquist
Gah. Now it should be fixed.

I'm hoping in particular that someone used to feel this way—shutting down an impulse to praise someone else highly, or feeling that it was cultish to praise someone else highly—and then had some kind of epiphany after which it felt, not allowed, but rather, quite normal.

I think there is a necessary distinction between matter-of-fact praising someone highly, and engaging in various sucking-up behaviours such as echoing particular forms of words, or quoting-as-authority. The latter do leave an unpleasant taste and in those cases I can understand the "cult" reaction.

Oh god. Everyone stop talking.

For small vices, it is perhaps more important to ask, "What works?"

http://en.wikipedia.org/wiki/Broken_windows_theory

This is a bit like the "look before you leap", "no, he who hesitates is lost" game.
