All of Apteris's Comments + Replies

Apteris10

Let me clarify why I asked. I think the "multiple layers of abstraction" idea is essentially "build in a lot of 'manual' checks that the AI isn't misbehaving", and I don't think that is a desirable or even possible solution. You can write n layers of checks, but how do you know that you don't need n+1?

The idea being--as has been pointed out here on LW--that what you really want and need is a mathematical model of morality, which the AI will implement and from which moral behaviour will fall out without you having to specify it explicitly. ... (read more)

Apteris20

What happens if an AI manages to game the system despite the n layers of abstraction?

0[anonymous]
This is the fundamental problem that is being researched - the top layer of abstraction would be that difficult-to-define one called "Be Friendly". Instead of friendly AI, maybe we should look at "don't be an asshole" AI (DBAAAI) - this may be simpler to test and monitor.
Apteris00

Your argument would be stronger if you provided a citation. I've only skimmed CEV, for instance, so I'm not fully familiar with Eliezer's strongest arguments in favour of goal structure tending to be preserved in the course of intelligence growth (though I know he did argue for that). For that matter, I'm not sure what your arguments for goal stability under intelligence improvement are. Nevertheless, consider the following:

In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were

... (read more)
0Luke_A_Somers
Sorry for not citing; I was talking with people who would not need such a citation, but I do have a wider audience. I don't have time to hunt it up now, but I'll edit it in later. If I don't, poke me. If at higher intelligence it finds that the volition diverges rather than converges, or vice versa, or that it goes in a different direction, that is a matter of improvements in strategy rather than goals. No one ever said that it would or should not change its methods drastically with intelligence increases.
Apteris00

We might be approaching a point of diminishing returns as far as improving cultural transmission is concerned. Sure, it would be useful to adopt a better language, e.g. one less ambiguous, less subject to misinterpretation, more revealing of hidden premises and assumptions. More bandwidth and better information retrieval would also help. But I don't think these constraints are what's holding AI back.

Bandwidth, storage, and retrieval can be looked at as hardware issues, and performance in these areas improves both with time and with adding more hardware. What AI requires are improvements in algorithms and in theoretical frameworks such as decision theory, morality, and systems design.

Apteris30

I think it will prove computationally very expensive, both to solve protein folding and to subsequently design a bootstrapping automaton. It might be difficult enough for another method of assembly to come out ahead cost-wise.

Apteris10

You're right, that is more realistic. Even so, I get the feeling that the human would have less and less to do as time goes on. I quote:

“He just loaded up on value stocks,” says Mr. Fleiss, referring to the AI program. The fund gained 41% in 2009, more than doubling the Dow’s 19% gain.

As another data point, a recent chess contest between a chess grandmaster (Daniel Naroditsky) working together with an older AI (Rybka, rated ~3050) and the current best chess AI (Stockfish 5, rated 3290) ended with a 3.5 - 0.5 win for Stockfish.

0Larks
I don't think an article which compares a hedge fund's returns to the Dow (a price-weighted index of about 30 stocks!) can be considered very credible. And there are fewer quant funds, managing less money, than there were 7 years ago.
Apteris10

While not exactly investment, consider the case of an AI competing with a human to devise a progressively better high-frequency trading strategy. An AI would probably:

  • be able to bear more things in mind at one time than the human
  • evaluate outcomes faster than the human
  • be able to iterate on its strategies faster than the human

I expect the AI's superior capacity to "drink from the fire hose", together with its faster response time, to yield a higher exponent for the growth function than that resulting from the human's iterative improvement.
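To make the "higher exponent" point concrete, here is a minimal sketch assuming each design iteration multiplies strategy quality by a fixed factor, so whoever completes more iterations per unit time grows with a larger effective exponent. The iteration rates and per-iteration gains below are made-up illustrative numbers, not estimates of any real trading system.

    # Toy model: quality after t days is (1 + r) ** (k * t),
    # where k is iterations per day and r is the gain per iteration.
    # All parameter values are invented for illustration only.

    def quality_after(days, iterations_per_day, gain_per_iteration, start=1.0):
        """Compound the per-iteration improvement once per iteration."""
        return start * (1.0 + gain_per_iteration) ** (days * iterations_per_day)

    human = quality_after(days=30, iterations_per_day=1, gain_per_iteration=0.01)
    ai = quality_after(days=30, iterations_per_day=10, gain_per_iteration=0.01)

    print(f"human after 30 days: {human:.2f}x")  # ~1.35x
    print(f"AI after 30 days:    {ai:.2f}x")     # ~19.8x

With the same per-iteration gain, the only difference is the iteration rate, and the gap between the two curves widens exponentially with time.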

1Lumifer
A more realistic example would be "competing with a human teamed up with a narrow AI".
Apteris00

The effectiveness of learning hyper-heuristics for other problems, i.e. how much better algorithmically-produced algorithms perform than human-produced algorithms, and more pertinently, where the performance differential (if any) is heading.

As an example, Effective learning hyper-heuristics for the course timetabling problem says: "The dynamic scheme statistically outperforms the static counterpart, and produces competitive results when compared to the state-of-the-art, even producing a new best-known solution. Importantly, our study illustrates that ... (read more)
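For readers unfamiliar with the term, the following is a generic sketch of what a learning ("dynamic") selection hyper-heuristic does, on a toy numeric objective rather than course timetabling; it is not the cited paper's method, and every function name and parameter here is invented for illustration.

    import random

    def objective(x):
        """Toy objective to minimise: sum of squares."""
        return sum(v * v for v in x)

    # Low-level heuristics: simple moves that perturb a candidate solution.
    def small_step(x):
        y = list(x); y[random.randrange(len(y))] += random.uniform(-0.1, 0.1); return y

    def large_step(x):
        y = list(x); y[random.randrange(len(y))] += random.uniform(-1.0, 1.0); return y

    def zero_one_coordinate(x):
        y = list(x); y[random.randrange(len(y))] = 0.0; return y

    def learning_hyper_heuristic(x, heuristics, iterations=2000, epsilon=0.1):
        """Epsilon-greedy selection: favour the low-level heuristic whose past
        applications produced the largest average improvement."""
        avg_gain = [0.0] * len(heuristics)
        uses = [0] * len(heuristics)
        best = objective(x)
        for _ in range(iterations):
            if random.random() < epsilon:
                k = random.randrange(len(heuristics))                        # explore
            else:
                k = max(range(len(heuristics)), key=lambda i: avg_gain[i])   # exploit
            candidate = heuristics[k](x)
            gain = best - objective(candidate)
            uses[k] += 1
            avg_gain[k] += (gain - avg_gain[k]) / uses[k]                    # running average
            if gain > 0:                                                     # keep improvements
                x, best = candidate, best - gain
        return x, best

    if __name__ == "__main__":
        start = [random.uniform(-5, 5) for _ in range(10)]
        _, score = learning_hyper_heuristic(start, [small_step, large_step, zero_one_coordinate])
        print(f"objective: {objective(start):.3f} -> {score:.3f}")

The "static" counterpart would pick among the low-level heuristics with fixed probabilities; the "dynamic" scheme adapts the choice as it learns which moves actually help, which is the distinction the quoted abstract is drawing.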

Apteris20

Only problem is cooking. Eats up like 4 hours a week.

This article by Roger Ebert on cooking is, I suspect, highly relevant to your interests. Mine too, as a matter of fact.

Apteris30

For example, consider a system that takes seriously the idea of souls. One might very well decide that all that matters is whether an entity has a soul, completely separate from its apparent intelligence level. Similarly, a sufficiently racist individual might assign no moral weight to people of some specific racial group, regardless of their intelligence.

Right you are. I did not express myself well above. Let me try and restate, just for the record.

Assuming one does not assign equal rights to all autonomous agents (for instance, if we take the position... (read more)

Apteris20

I think it would be difficult to construct an ethical system where you give no consideration to cognitive capacity. Is there a practical reason for said superintelligence to not take into account humans' cognitive capacity? Is there a logical reason for same?

Not to make light of a serious question, but, "Equal rights for bacteria!"? I think not.

Aside: I am puzzled as to the most likely reason Esar's comment was downvoted. Was it perhaps considered insufficiently sophisticated, or implying that its poster was insufficiently well-read, for LW?

0JoshuaZ
This is likely more a problem of insufficient imagination. For example, consider a system that takes seriously the idea of souls. One might very well decide that all that matters is whether an entity has a soul, completely separate from its apparent intelligence level. Similarly, a sufficiently racist individual might assign no moral weight to people of some specific racial group, regardless of their intelligence. The comment was likely downvoted because these issues have been discussed here extensively, and there's the additional problem that I pointed out that it wouldn't even necessarily be in humanity's best interest for the entity to have such an ethical system.
Apteris20

I'm watching this dialogue now, I'm 45 (of 73) minutes in. I'd just like to remark that:

  1. Eliezer is so nice! Just so patient, and calm, and unmindful of others' (ahem) attempts to rile him.
  2. Robert Wright seemed more interested in sparking a fiery argument than in productive discussion. And I'm being polite here. Really, he was rather shrill.

Aside: what is the LW policy on commenting on old threads? All good? Frowned upon?

0thomblake
It's pretty much okay. If there is a recent "Sequence rerun" thread about it in Discussion, then the discussion should happen there instead, but otherwise there are no particular issues.
Apteris60

Indeed it is. But the way you fight "memetic infection" in the real world is to take a look at the bad stuff and see where it goes wrong, not to isolate yourself from harmful ideas.

2Apprentice
Yes. In this metaphor, the guard at the gates takes a look at the bad stuff and decides against letting it into the fortress.
Apteris110

Thankfully for Mr. Pratchett, you can't influence the genetic lottery or the luck fairy, so his is still valid advice. In fact, one could see "trust in yourself" et al. as invitations to "do or do not, there is no try", whereas "work hard, learn hard and don't be lazy" supports the virtue of scholarship as well as that of "know when to give up". Miss Tick is being eminently practical, and "do or do not", while also an important virtue, requires way more explanation before the student can understand it.

8Nisan
Yeah. "Do or do not" / "believe in yourself" should either be administered on a case-by-case basis by a discerning mentor, or packaged with the full instruction manual.
Apteris20

Hello LessWrong,

I've been reading the website for at least the past two years. I like the site, I admire the community, and I figured I should start commenting.

I like to think of myself as a rationalist. LW, along with other sources (Bertrand Russell, Richard Dawkins), has contributed heavily (and positively) to my mental models. Still, I have a lot of work to do.

I like to learn. I like to discuss. I used to like to engage in heated debates, but this seems to have lost some of its appeal recently--either someone is wrong or they aren't, and I prefer to figure out... (read more)