All of loup-vaillant's Comments + Replies

I, on the other hand, love my cello. I also happen to enjoy practice itself. This helps a lot.

I have defeated the hydra! (I had to cut off 670 heads). Feels like playing Diablo.

4NoahTheDuke
670? Lucky. I finally bested it after 1750-ish, yesterday. Once I hit 1000, I thought, "Why am I doing this? What am I proving?" and then I started clicking again.

I took the survey (answered nearly everything).

(7): indentation error. But I guess the interpreter will tell you i is used out of scope. That, or you would have gotten another catastrophic result on numbers below 10.

def is_prime(n):
for i in range(2,n):
    if n%i == 0: return False
return True

(Edit: okay, that was LessWrong screwing up leading spaces. We can cheat around that with non-breaking spaces.)

2gjm
Yes, good point. I'd generalize: "I could have committed some syntax error -- wrong indentation, wrong sort of brackets somewhere, etc. -- that just happens to leave a program with correct syntax but unintended semantics." Depends on exactly what error occurs. (With the formatting the way it actually looks here it would just be a syntax error.) This (aside from the simplicity of the code) is the real reason why I'm still happy with an error probability less than 0.001. It takes a really odd sort of error to produce correct results for the first 100 candidate primes and wrong results thereafter. I could write some programs with that sort of error, but they wouldn't look at all like naive primality checkers with simple typos.
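For concreteness, one plausible reconstruction of the two variants being discussed (my guess, not gjm's original code): the intended checker, and a mis-indentation that stays syntactically valid while being semantically wrong.

    def is_prime(n):                 # intended version
        for i in range(2, n):
            if n % i == 0:
                return False
        return True

    def is_prime_buggy(n):           # "return True" slipped inside the loop
        for i in range(2, n):
            if n % i == 0:
                return False
            return True              # returns after testing only i == 2

    # is_prime_buggy(9) == True (9 % 2 != 0, so it returns at once), and
    # is_prime_buggy(2) == None (empty range, falls off the end) -- the
    # kind of catastrophic result on numbers below 10 mentioned above.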

I don't like your use of the word "probability". Sometimes, you use it to describe subjective probabilities, but sometimes you use it to describe the frequency properties of putting a coin in a given box.

When you say, "The brown box has 45 holes open, so it has probability p=0.45 of returning two coins," you are really saying that, knowing that I have the brown box in front of me and that I put a coin in it, I would assign a 0.45 probability to that coin yielding two coins. And, as far as I know, the coin tosses are all independent: no amount ... (read more)
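A minimal sketch of the distinction (my illustration; the 0.20 payout rate for the rival box is a made-up number): with a known box, every trial is an independent Bernoulli(0.45) draw; with an unknown box, the same observations instead update a subjective posterior over which box it is.

    def posterior_brown(results, p_brown=0.45, p_other=0.20, prior=0.5):
        # P(box is brown | sequence of win/lose results), by Bayes' rule.
        like_brown = like_other = 1.0
        for win in results:
            like_brown *= p_brown if win else 1 - p_brown
            like_other *= p_other if win else 1 - p_other
        return (prior * like_brown /
                (prior * like_brown + (1 - prior) * like_other))

    # Knowing the box, each coin still wins with p = 0.45; not knowing it,
    # a run of wins raises P(brown) even though the boxes never change.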

It just occurred to me that we may be able to avoid the word "intelligence" entirely in the title. I was thinking of Cory Doctorow on the coming war on general computation, where he explains that unwanted behaviour on general-purpose computers is basically impossible to stop. So:

Current computers are fully general hardware. An AI would be fully general software. We could also talk about general purpose computers vs general purpose programs.

The idea is, many people already understand some risks associated with general purpose computers (if only for the... (read more)

Or, "Artificial intelligence as a risk to mankind". (Without the emphasis.)

Good luck finding one that doesn't also bias you into a corner.

Maybe we could explain it by magical risks, and violence. I wouldn't be surprised if wizards killed each other more than muggles do. With old-fashioned manners may come old-fashioned violence. The last two wars (Grindelwald and Voldemort) were awfully close together, and it looks like the next one is coming.

If all times and all countries are the same, with a major conflict every other generation, it could easily explain such a low population.

8Velorien
I think this point merits more extensive discussion. A few observations:

  • Wizards can learn shielding spells fairly freely, whereas the average muggle has no counter to a gun, and little they can do even against melee weapons unless they have sufficient self-defense training.
  • Underage magical violence is restricted by the Trace - it is considerably harder for magically-powered youth gangs to exist within magical Britain if powerful and merciless authorities (cf. Harry's treatment during the Dementor incident) are instantly alerted whenever they cast a spell.
  • While wizard forensics are generally laughable, a simple spell will reveal the last spells cast by a person's wand, and few people have multiple wands (since the things are apparently horribly expensive, among other reasons). This is a significant deterrent to the use of magic for illegal purposes that are likely to draw attention, such as murder. (I assume that it reveals more than the single most recent spell, since that would make it useless against anyone smart enough to cast a quick breath-freshening charm after their misdemeanours.)
  • The last war at least was allegedly marked by most of the population of magical Britain cowering in their homes while a few brave champions fought on their behalf. The Death Eaters, meanwhile, only numbered fifty or so. That doesn't sound like it should result in a high casualty level relative to the total magical population.
  • Wizards are exceptionally resilient, and can survive all manner of injuries that would kill a muggle ten times over (cf. Neville Longbottom). In addition, magical healing is outstanding.

Chapter 78

Thus it had been with some trepidation that Mr. and Mrs. Davis had insisted on an audience with Deputy Headmistress McGonagall. It was hard to muster a proper sense of indignation when you were confronting the same dignified witch who, twelve years and four months earlier, had given both of you two weeks' detention after catching you in the act of conceiving Tracey.

Apparently, contraception isn't always used by 7th-year students. I count that as mild evidence that contraception, magical or otherwise, isn't widespread in the magical world. Method... (read more)

1Desrtopa
If contraception is significantly less widespread among wizards than among muggles, then considering their quality of medical care, their population seems anomalously low.

War. With children.

I fear the consequences if we don't solve this.

Edit: I'm serious:

This was actually intended as a dry run for a later, serious “Solve this or the story ends sadly” puzzle

1undermind
I agree that it's important and has serious consequences, but what is the puzzle?

I don't see Hermione being revived any time soon, both for story reasons and because Harry is unlikely to unravel the secrets of soul magic in mere hours, even with a time loop at his disposal.

More likely, Harry has found a reliable way to suspend her, and that would be the "he has already succeeded" you speak of.

The key part is that some of those formal verification processes involve automated proof generation. This is exactly what Jonah is talking about:

I don't know of any computer programs that have been able to prove theorems outside of the class "very routine and not requiring any ideas," without human assistance (and without being heavily specialized to an individual theorem).

Those who make (semi-)automated proofs for a living have a vested interest in making such tools as useful as possible. Among other things, this means as automated as possible, and as general as possible. They're not there yet, but they're definitely working on it.

The Prover company works on the safety of train signalling software. Basically, they seek to prove that a given program is "safe" according to a number of formal criteria. It involves translating the program into some (boolean-based) standard form, which is then analysed.

The formal criteria are chosen manually, but the proofs are found completely automatically.

Despite the sizeable length of the proofs, combinatorial explosion is generally avoided, because programs written by humans (and therefore their standard form translation) tend to have s... (read more)
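A toy of the "translate to boolean form, then check automatically" approach (entirely my illustration; an industrial tool like Prover's works on vastly larger state spaces with SAT-style solvers): exhaustively checking that a two-way signalling controller never shows green in both directions.

    # Hypothetical two-light controller; the safety criterion is a boolean
    # predicate over its state, checked on every bounded run.
    def step(state):
        ns_green, ew_green = state
        return (not ns_green, ns_green)      # made-up logic: alternate

    def safe(state):
        return not (state[0] and state[1])   # never green both ways

    for init in [(False, False), (True, False), (False, True)]:
        state = init
        for _ in range(8):
            assert safe(state), f"unsafe state reachable from {init}"
            state = step(state)
    print("safety criterion holds on all bounded runs")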

2lukeprog
I don't think this is what Jonah is talking about. This is just one of thousands of examples of formal verification of safety-critical software.

I do not lie to my readers

Eliezer

I think the facts at least are as described. Hermione is certainly lying in a pool of blood, something significant did happen to her (Harry felt the magic), and Dumbledore definitely believes Hermione is dead.

If there is a Time-Turner involved, it won't change those perceptions one bit. And I doubt Dumbledore would try to Mess With Time ever again (as mentioned in the Azkaban arc). Harry might, but he's outside his Time-Turner's authorized range. Even then, it looks like he's thinking longer term than that.

2[anonymous]
Making Hermione a horcrux in the last 6 hours of her life doesn't violate any observed facts.

Recalling a video I have seen (I forget the source), the actual damage wouldn't occur upon hypoxia, but upon re-oxygenation. Lack of oxygen at the cellular level does start a fatal chemical reaction, but the structure of the cells is largely preserved. But when you put oxygen back, everything blows up (or swells up, actually).

Harry may very well have killed Hermione with his oxygen shot. If he froze her before then, it might have worked, but after that… her information might be lost.

One obvious objection: Hermione was still conscious enough to say some last ... (read more)

Wizards have souls: their minds are running on more than just wetware. I am fairly certain of this, because otherwise shape-shifting would be instantly fatal.

Furthermore, a "continuous" function could very well contain a finite amount of information, provided its frequency range is limited. But then, it wouldn't be "actually" continuous.

I just didn't want to complicate things by mentioning Shannon.
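To spell out the Shannon point anyway: by the Nyquist-Shannon sampling theorem, a signal $x(t)$ with no frequency content above $B$ is completely determined by samples taken $1/(2B)$ apart,

$$x(t) = \sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{2B}\right)\,\mathrm{sinc}(2Bt - n),$$

so a band-limited "continuous" function has only countably many degrees of freedom, and with bounded amplitude and finite measurement precision that comes to finitely many bits per unit time.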

I disagree with "not at all", to the extent that the Matrix probably has much less computing power than the universe it runs on. Plus, it could have exploitable bugs.

This is not a question worth asking for us mere mortals, but a wannabe super-intelligence should probably think about it for at least a nanosecond.

3DSherron
Hell, it's definitely worth us thinking about it for at least half a second. Probably a lot more than that. It could have huge implications if we discovered that there was evidence of any kind of powerful agent affecting the world, Matrix-esque or not. Maybe we could get into heaven by praying to it, or maybe it would reward us based on the number of paperclips we created per day. Maybe it wouldn't care about us, maybe it would actively want to cause us pain. Maybe we could use it, maybe it poses an existential risk. All sorts of possible scenarios there, and the only way to tell what actions are appropriate is to examine... the... evidence... oh right. There is none, because in reality we don't live in the Matrix and there isn't any superintelligence out there in our universe. So we file away the thought, with a note that if we ever do run into evidence of such a thing (improbable events with no apparent likely cause) that we should pull it back out and check. But that's not the same as thinking about it. In reality, we don't live in that world, and to the extent that is true then the answer to "what do we do about it" is "exactly what we've always done."

Here's my guess:

  • "Continuous" is a reference to the wave function as described by current laws of physics.
  • Eliezer is an "infinite set atheist", which among other things rules out the possibility of an actually continuous fabric of the universe.
4Baughn
As I've already pointed out to another infinite set atheist, you could get the appearance of a continuous wavefunction without actually requiring infinite computing power to simulate it. All you need to do is make the simulation lazy - add more trailing digits in a just-in-time fashion. Whether or not that counts as complicating the rules for the purpose of Solomonoff induction is... hard to say.
0Vaniver
That would be reasonable, but it's not clear to me what "their own view" about that would look like. My impression is that most physicists see the universe as (at least functionally) continuous, with a few people working on determining upper bounds for how small the discrete spatial elements of the universe could be, and getting results like "well, any cells would be as much smaller than our scale as our scale is from the total size of the observable universe."
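A minimal sketch of Baughn's just-in-time digits idea above (my illustration; the random draw stands in for whatever rule the simulation uses to fix fresh digits):

    import random

    class LazyReal:
        # A "real number" whose digits exist only once observed.
        def __init__(self):
            self._digits = []            # finitely many digits at any time

        def digit(self, k):
            # Materialize digits only when someone actually looks.
            while len(self._digits) <= k:
                self._digits.append(random.randint(0, 9))
            return self._digits[k]

    x = LazyReal()
    x.digit(1000)  # any finite probe sees a "continuous" value, at finite cost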

By the way, why aren't posts written like comments, in Markdown format? Could we consider adding Markdown formatting as an option?

I think I have left a loophole. In your example, Omega is analysing the agent by analysing its outputs on unrelated and, most of all, unspecified problems. I think the end result should only depend on the output of the agent on the problem at hand.

Here's a possibly real-life variation. Instead of simulating the agent, you throw a number of problems at it beforehand, without telling it they will be related to a future problem. Like, throw an exam at a human student (with a real stake at the end, such as grades). Then, later, you submit the student to the follo... (read more)

We have to determine what counts as "unfair". Newcomb's problem looks unfair because your decision seems to change the past. I have seen another Newcomb-like problem that was (I believe) genuinely unfair, because depending on their decision theory, the agents were not in the same epistemic state.

Here is what I think makes a "fair" problem. It's when

  1. the initial epistemic state of the agent is independent of its source code;
  2. given the decisions of the agent, the end result is independent of its source code;
  3. if there are intermediary steps, th
... (read more)
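As a minimal sketch of criteria 1 and 2 (my own illustration, with hypothetical names): the environment may run the agent, but the final payoff is a function of its decisions alone, never of its source code.

    # A "fair" problem in the above sense: payoff depends on decisions only.
    def run_fair_problem(agent, payoff):
        decision = agent("the problem at hand")  # same epistemic start for all
        return payoff(decision)                  # never inspects `agent` itself

    # An unfair problem, by contrast, peeks at the agent's source:
    def run_unfair_problem(agent, payoff):
        import inspect
        return payoff(agent("the problem"), inspect.getsource(agent))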
3solipsist
It still seems to me that you can't have a BestDecisionAgent. Suppose agents are black boxes -- Omegas can simulate agents at will, but not view their source code. An Omega goes around offering agents a choice between:

  • $1, or
  • $100 if the Omega thinks the agent acts differently than BestDecisionAgent in a simulated rationality test, otherwise $2 if the agent acts like BestDecisionAgent in the rationality test.

Does this test meet your criteria for a fair test? If not, why not?

I think it is possible to prove that a given boxing works, if it's sufficiently simple. Choosing the language isn't enough, but choosing the interpreter should be.

Take Brainfuck for instance: replace the dot ('.'), which prints a character, with two other statements: one that prints "yes" and exits, and one that prints "no" and exits. If the interpreter has no bug, a program can only:

  • Print "yes" and kill itself.
  • Print "no" and kill itself.
  • Do nothing until we kill it, or otherwise fail.

Assuming the AI doesn't cont... (read more)
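A minimal interpreter sketch for that boxed dialect (my illustration; the tape size and step budget are arbitrary, and I drop the ',' input command as well):

    def run_boxed(program, tape_len=30000, max_steps=10**6):
        # '.' is gone; 'y' prints "yes" and halts, 'n' prints "no" and
        # halts. There is no other output channel.
        tape, ptr, pc = [0] * tape_len, 0, 0
        stack, match = [], {}
        for i, c in enumerate(program):      # precompute matching brackets
            if c == '[':
                stack.append(i)
            elif c == ']':
                j = stack.pop()
                match[i], match[j] = j, i
        for _ in range(max_steps):
            if pc >= len(program):
                break
            c = program[pc]
            if c == '>': ptr = (ptr + 1) % tape_len
            elif c == '<': ptr = (ptr - 1) % tape_len
            elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
            elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
            elif c == '[' and tape[ptr] == 0: pc = match[pc]
            elif c == ']' and tape[ptr] != 0: pc = match[pc]
            elif c == 'y': return "yes"
            elif c == 'n': return "no"
            pc += 1
        return None   # did nothing observable before we killed it

The three bullet-point outcomes above are then the only behaviours a program can exhibit, up to bugs in the interpreter itself.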

It's the whole thread. I was not sure where to place my comment. The connection is, the network may not be the only source of "cheating". My solutions plug them all in one fell swoop.

0bogdanb
Oh, OK. In that case, what you are trying to achieve is (theoretically) boxing a (potential) AGI, without a gatekeeper. Which is kind of overkill in this case, and wouldn’t be solved with a choice of language anyway :)

Well, I just thought about it for 2 seconds. I tend to be a purist: if it were me, I would start from pure call-by-need λ-calculus, and limit the number of β-reductions instead of the number of seconds. Cooperation and defection would be represented by Church booleans. From there, I could extend the language (explicit bindings, fast arithmetic…) and provide a standard library, including some functions specific to this contest.

Or, I would start from the smallest possible subset of Scheme that can implement a meta-circular evaluator. It may be easier to examine... (read more)
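A sketch of the Church-boolean encoding for cooperate/defect (illustration only, in Python rather than raw λ-calculus):

    TRUE  = lambda t: lambda f: t     # \t f. t  -- "cooperate"
    FALSE = lambda t: lambda f: f     # \t f. f  -- "defect"

    def to_native(b):
        return b(True)(False)         # decode a Church boolean for scoring

    # A judge would then run each entry under a budget of beta-reductions
    # rather than seconds, making resource limits deterministic.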

0bogdanb
I’m sorry, how is that relevant to my no-network-adapter comment? (I mean this literally, not rhetorically. I don’t see the connection. Did you mean to reply to a different comment?)

Okay, it's not. But I'm sure there's a way to circumvent the spirit of your rule while still abiding by the letter. What about network I/O, for instance? As in, download some code from some remote location and execute it? Or even worse, run your code in the remote location, where you can enjoy superior computing power?

0bogdanb
Yes, but then you’re in a very bad position if the test is run without network access. (I.e., you’re allowed to use the network, but there’s no network adapter.)
1AlexMennen
Good point. File IO was too specific.

More generally, the set of legal programs doesn't seem clearly defined. If it were me, I would be tempted to accept only externally pure functions, and to precisely define which parts of the standard library are allowed. Then I would enforce this rule by modifying the global environment so that any disallowed behaviour results in an exception being thrown, yielding an "other" result.

But it's not me. So, what exactly will be allowed?
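For what it's worth, a sketch of that "poison the environment" enforcement (my illustration, in Python rather than the contest's Scheme; real Python sandboxes of this kind are famously escapable, so this shows the shape of the rule, not a secure implementation):

    import builtins

    def make_sandbox_globals(whitelist):
        # Replace every builtin with a tripwire, then restore the whitelist.
        def tripwire(name):
            def _raise(*args, **kwargs):
                raise RuntimeError(f"disallowed behaviour: {name}")
            return _raise
        poisoned = {name: tripwire(name) for name in dir(builtins)}
        poisoned.update(whitelist)
        return {"__builtins__": poisoned}

    def run_entry(source, whitelist):
        env = make_sandbox_globals(whitelist)
        try:
            exec(source, env)
            return env["play"]()      # hypothetical contest entry point
        except Exception:
            return "other"            # disallowed behaviour scores "other"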

darius110

If you'd rather run with a very small and well-defined Scheme dialect meant just for this problem, see my reply to Eliezer proposing this kind of tournament. I made up a restricted language since Racket's zillion features would get in the way of interesting source-code analyses. Maybe they'll make the game more interesting in other ways?

2AlexMennen
I haven't forbidden use of any library functions except for file IO. I'm not confident this is optimal, but how is it underspecified?

Hmm, leaving everything and everyone behind, and a general feeling of uncertainty: what will life be like? Will I find a job? Will I enjoy my job (super-important)? How will this affect my relationship with my SO? Less critically, should I bring my cello, or should I buy another one? What about the rest of my stuff?

We're not talking about moving a couple hundred miles here. I've done that for a year, and I could see my family every 3 weekends, and my SO twice as often. Living in Toulouse, France, I could even push to England if I had a good opportunity. But to go... (read more)

1John_Maxwell
Relevant HN thread. Both the SF startups I've worked for have/had free meals, flexible work hours, on-premise fun like climbing walls, table tennis, foosball, etc., egalitarian laid-back work environments, and so on.

In terms of technology you're working with, I'd guess that you're probably more likely to work with something relatively newer and sexier like Hadoop, Ruby on Rails, or node.js here in SF than something like Java. I don't know what you work with in France. In terms of whether the work is interesting... well, that depends on the startup.

That's a tougher one... supposedly the dating scene is relatively bad for men in SF, but I only just moved here so I don't have much firsthand experience. I don't know what your SO's visa options would be. I assume she's not a programmer? If she is, maybe she could apply for a visa too? I don't know how you guys feel about gaming the US visa system by getting married?

Figure out how much it's worth to you and how long you'd have to work here in order to buy equivalents for all of it or things that made you equivalently happy with your extra salary? Do you have any interest in effective altruism?

Well, you can certainly postpone it until we learn what kind of immigration reform, if any, passes. Even then, I think it would only start to take effect at the start of 2014 (but I really have no clue).

(Yep, I'm loup-vaillant on HN too)

Thank you, I'll think about it. Though for now, seriously considering moving to the US tends to trigger my Ugh shields. I'm quite scared.

0John_Maxwell
Don't feel bad; according to my models, that's how most people would react. (I've tried to train myself out of this sort of reaction, with some success, mainly because I used to be really interested in starting companies, which requires this sort of audacious determination.) You don't have to make a decision now. If I were you, I'd just let it be an option in the back of your mind for the time being, until you get comfortable enough to think calmly about it.
0NancyLebovitz
Scared of?

Ah. I guess I stand corrected, then.

My guess is, they don't make so little:

First, many EU citizens tend to assume $1 is 1€ as a first approximation, while currently it's more like $1.30 for 1€; Cthulhoo may have made this approximation. Second, lower salaries may be compensated by a stronger welfare system (public unemployment insurance, public health insurance, public retirement plan…). This one is pretty big: in France, these cost over 40% of what your employer pays. Third, major cost centres such as housing may be cheaper (I wouldn't count on that one, though).

To take an example, I live... (read more)

3Viliam_Bur
This can be true on the level of society, but on the level of the individual, the lower salaries for professions like programming are compensated by a stronger welfare system for everyone.
0Cthulhoo
Just to clarify: I did adjust euros to dollars in my estimation. To be more precise, I work in what is mainly a software company (though I'm not myself a programmer), and the standard net salary here is 19K€ per year, which makes roughly 25K$ per year. Now, of course, if you're really good you can climb the ladder, and there are possible bonuses if you reach outstanding results, but this requires more than the "teach yourself programming" level. From what I know, this is pretty much the standard, and a quick google search gives some confirmation of my numbers on this page: http://www.worldsalaries.org/italy.shtml. It should be noted, though, that all salaries are rescaled roughly in the same way, and the cost of living is lower, so you might need to adjust your usual perspective.
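(For the record, the conversion above works out as 19 k€ × 1.30 $/€ ≈ 24.7 k$, hence "roughly 25K$".)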

Given that your name looks familiar from Hacker News and your website suggests you like programming for its own sake, you should consider coming to Silicon Valley after the US congress finishes loosening up immigration restrictions for foreign STEM workers (which seems like it will probably happen). In the San Francisco area, $100K + stock is typical for entry-level people and good programmers in general are famously difficult to hire. Also, lots of LW peeps live here. My housemates and I ought to have a couch you can crash on while you look for a job. ... (read more)

4Douglas_Knight
The rule of thumb in the US is that the cost to the employer is twice the nominal salary, exactly what you said for France. Instead of paying so much tax, they pay for health insurance, which is probably what JamesF meant by "with benefits." In some global sense, health care is twice as expensive in the US as in France.

MIRI's stated goal is more meta:

The Machine Intelligence Research Institute exists to ensure that the creation of smarter-than-human intelligence benefits society.

They are well aware of the dangers of creating a uFAI, and you can be certain they will be really careful before they push a button that has the slightest chance of launching the ultimate ending (good or bad). Even then, they may very well decide that "being really careful" is not enough.

Are there other organizations attempting to develop AIs to control the world?

It probably doesn... (read more)

If I may list some differences I perceive between AMF and MIRI:

  • AMF's impact is quite certain. MIRI's impact feels more like a long shot —or even a pipe dream.
  • AMF's impact is sizeable. MIRI's potential impact is astronomic.
  • AMF's impact is immediate. MIRI's impact is long term only.
  • AMF has photos of children. MIRI has science fiction.
  • In mainstream circles, donating to AMF gives you pats on the back, while donating to MIRI gives you funny looks.

Near-mode thinking will most likely direct one to AMF. MIRI probably requires one to shut up and multipl... (read more)

1elharo
One more difference: AMF's impact is very likely to be net positive for the world under all reasonable hypotheses. MIRI appears to me to have a chance to be massively net negative for humanity. I.e. if AI of the level they predict is actually possible, MIRI might end up creating or assisting in the creation of UFAI that would not otherwise be created, or perhaps not created as soon.

They're going to escape.

Education fighting an old existential risk: kids out of the box.

Good point.

I can think of two possible workarounds: they can still have fun among themselves, or they can teach their partner whenever they engage in a long-term relationship.

It does seem to have some effect on the performers' private lives, however. Here is a question from Matt Williams, answered by Courtney Taylor:

"You find it hard now, having sex with civilians¹?"

"Oh yeah, absolutely."

[1] From the rest of the interview, I gathered that "civilian" was a bit derogatory.

Just to say that doing porn may tend to raise one's expectations. Sure, they optimise for the viewer, but I'd be surprised if they didn't try to have fun along the way, just like actors in mainstream films. I'd be surprised to le... (read more)

4Desrtopa
I don't know, if a porn star finds sex with non-porn stars unsatisfying, then I'd think they wouldn't be a very good sex partner for people who prefer enthusiastic, well-satisfied partners.

Gasp. I definitely didn't read it that way. Observing the sky sounded like science, and the logical puzzles sounded like math. Plus, it was already useful at the time: it helped keep track of time, predict seasons…

8NancyLebovitz
There's a bit in CS Lewis about modern people thinking of astrology and alchemy as the same sort of thing, but when they were current, astrology was a way of asserting an orderly universe while alchemy was asserting human power to make things very different.
0katydee
Quite so-- and less obvious applications are evidenced by the example of Thales.

Okay, let's try to defeat Omega. The goal is to do better than Eliezer Yudkowsky, who seems trustworthy about doing what he publicly says all over the place. Omega will definitely predict that Eliezer will one-box, and Eliezer will get the million.

The only way to do better is to two-box while making Omega believe that we will one-box, so we can get the $1,001,000 with more than 99.9% certainty. And of course,

  1. Omega has access to our brain schematics
  2. We don't have access to Omega's schematics. (optional)
  3. Omega has way more processing power than we d
... (read more)

Edit: this post is mostly a duplicate of this one

I would guess that those particular fields look more interesting when you start from the wrong assumptions. I mean, it's much less interesting to talk about God when you accept there is none. Or to talk about metaphysics when you accept that the answer will most likely come from physics. (I don't know about morality.)

Nevertheless, an above-average post is still evidence for an above-average poster. It's also her first post. She might very well "get better" in the future, as she put it.

Sure, I wouldn't count on it, but we still have a good reason to look forward to reading her future posts.

I agree with your first point, though it gets worse for us as hardware gets cheaper and cheaper.

I like your second point even more: it's actionable. We could work on the security of personal computers.

That last one is incorrect, however. The AI only has to access its object code in order to copy itself. That's something even current computer viruses can do. And we're back to boxing it.

2Broolucks
If the AI is a learning system such as a neural network, and I believe that's quite likely to be the case, there is no source/object dichotomy at all, and the code may very well be unreadable outside of simple local update procedures that are completely out of the AI's control. In other words, it might be physically impossible for both the AI and ourselves to access the AI's object code -- it would be locked in a hardware box with no physical wires to probe its contents, basically.

I mean, think of a physical hardware circuit implementing a kind of neuron network -- in order for the network to be "copiable", you need to be able to read the values of all neurons. However, that requires a global clock (to ensure synchronization, though AI might tolerate being a bit out of phase) and a large number of extra wires connecting each component to busses going out of the system. Of course, all that extra fluff inflates the cost of the system, makes it bigger, slower and probably less energy efficient. Since the first human-level AI won't just come out of nowhere, it will probably use off-the-shelf digital neural components, and for cost and speed reasons, these components might not actually offer any way to copy their contents.

This being said, even if the AI runs on conventional hardware, locking it out of its own object code isn't exactly rocket science. The specification of some programming languages already guarantees that this cannot happen, and type/proof theory is an active research field that may very well be able to prove the conformance of implementation to specification. If the AI is a neural network emulated on conventional hardware, the risks that it can read itself without permission are basically zilch.

I think you miss the part where the team of millions continues its self-copying until it eats up all available computing power. If there's any significant computing overhang, the AI could easily seize control of way more computing power than all the human brains put together.

Also, I think you underestimate the "highly coordinated" part. Any copy of the AI will likely share the exact same goals, and the exact same beliefs. Its instances will have common knowledge of this fact. This would create an unprecedented level of trust. (The only possibl... (read more)

1Broolucks
There are a lot of "ifs", though.

  • If that AI runs on expensive or specialized hardware, it can't necessarily expand much. For instance, if it runs on hardware worth millions of dollars, it can't exactly copy itself just anywhere yet. Assuming that the first AI of that level will be cutting edge research and won't be cheap, that gives a certain time window to study it safely.
  • The AI may be dangerous if it appeared now, but if it appears in, say, fifty years, then it will have to deal with the state of the art fifty years from now. Expanding without getting caught might be considerably more difficult then than it is now -- weak AI will be all over the place, for one.
  • Last, but not least, the AI must have access to its own source code in order to copy it. That's far from a given, especially if it's a neural architecture. A human-level AI would not know how it works any more than we know how we work, so if it has no read access to itself or no way to probe its own circuitry, it won't be able to copy itself at all. I doubt the first AI would actually have fine-grained access to its own inner workings, and I doubt it would have anywhere close to the amount of resources required to reverse engineer itself. Of course, that point is moot if some fool does give it access...
4Viliam_Bur
When I imagine that I could make my copy which would be identical to me, sharing my goals, able to copy its experiences back to me, and willing to die for me (something like Naruto's clones), taking over the society seems rather easy. (Assuming that no one else has this ability, and no one suspects me of having it. In real life it would probably help if all the clones looked different, but had an ability to recognize each other.)

Research: For each interesting topic I could make a dozen clones which would study the topic in libraries and universities, and discuss their findings with each other. I don't suppose it would make me an expert on everything, but I could get at least all the university-level education on most things.

Resources: If I can make more money than I spend, and if I don't make too many copies to imbalance the economy, I can let a few dozen clones work and produce the money for the rest of them. At least in the starting phase, until my research groups discover better ways to make money.

Contacts: Different clones could go to different places making useful contacts with different kinds of people. Sometimes you find a person who can help your goals significantly. With many clones I could make contacts in many different social groups, and overcome language or religious barriers (I can have a few clones learn the language or join the religion).

Multiple "first impressions": If I need the help of a given person or organization, I could in many cases gain their trust by sending multiple different clones to them, using different strategies to befriend them, until I find one that works.

Taking over democratic organizations: Any organization with low barriers to entry and democratic voting can be taken over by sending enough clones there, and then voting some of the clones in as new leaders. A typical non-governmental organization or even a smaller political party could be gained this way. I don't even need a majority of clones there: two potential leaders compet

At first. If the "100 slaves" AI ever gets out of the box, you can multiply the initial number by the amount of hardware it can copy itself to. It can hack computers, earn (or steal) money, buy hardware…

And suddenly we're talking about a highly coordinated team of millions.

-1V_V
That's the plot of the Terminator movies, but it doesn't seem to be a likely scenario. During their regime, the Nazis locked up, used as slave labor, and eventually killed millions of people. Most of them were Ashkenazi Jews, perhaps the smartest of all ethnic groups, with a language difficult for outsiders to comprehend, living in close-knit communities with transnational range, and strong inter-community ties. Did they get "out of the box" and take over the Third Reich? Nope. AIs might have some advantages for being digital, but also disadvantages.

If you were to speed up a chicken brain by a factor of 10,000 you wouldn't get a super-human intelligence.

Sure, but if we assume we manage to have a human-level AI, how powerful should we expect it to be if we speed that up by a factor of 10, 100, or more?

Personally, I'm pretty sure such a thing is still powerful enough to take over the world (assuming it is the only such AI), and in any case dangerous enough to lock us all into a future we really don't want.

At that point, I don't really care if it's "superhuman" or not.

2TitaniumDragon
It won't be any smarter at all actually; it will just have more relative time. Basically, if you take someone and give them 100 days to do something, they will have 100 times as much time as if it had to be done in 1 day, but if the task is beyond their capabilities, it will remain beyond their capabilities, and running at 100x speed is only helpful for projects for which mental time is the major factor - if you have to run experiments and wait for results, all you're really doing is decreasing the lag time between experiments, and even then only potentially. It's not even as good as having 100 slaves work on a project (as someone else posited), because you're really just having ONE slave work on the project for 100 days; copying them 100 times likely won't help that issue.

This is one of the fundamental problems with the idea of the singularity in the first place; the truth is that designing more intelligent intelligences is probably HARDER than designing simpler ones, possibly by orders of magnitude, and it may not be scalar at all. If you look at rodent brains and human brains, there are numerous differences between them - scaling up a rodent brain to the same EQ as a human brain would NOT give you something as smart as a human, or even sapient. You are very likely to see declining returns, not accelerating returns, which is exactly what we see in all other fields of technology - the higher you get, the harder it is to go further.

Moreover, it isn't even clear what a "superhuman" intelligence even means. We don't even have any way of measuring intelligence absolutely that I am aware of - IQ is a statistical measure, as are standardized tests. We can't say that human A is twice as smart as human B, and without such metrication it may be difficult to determine just how much smarter anything is than a human in the first place. If four geniuses can work together and get the same result as a computer which takes 1000 times as much energy to do the same task,
0V_V
As powerful as a team of 10 or 100 human slaves, or a little more, but within the same order of magnitude. 100 slaves are not going to take over the world.

Nevertheless, the lack of exposure to such attractors is quite relevant: if there were any, you'd expect some scientist to have encountered one.

0Eugine_Nier
Why would one expect scientists to have encountered such attractors before, even if they exist? As far as I know there hasn't been much effort to systematically search for them, and even if there has been some effort in that direction, Eliezer didn't cite any.

Easy explanation for the Ellsberg Paradox: we humans treat the urn as if it were subject to two kinds of uncertainty.

  • The first kind is which ball I will actually draw. It feels "truly random".
  • The second kind is how many red (and blue) balls there actually are. This one is not truly random.

Somehow, we prefer to choose the "truly random" option. I think I can sense why: when it's "truly random", I know no potentially hostile agent messed with me. I mean, I could choose "red" in situation A, but then the organi... (read more)

0linas
Yes, exactly, and in our modern marketing-driven culture one almost expects to be gamed by salesmen or sneaky game-show hosts. In this culture, it's a prudent, even 'rational' response.
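To make the hostile-agent reading concrete with the classic Ellsberg urn (30 red balls, 60 split between black and yellow in unknown proportion), a minimal sketch of the minimax logic:

    # Betting on red wins with a known frequency; betting on black wins
    # with a frequency the organiser may have chosen adversarially.
    def p_win_red():
        return 30 / 90                 # "truly random": known frequency

    def p_win_black(n_black):
        return n_black / 90            # second-kind uncertainty

    worst_black = min(p_win_black(n) for n in range(0, 61))  # organiser picks 0
    assert p_win_red() > worst_black   # against a hostile filler, prefer red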

Explaining complexity through God suffers from various questions

Whose answers tend to just be "Poof Magic". While I do have a problem with "Poof Magic", I can't explain it away without quite deep scientific arguments. And "Poof Magic", while unsatisfactory to any properly curious mind, has no complexity problem.

Now that I think of it, I may have to qualify the argument I made above. I didn't know about Hume, so maybe the God Hypothesis wasn't so good even before Newton and Darwin after all. At least assuming the background... (read more)
