I have defeated the hydra! (I had to cut off 670 heads). Feels like playing Diablo.
But when you think about it, if you assume the centaur Firenze wasn't dead, Imperius is probably not the best option anyway.
I took the survey (answered nearly everything).
(7): indentation error. But I guess the interpreter will tell you i is used out of scope. That, or you would have gotten another catastrophic result on numbers below 10.
def is_prime(n):
    # Trial division: n is prime iff no i in [2, n) divides it.
    for i in range(2, n):
        if n % i == 0: return False
    return True
(Edit: okay, that was LessWrong screwing up leading spaces. We can work around that with non-breaking spaces.)
I don't like your use of the word "probability". Sometimes, you use it to describe subjective probabilities, but sometimes you use it to describe the frequency properties of putting a coin in a given box.
When you say, "The brown box has 45 holes open, so it has probability p=0.45 of returning two coins," you are really saying that, knowing I have the brown box in front of me and I put a coin in it, I would assign a probability of 0.45 to that coin yielding two coins. And, as far as I know, the coin tosses are all independent: no amount ...
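To make the distinction concrete, here's a toy simulation in Python (my own sketch: I'm assuming 100 holes total, and that the coin is simply lost when it doesn't double):

import random

def brown_box():
    # A frequency property of the box itself: 45 of its 100 holes are open,
    # so a coin comes back doubled with long-run frequency 0.45.
    return 2 if random.randrange(100) < 45 else 0

# My credence of 0.45 coincides with that frequency only because
# I know which box is in front of me.
trials = 100_000
print(sum(brown_box() for _ in range(trials)) / trials)   # ~0.9 coins back per coin in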
It just occurred to me that we may be able to avoid the word "intelligence" entirely in the title. I was thinking of Cory Doctorow's talk on the coming war on general computation, where he explains that unwanted behaviour on general-purpose computers is basically impossible to stop. So:
Current computers are fully general hardware. An AI would be fully general software. We could also talk about general-purpose computers vs general-purpose programs.
The idea is, many people already understand some risks associated with general-purpose computers (if only for the...
Or, "Artificial intelligence as a risk to mankind". (Without the emphasis.)
Good luck finding one that doesn't also bias you into a corner.
Maybe we could explain it by magical risks, and violence. I wouldn't be surprised if wizards killed each other more often than muggles do. With old-fashioned manners may come old-fashioned violence. The last two wars (Grindelwald and Voldemort) were awfully close together, and it looks like the next one is coming.
If all times and all countries are the same, with a major conflict every other generation, it could easily explain such a low population.
Thus it had been with some trepidation that Mr. and Mrs. Davis had insisted on an audience with Deputy Headmistress McGonagall. It was hard to muster a proper sense of indignation when you were confronting the same dignified witch who, twelve years and four months earlier, had given both of you two weeks' detention after catching you in the act of conceiving Tracey.
Apparently, contraception isn't always used by 7th-year students. I count that as mild evidence that contraception, magical or otherwise, isn't widespread in the magical world. Method...
War. With children.
I fear the consequences if we don't solve this.
Edit: I'm serious:
This was actually intended as a dry run for a later, serious “Solve this or the story ends sadly” puzzle
I don't see Hermione being revived any time soon, for both story reasons and because Harry is unlikely to unravel the secrets of soul magic in mere hours, even with a time loop at his disposal.
More likely, Harry has found a reliable way to suspend her, and that would be the "he has already succeeded" you speak of.
The key part is that some of those formal verification processes involve automated proof generation. This is exactly what Jonah is talking about:
I don't know of any computer programs that have been able to prove theorems outside of the class "very routine and not requiring any ideas," without human assistance (and without being heavily specialized to an individual theorem).
Those who build (semi-)automated proof tools for a living have a vested interest in making them as useful as possible. Among other things, this means as automated as possible, and as general as possible. They're not there yet, but they're definitely working on it.
The Prover company is working on the safety of train signalling software. Basically, they seek to prove that a given program is "safe" along a number of formal criteria. This involves translating the program into some (boolean-based) standard form, which is then analysed.
The formal criteria are chosen manually, but the proofs are found completely automatically.
Despite the sizeable length of the proofs, combinatorial explosion is generally avoided, because programs written by humans (and therefore their standard form translation) tend to have s...
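To make that concrete, here's a toy version of the approach in Python (my own illustration, not Prover's actual toolchain): encode the program's state as booleans, pick the safety criterion by hand, and let the machine check it exhaustively:

from itertools import product

# Toy signalling controller: state = (a_green, b_green, request).
def step(a_green, b_green, request):
    # Logic under verification: grant a request on track A,
    # forcing track B to red at the same time.
    if request:
        return (True, False, False)
    return (False, b_green, False)

def safe(a_green, b_green, request):
    # The manually chosen formal criterion: never both signals green.
    return not (a_green and b_green)

# The proof itself is found automatically: no safe state may step into
# an unsafe one. Structured, human-written programs keep this tractable.
violations = [s for s in product([False, True], repeat=3)
              if safe(*s) and not safe(*step(*s))]
print("criterion holds" if not violations else violations)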
I do not lie to my readers
Eliezer
I think the facts at least are as described. Hermione is certainly lying in a pool of blood, something significant did happen to her (Harry felt the magic), and Dumbledore definitely believes Hermione is dead.
If there is a Time-Turner involved, it won't change those perceptions one bit. And I doubt Dumbledore would try to Mess With Time ever again (as mentioned in the Azkaban arc). Harry might, but he's out of his Time-Turner's authorized range. Even then, it looks like he's thinking longer term than that.
Recalling a video I have seen (I forget the source), the actual damage wouldn't occur upon hypoxia, but upon re-oxygenation. Lack of oxygen at the cellular level does start a fatal chemical reaction, but the structure of the cells is largely preserved. It's when you put oxygen back that everything blows up (or swells up, actually).
Harry may very well have killed Hermione with his oxygen shot. If he had frozen her before then, it might have worked, but after that… her information might be lost.
One obvious objection: Hermione was still conscious enough to say some last ...
Wizards have souls: their minds run on more than just wetware. I am fairly certain of this, because otherwise shape-shifting would be instantly fatal.
Furthermore, a "continuous" function could very well contain a finite amount of information, provided its frequency range is limited. But then, it wouldn't be "actually" continuous.
I just didn't want to complicate things by mentioning Shannon.
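For reference, the result I was dancing around is the Nyquist–Shannon sampling theorem: a function whose spectrum lies entirely below some frequency B is fully determined by samples taken every 1/(2B):

f(t) \;=\; \sum_{n=-\infty}^{\infty} f\!\left(\frac{n}{2B}\right)\,\operatorname{sinc}(2Bt - n)

So over a duration T, roughly 2BT numbers pin the function down; the remaining infinity hides in the precision of each sample, which is exactly where Shannon's noise results would come in.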
I disagree with "not at all", to the extent that the Matrix probably has much less computing power than the universe it runs on. Plus, it could have exploitable bugs.
This is not a question worth asking for us mere mortals, but a wannabe super-intelligence should probably think about it for at least a nanosecond.
Here's my guess:
By the way, why aren't posts written like comments, in Markdown format? Could we consider adding Markdown formatting as an option?
I think I have left a loophole. In your example, Omega is analysing the agent by analysing its outputs on unrelated and, most of all, unspecified problems. I think the end result should only depend on the output of the agent on the problem at hand.
Here's a possibly real-life variation. Instead of simulating the agent, you throw a number of problems at it beforehand, without telling it they will be related to a future problem. Like throwing an exam at a human student (with a real stake at the end, such as grades). Then, later, you submit the student to the follo...
We have to determine what counts as "unfair". Newcomb's problem looks unfair because your decision seems to change the past. I have seen another Newcomb-like problem that was (I believe) genuinely unfair, because depending on their decision theory, the agents were not in the same epistemic state.
Here's what I think makes a "fair" problem. It's when
I think it is possible to prove that a given boxing works, if it's sufficiently simple. Choosing the language isn't enough, but choosing the interpreter should be.
Take Brainfuck for instance: replace the dot ('.'), which prints a character, by two other statements: one that prints "yes" and exits, and one that prints "no" and exits. If the interpreter has no bug, a program can only:
Assuming the AI doesn't cont...
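A minimal sketch of that modified interpreter (Python as the host language; 'y' and 'n' are my stand-ins for the two new statements, and the step budget is an extra safeguard of my own):

def run(program, fuel=10**6):
    # Brainfuck minus all I/O ('.' and ',' are gone), plus two verdict
    # statements. A wrapping tape and a step budget close the remaining exits.
    tape, ptr, pc = [0] * 30000, 0, 0
    jump = match_brackets(program)
    while pc < len(program) and fuel > 0:
        op = program[pc]
        if   op == '>': ptr = (ptr + 1) % len(tape)
        elif op == '<': ptr = (ptr - 1) % len(tape)
        elif op == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif op == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif op == '[' and tape[ptr] == 0: pc = jump[pc]
        elif op == ']' and tape[ptr] != 0: pc = jump[pc]
        elif op == 'y': return "yes"
        elif op == 'n': return "no"
        pc, fuel = pc + 1, fuel - 1
    return "no answer"   # fell off the end, or ran out of steps

def match_brackets(program):
    # Pair each '[' with its ']' so jumps are O(1) at run time.
    jump, stack = {}, []
    for i, op in enumerate(program):
        if op == '[': stack.append(i)
        elif op == ']': jump[i] = stack.pop(); jump[jump[i]] = i
    return jump

print(run("+++y"))   # yes

If the interpreter has no bug, the program's observable behaviour really is limited to those three outcomes.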
It's the whole thread. I was not sure where to place my comment. The connection is, the network may not be the only source of "cheating". My solutions plug them all in one fell swoop.
Well, I just thought about it for 2 seconds. I tend to be a purist: if it were me, I would start from pure call-by-need λ-calculus, and limit the number of β-reductions instead of the number of seconds. Cooperation and defection would be represented by Church booleans. From there, I could extend the language (explicit bindings, fast arithmetic…) and provide a standard library, including some functions specific to this contest.
Or, I would start from the smallest possible subset of Scheme that can implement a meta-circular evaluator. It may be easier to examine...
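A sketch of the λ-calculus idea, with Python as the meta-language (the term encoding, and treating fuel exhaustion as a disqualification, are my own choices):

# Terms: ('var', x) | ('lam', x, body) | ('app', f, arg)
TRUE  = ('lam', 'a', ('lam', 'b', ('var', 'a')))   # Church true:  cooperate
FALSE = ('lam', 'a', ('lam', 'b', ('var', 'b')))   # Church false: defect

class OutOfFuel(Exception):
    pass

def subst(term, x, value):
    # Capture-avoiding only if bound names are unique: a sketch-level shortcut.
    kind = term[0]
    if kind == 'var':
        return value if term[1] == x else term
    if kind == 'lam':
        return term if term[1] == x else ('lam', term[1], subst(term[2], x, value))
    return ('app', subst(term[1], x, value), subst(term[2], x, value))

def whnf(term, fuel):
    # Normal-order reduction to weak head normal form,
    # charging one unit of fuel per beta-reduction.
    while term[0] == 'app':
        fun = whnf(term[1], fuel)
        if fun[0] != 'lam':
            return ('app', fun, term[2])
        if fuel[0] <= 0:
            raise OutOfFuel   # over budget: score it as a non-answer
        fuel[0] -= 1
        term = subst(fun[2], fun[1], term[2])
    return term

cooperate_bot = ('lam', 'opponent', TRUE)   # ignores its opponent entirely
print(whnf(('app', cooperate_bot, FALSE), fuel=[1000]) == TRUE)   # True

Counting β-reductions rather than seconds makes resource limits deterministic and machine-independent, which is the point of going purist.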
Okay, it's not. But I'm sure there's a way to circumvent the spirit of your rule while still abiding by the letter. What about network I/O, for instance? As in, download some code from a remote location and execute it? Or, even worse, run your code in the remote location, where you can enjoy superior computing power?
More generally, the set of legal programs doesn't seem clearly defined. If it were me, I would be tempted to accept only externally pure functions, and to define precisely which parts of the standard library are allowed. Then I would enforce this rule by modifying the global environment so that any disallowed behaviour throws an exception, which counts as an "other" result.
But it's not me. So, what exactly will be allowed?
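To illustrate what I mean by modifying the environment, a Python sketch (the names are made up, and Python's exec is famously not a real sandbox; this only shows the interface I have in mind):

ALLOWED = {'abs': abs, 'len': len, 'range': range}   # the permitted "standard library"

def run_strategy(source, opponent_source):
    # Strip the builtins, keep the whitelist: any disallowed behaviour
    # (imports, files, network...) raises, and the entry scores "other".
    env = {'__builtins__': {}, **ALLOWED}
    try:
        exec(source, env)                        # must define strategy(opponent)
        return env['strategy'](opponent_source)  # expected to return 'C' or 'D'
    except Exception:
        return 'other'

print(run_strategy("def strategy(opp): return 'C'", ""))             # C
print(run_strategy("def strategy(opp): import os; return 'C'", ""))  # other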
If you'd rather run with a very small and well-defined Scheme dialect meant just for this problem, see my reply to Eliezer proposing this kind of tournament. I made up a restricted language since Racket's zillion features would get in the way of interesting source-code analyses. Maybe they'll make the game more interesting in other ways?
Hmm, leaving everything and everyone behind, and a general feeling of uncertainty: what will life be like? Will I find a job? Will I enjoy my job (super-important)? How will this affect my relationship with my SO? Less critically, should I bring my cello, or should I buy another one? What about the rest of my stuff?
We're not talking about moving a couple hundred miles here. I've done that for a year, and I could see my family every three weekends, and my SO twice as often. Living in Toulouse, France, I could even push to England if I had a good opportunity. But to go...
(Yep, I'm loup-vaillant on HN too)
Thank you, I'll think about it. Though for now, seriously considering moving to the US tends to trigger my Ugh shields. I'm quite scared.
Ah. I guess I stand corrected, then.
My guess is, they don't make as little as it looks:
First, many EU citizens tend to assume $1 is €1 as a first approximation, while currently it's more like $1.30 for €1. Cthulhoo may have made this approximation. Second, lower salaries may be compensated by a stronger welfare system (public unemployment insurance, public health insurance, public retirement plan…). This one is pretty big: in France, these cost over 40% of what your employer has to pay. Third, major cost centres such as housing may be cheaper (I wouldn't count on that one, though).
To take an example, I live...
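The kind of back-of-the-envelope arithmetic I have in mind (the $100K figure and the ~1.3 rate appear in the surrounding comments; treating welfare as a flat 40% of the employer's total cost is my simplification):

usd_salary  = 100_000              # the US offer used for comparison
usd_per_eur = 1.3                  # rough current exchange rate
welfare_cut = 0.40                 # French welfare's share of total employer cost, roughly

eur_equivalent = usd_salary / usd_per_eur            # ~77,000 EUR: first correction
after_welfare  = eur_equivalent * (1 - welfare_cut)  # ~46,000 EUR: second correction
print(round(eur_equivalent), round(after_welfare))   # 76923 46154

So a French salary that looks like less than half the US number can correspond to a comparable total employer budget.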
Given that your name looks familiar from Hacker News and your website suggests you like programming for its own sake, you should consider coming to Silicon Valley after the US Congress finishes loosening immigration restrictions for foreign STEM workers (which seems like it will probably happen). In the San Francisco area, $100K + stock is typical for entry-level people, and good programmers in general are famously difficult to hire. Also, lots of LW peeps live here. My housemates and I ought to have a couch you can crash on while you look for a job. ...
MIRI's stated goal is more meta:
The Machine Intelligence Research Institute exists to ensure that the creation of smarter-than-human intelligence benefits society.
They are well aware of the dangers of creating a uFAI, and you can be certain they will be real careful before they push any button that has the slightest chance of launching the ultimate ending (good or bad). Even then, they may very well decide that "being real careful" is not enough.
Are there other organizations attempting to develop AIs to control the world?
It probably doesn...
If I may list some differences I perceive between AMF and MIRI:
Near mode thinking will most likely direct one to AMF. MIRI probably requires one to shut up and multipl...
They're going to escape.
Education fighting an old existential risk: kids out of the box.
I'll be there too.
Good point.
I can think of two possible workarounds: they can still have fun among themselves, or they can teach their partner whenever they engage in a long-term relationship.
It does seem to have some effect on the performers' private lives, however. Here is a question from Matt Williams, answered by Courtney Taylor:
"You find it hard now, having sex with civilians¹?"
"Oh yeah, absolutely."
[1] From the rest of the interview, I gathered that "civilian" was a bit derogatory.
Just to say that doing porn may tend to raise one's expectations. Sure, they optimise for the viewer, but I'd be surprised if they didn't try and have fun along the way, just like actors in mainstream films. I'd be surprised to le...
Gasp, I definitely didn't read it that way. Observing the sky sounded like science, and the logical puzzles sounded like math. Plus, it was already useful at the time: it helped keep track of time, predict seasons…
Okay, let's try to defeat Omega. The goal is to do better than Eliezer Yudkowsky, who seems trustworthy about doing what he publicly says all over the place. Omega will definitely predict that Eliezer will one-box, and Eliezer will get the million.
The only way to do better is to two-box while making Omega believe that we will one-box, so we can get the $1,001,000 with more than 99.9% certainty. And of course,
Edit: this post is mostly a duplicate of this one
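For the record, the 99.9% threshold is just expected value, assuming the standard $1,000,000 / $1,000 payoffs. If q is the probability of successfully fooling Omega, two-boxing beats the sure one-boxer's million only when:

q \cdot 1{,}001{,}000 + (1 - q) \cdot 1{,}000 \;>\; 1{,}000{,}000
\quad\Longleftrightarrow\quad
q \;>\; \frac{999{,}000}{1{,}000{,}000} = 0.999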
I would guess that those particular fields look more interesting when you make the wrong assumptions to begin with. I mean, it's much less interesting to talk about God when you accept there is none. Or to talk about metaphysics, when you accept that the answer will most likely come from physics. (I don't know about morality.)
Nevertheless, an above-average post is still evidence for an above-average poster. It's also her first post. She might very well "get better" in the future, as she put it.
Sure, I wouldn't count on it, but we still have a good reason to look forward to reading her future posts.
I agree with your first point, though it gets worse for us as hardware gets cheaper and cheaper.
I like your second point even more: it's actionable. We could work on the security of personal computers.
That last one is incorrect, however. The AI only has to access its object code in order to copy itself. That's something even current computer viruses can do. And we're back to boxing it.
I think you miss the part where the team of millions continues copying itself until it eats up all available computing power. If there's any significant computing overhang, the AI could easily seize control of far more computing power than all the human brains put together.
Also, I think you underestimate the "highly coordinated" part. Any copy of the AI will likely share the exact same goals, and the exact same beliefs. Its instances will have common knowledge of this fact. This would create an unprecedented level of trust. (The only possibl...
At first. If the "100 slaves" AI ever gets out of the box, you can multiply the initial number by the amount of hardware it can copy itself to. It can hack computers, earn (or steal) money, buy hardware…
And suddenly we're talking about a highly coordinated team of millions.
If you were to speed up a chicken brain by a factor of 10,000 you wouldn't get a super-human intelligence.
Sure, but if we assume we manage to build a human-level AI, how powerful should we expect it to be if we speed it up by a factor of 10, 100, or more?
Personally, I'm pretty sure such a thing is still powerful enough to take over the world (assuming it is the only such AI), and in any case dangerous enough to lock us into a future we really don't want.
At that point, I don't really care if it's "superhuman" or not.
Nevertheless, the lack of exposure to such attractors is quite relevant: if there were any, you'd expect some scientist to have encountered one.
Easy explanation for the Ellsberg paradox: we humans treat the urn as if it were subject to two kinds of uncertainty.
Somehow, we prefer to choose the "truly random" option. I think I can sense why: when it's "truly random", I know no potentially hostile agent has messed with me. I mean, I could choose "red" in situation A, but then the organi...
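For reference, one standard version of the setup (the usual numbers, not necessarily the exact ones from the post): an urn holds 30 red balls plus 60 balls that are black or yellow in unknown proportion. Most people prefer betting on red over black, which requires P(black) < 1/3, yet also prefer black-or-yellow over red-or-yellow, which requires P(black) > 1/3. No single probability assignment fits both choices, which is why the two-kinds-of-uncertainty story has real work to do.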
Explaining complexity through God raises various questions whose answers tend to just be "Poof, Magic". While I do have a problem with "Poof, Magic", I can't explain it away without quite deep scientific arguments. And "Poof, Magic", while unsatisfactory to any properly curious mind, has no complexity problem.
Now that I think of it, I may have to qualify the argument I made above. I didn't know about Hume, so maybe the God Hypothesis wasn't so good even before Newton and Darwin after all. At least assuming the background...
I, on the other hand, love my cello. I also happen to enjoy practice itself. This helps a lot.