I have the launch codes. I'll take the site down unless Eliezer Yudkowsky publicly commits to writing a sequel chapter to HPMoR, in which I get an acceptably pleasant ending, by 9pm PST.
The enemy is smart.
"The enemy knew perfectly well that you'd check whose launch codes were entered, especially since the nukes being set off at all tells us that someone can appear falsely trustworthy." Ben shut his eyes, thinking harder, trying to put himself into the enemy's shoes. Why would he, or his dark side, have done something like - "We're meant to conclude that the enemy has the launch codes. But that's actually something the enemy can only do with difficulty, or under special conditions; they're trying to...
I'll be at more than one of these.
I am not a tulpa and am not (in this instance) running on EY's wetware.
I am not any person named in the linked page, though I have met some or all of them. I am not affiliated with MIRI in any way. I did not post the linked page and I do not know who did.
The linked page is obvious libel. But its creation is a serious matter; the author is threatening to manufacture evidence. Thus, it should be handled the same way as a death threat: with an investigation to determine who sent it. The site is hosted on EasyWeb; the domain name admin contact details point to a proxy called myprivacy.net, but the author is not very technically...
The more seriously it is taken in public, the more incentive for the author (who could be some guy in Eastern Europe for all we know, well beyond the reach of any legal recourse) to redouble his or her efforts. Someone has spent a considerable amount of time and effort to make the biggest possible splash. Publicly making waves about it is just playing into the splasher's hands.
So I advocate no public engagement on this matter whatsoever, coupled with a consultation with a specialized (not a run-of-the-mill) lawyer. Also, I'd look into the account that made the or...
So, The Tech is reporting that Aaron Swartz has killed himself. No suicide note has surfaced, PGP-signed or otherwise. No public statements that I've been able to find have identified witnesses or method. Aaron Swartz was known for having many enemies. There are the obvious enemies in the publishing industry and the US Attorney's office. Cory Doctorow wrote that he had "a really unfortunate pattern of making high-profile, public denunciations of his friends and mentors."
I'd like to raise the possibility that this was not a natural event. Most of th...
No suicide note has surfaced, PGP-signed or otherwise. No public statements that I've been able to find have identified witnesses or method.
Some of this information has been released since the posting of the parent, but because the tone of the post feels like it was jumping a gun or two, I wanted to throw this out there:
There are good reasons why the media might not want to go into detail on these things, especially when the person in question was young, famous and popular. The relatively recent Bridgend suicide spiral was (is?) a prime example of such ...
No public statements that I've been able to find have identified witnesses or method.
I don't know if the relevant news reports had been released at the time this comment was posted, but the apparent method of Swartz's death was hanging.
When you narrow down the set of people who could be considered Aaron Swartz's enemies to those who could have him killed and have it reported as a suicide, who would benefit more from his apparent death by suicide than his being drained of funds and convicted of felony, and ask whether this is realistic behavior for ...
The New York Times has more information about circumstances:
Aaron Swartz, a wizardly programmer who as a teenager helped develop code that delivered ever-changing Web content to users and later became a steadfast crusader to make that information freely available, was found dead on Friday in his New York apartment.
He was 26.
An uncle, Michael Wolf, said that Mr. Swartz had apparently hanged himself, and that Mr. Swartz’s girlfriend had discovered the body.
"Eliezer Yudkowsky personality cult."
"The new thing for people who would have been Randian Objectivists 30 years ago."
"A sinister instrument of billionaire Peter Thiel."
Nope, no one guessed whose sinister instrument this site is. Muaha.
Thanks, Eliezer, for unpausing one of my substrates!
You really ought to get yourself an anonymous alter-identity so you aren't tempted to discuss things like this under your real name. I believe that you in particular should avoid this topic when writing on public forums.
I'm curious as to why me in particular, but I'm happy to hear from you privately. In general, I go with radical transparency. I think that the truth is that so long as you don't show shame, guilt or malice you win. Summers screwed up by accepting that his thoughts were shameful and then asserting that they were forced by reason and that others were so forced as well. This is both low-status and aggressive, a bad combination and a classic nerdy failure mode.
One Quirrell point to JoshuaZ for getting both of the reasons, rather than stopping after just one like jimrandomh did.
(I'm going to stop PGP signing these things, because when I did that before, it was a pain working around Markdown, and it ended up having to be in code-format mode, monospaced and not line broken correctly, which was very intrusive. A signed list of all points issued to date will be provided on request, but I will only bother if a request is actually made.)
A while back, I claimed the Less Wrong username Quirinus Quirrell, and started hosting a long-running, approximate simulation of him in my brain. I have mostly used the account trivially - to play around with crypto-novelties, say mildly offensive things I wouldn't otherwise, and poke fun at Clippy. Several times I have doubted the wisdom of hosting such a simulation. Quirrell's values are not my own, and the plans that he generates (which I have never followed) are mostly bad when viewed in terms of my values. However, I have chosen to keep this occasiona...
DO NOT USE YOUR REGULAR IDENTITY TO SAY ANYTHING TRULY INTERESTING ON THIS THREAD, OR ON THIS TOPIC, UNLESS YOU HAVE THOUGHT ABOUT IT FOR FIVE MINUTES.
In general, you would be advised not to say anything on the Internet unless you have thought about it for at least five minutes.
You're paranoid. We're only speculating on the motives, identity, and whereabouts of a serial killer, in a public forum. What could possibly go wrong?
Test comment
Also, you misspelled my name - it's Quirinus, not Quirinius.
...I sometimes feel like there is a shadowy half-underground group of LWers that is intelligent enough to stay away from bad signalling and has altruistic intentions, but has to deal every now and then with a slight twitch, reading something knowing they can't really state a proper response.
(linked comment) Delusions that are truly widely held and not merely believed to be widely held are far too dangerous to attack. There are sociopolitical Eldritch Abominations that it would serve LW well to stay well clear of and perhaps even pretend they don't exist for
[Clippy] What's your private key?
It's 4,096 paperclips on a ring, each bent in one of two ways to indicate either a 0 or a 1. Neither the 0s nor the 1s could hold paper together in their current shape.
I infer, then, that "stored safely on a computer I control" means resting on top of the case?
You're a bad human. I'm going to give a negative-Clippy-point to anyone you give Quirrell points to now.
I mean, once I get GnuPG to work.
I see that you have edited the title of this post to mention Quirrell points. I appreciate the gesture. However, you've misspelled my name; it should have two 'l's.
The PGP thing is a cryptographic signature which proves that the comment was written by me. What I did was, I made a PGP key, which has two halves: a public key, which is now on my user page of the Less Wrong wiki, and a private key, which is stored safely on a computer I control. I input my private key and a message into GnuPG, and it outputs a signature (what you saw in the earlier comment). Anyone else can take that message with its signature, and my public key, and confirm that I must have had the private key in order to sign it that way.
This means tha...
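The sign-then-verify asymmetry described above can be sketched with textbook RSA. This uses tiny, insecure parameters purely for illustration; GnuPG's actual key formats and algorithms are more involved than this.

```python
# Toy illustration of the public/private key split behind a signature.
# Textbook RSA with small, insecure parameters -- a sketch of the idea
# only, not what GnuPG actually does.
import hashlib

# Small primes chosen for readability; real keys use ~2048+ bit moduli.
p, q = 61, 53
n = p * q               # public modulus
phi = (p - 1) * (q - 1)
e = 17                  # public exponent: the public key is (n, e)
d = pow(e, -1, phi)     # private exponent: the private key is (n, d)

def sign(message: bytes) -> int:
    """Hash the message, then transform the digest with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding only the public key can check the signature.
    A tampered message would (with overwhelming probability) hash to a
    different value and fail this check."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"One Quirrell point to JoshuaZ")
assert verify(b"One Quirrell point to JoshuaZ", sig)
```

The point is the asymmetry: `sign` needs `d`, which never leaves the computer I control, while `verify` needs only `(n, e)`, which sits on my wiki user page for anyone to use.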
Only I can issue Quirrell points (hence the name and the signature), but you can issue Normal_Anomaly points if you want.
I put it on the wiki just now.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Raemon received one Quirrell point on 16/4/2011, for his post
http://lesswrong.com/r/discussion/lw/59x/high_value_karma_vs_regular/
having inspired the idea of issuing Quirrell points on Less Wrong.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG of some sort
iQIcBAEBAgAGBQJNqjzHAAoJEJVKvKyQdzsK/hMQAKlalx44MZT/7xkplZ6i5eC/
uRFz8fOWFeErxB0OYme32e8MQwgzxPjBCYrC+bEZ9cnMoMA0VSx9U+LhMKu+4PQM
7evQRZu0NL4iRwRjTZjs0Sug4GiWI/hGj8bjq/Ax1RfkI6Vg48PVSaWbWDpfPHks
EMqSVVIA24XAZZRAL2xzKVujyOA9JMu22ppBUuMqP8cTb1uXzhkLm+/IQ0HR+6
... This comment is more likely if Silas is Clippy than if he isn't.
I wouldn't exactly call it a cover-up. It looks to me like the actual goal was to ensure that a particular subject wouldn't develop further, by derailing any discussions about it into meta-discussions about censorship. Lots of noise was made, but no one ever published a sufficiently detailed description of the spell, so this did in fact succeed in averting a minor disaster.
You seem to be under the impression that Eliezer is going to create an artificial general intelligence, and oversight is necessary to ensure that he doesn't create one which places his goals over humanity's interests. It is important, you say, that he is not allowed unchecked power. This is all fine, except for one very important fact that you've missed.
Eliezer Yudkowsky can't program. He's never published a nontrivial piece of software, and doesn't spend time coding. In the one way that matters, he's a muggle. Ineligible to write an AI. Eliezer has not po...
I'm curious what the marginal next best strategy is. I'm also curious why you would be interested in promoting the unmasking of users.
Not all users, just the few I happen to be curious about. And no, I won't say anything more about what the marginal next-best strategy is other than that I'm immune to it too, and -1 Quirrell point for asking.
I have just realized that sitemeter has the following data published about my visit, in a searchable and browsable format:
Searchable my behind! I looked into what it would take to use this to, for example, unmask Clippy, and it was less usable than the marginal next-best strategy.
The world around us redounds with opportunities, explodes with opportunities, which nearly all folk ignore because it would require them to violate a habit of thought ... I cannot quite comprehend what goes through people's minds when they repeat the same failed strategy over and over, but apparently it is an astonishingly rare realization that you can try something else.
-- Eliezer Yudkowsky, putting words in my other copy's mouth
I voted on this and the immediate parent, but I won't reveal why, or which direction, or how many times, or which account I used.
You're safeguarding against the wrong thing. If I needed to fake a prediction that badly, I'd find a security hole in Less Wrong with which to edit all your comments. I wouldn't waste time establishing karma for sockpuppets to post editable hashes to deter others from posting hashes themselves, that would be silly. But as it happens, I'm not planning to edit this hash, and doing that wouldn't have been a viable strategy in the first place.
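The commit-and-reveal pattern at issue here can be sketched in a few lines (the prediction strings are hypothetical): publish only the hash now, reveal the text later. A random nonce stops anyone from brute-forcing short predictions out of the published hash.

```python
# Sketch of hash precommitment: post the digest publicly, keep the
# prediction and nonce private, reveal both later. Anyone can then
# recompute the digest and confirm the prediction wasn't edited.
import hashlib
import secrets

def commit(prediction: str) -> tuple[str, str]:
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + prediction).encode()).hexdigest()
    return digest, nonce  # publish digest now; keep nonce + text secret

def reveal_checks_out(digest: str, nonce: str, prediction: str) -> bool:
    return hashlib.sha256((nonce + prediction).encode()).hexdigest() == digest

digest, nonce = commit("Hermione returns in Chapter 101")
assert reveal_checks_out(digest, nonce, "Hermione returns in Chapter 101")
assert not reveal_checks_out(digest, nonce, "Hermione stays dead")
```

Note that this only binds the committer to the text; it does nothing against someone who can edit the published digest itself, which is the attack being dismissed above.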
When should you punish someone for a crime they will commit in the future?
Easy. When they can predict you well enough and they think you can predict them well enough that if you would-counterfactually punish them for committing a crime in the future, it influences the probability that they will commit the crime by enough to outweigh the cost of administering the punishment times the probability that you will have to do so. Or when you want to punish them for an unrelated reason and need a pretext.
Not every philosophical question needs to be complicated.
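The condition above is just an expected-value inequality. A minimal sketch with made-up numbers:

```python
# Expected-value form of the punishment condition: threaten (and be
# counterfactually willing to punish) when the deterrence benefit
# exceeds the expected cost of having to carry the threat out.
# All numbers below are illustrative only.

def should_threaten(p_crime_without: float, p_crime_with: float,
                    harm_of_crime: float, cost_of_punishing: float) -> bool:
    deterrence_benefit = (p_crime_without - p_crime_with) * harm_of_crime
    # You only pay the punishment cost if the threat fails to deter.
    expected_cost = p_crime_with * cost_of_punishing
    return deterrence_benefit > expected_cost

# A credible threat drops the crime probability from 40% to 5%;
# the crime costs 100 units of harm, punishing costs 30.
assert should_threaten(0.40, 0.05, 100, 30)      # 35.0 > 1.5
assert not should_threaten(0.40, 0.38, 100, 30)  # 2.0 > 11.4 is false
```

As the comment says: if the target can't predict you, `p_crime_without - p_crime_with` is zero and the threat is never worth making, pretexts aside.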
Someone as clever, powerful, and rich as yourself can likely find a collision if you get to choose both source texts (which is easier than finding a collision with one of the two inputs determined by someone else).
This is actually much harder than you'd think. A hash function is considered broken if any collision is found, but a mere collision is not sufficient; to be useful, a collision must have chosen properties. In the case of md5sum, it is possible to generate collisions between files which differ in a 128-byte aligned block, with the same prefix a...
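To make the distinction concrete: raw collisions are cheap once the search space is small. A toy birthday search on a 24-bit truncation of MD5 finds one in a few thousand tries, whereas a full 128-bit MD5 collision needs the structured attacks described above, and neither hands an attacker a collision with chosen properties.

```python
# Birthday-paradox collision search on a deliberately weakened hash:
# MD5 truncated to 24 bits. Expected work is roughly sqrt(2**24), i.e.
# a few thousand hashes, which is why truncated digests are easy to
# collide and full-width ones are not.
import hashlib
from itertools import count

def truncated_md5(data: bytes, nbytes: int = 3) -> bytes:
    return hashlib.md5(data).digest()[:nbytes]

seen = {}
for i in count():
    msg = str(i).encode()
    h = truncated_md5(msg)
    if h in seen:
        a, b = seen[h], msg  # two distinct inputs, same truncated hash
        break
    seen[h] = msg

assert a != b
assert truncated_md5(a) == truncated_md5(b)
```

The two colliding inputs found this way are arbitrary numerals; making them be, say, two different contracts with the same hash is exactly the "chosen properties" problem the comment is pointing at.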
This issue came up on Less Wrong before, and I will reiterate the advice I gave there: if a forbidden criterion affects a hiring decision, keep your reasons secret and shred your work. The linked article is about a case where the University of Kentucky was forced to pay $125,000 to an applicant, Martin Gaskell. This happened because the chairman of the search committee, Michael Cavagnero, was stupid enough to write this in a logged email:
...If Martin were not so superbly qualified, so breathtakingly above the other applicants in background and experience, th
I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures.
Let's not get too crazy; I've got other things to do, and there are more practical attacks to worry about first, like cross-checking post times against alibis. I need to finish my delayed-release comment script before I worry about silly things like setting up extra relays. Also, there are lesson plans I need to write, and some Javascript I want Clippy to have a look at.
Good idea. I'd vote at least once for this.
Meh. The villains seem a lot less formidable in real life, like they left something essential behind in the fiction.
Hey, be patient. I haven't been here very long, and building up power takes time.
In short, there most certainly ARE legal restrictions on building your office somewhere deliberately selected for its inaccessibility to those with a congenital inability to e.g. teleport,
The Americans with Disabilities Act limits what you can build (every building needs ramps and elevators), not where you can build it. Zoning laws are blacklist-based, not whitelist-based, so extradimensional spaces are fine. More commonly, you can easily find office space in locations that poor people can't afford to live near. And in the unlikely event that race or n...
You needn't worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
By the way, while I may sometimes make jokes, I don't consider this a joke account; I intend to conduct serious business under this identity, and I don't intend to endanger that by linking it to any other identities I may have.
Memory charms do have their uses. Unfortunately, they seem to only work in universes where minds are ontologically basic mental entities, and the potions available in this universe are not fast, reliable or selective enough to be adequate substitutes.
Of course. The defining difference is that force can't be ignored, so threatening a punishment only constitutes force if the punishment threatened is strong enough; condemnation doesn't count unless it comes with additional consequences. Force is typically used in the short term to ensure conformance with plans, while behaviour modification is more like long-term groundwork. Well executed behaviour modifications stay in place with minimal maintenance, but the targets of force will become more hostile with each application. If you use a behaviour modificat...
Translation: [...] I cannot walk away from this and leave you being wrong, you must profess to agree with me and if you are not rational enough to understand and accept logical arguments then you will be forced to profess agreement.
I never said anything about using force. Not that there's anything wrong with that, but it's a different position, not a translation.
Or what, you'll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.
The "just hack out of the matrix" answer, however, presupposes the existence of a security hole, which is unlikely.
Not as unlikely as you think.
Get back in the box!
That doesn't close the loophole, it adds a constraint. And it's only significant for those who both hire enough people to be vulnerable to statistical analysis of their hiring practices, and receive too many bad applicants from protected classes. If it is a significant constraint, you want to find that out from the data, not from guesswork, and apply the minimum legally acceptable correction factor.
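For concreteness, the kind of statistical analysis a regulator might run is an adverse-impact check; one common yardstick is the EEOC's four-fifths rule. The numbers below are hypothetical, and this is a sketch, not legal advice:

```python
# Adverse-impact screen per the four-fifths rule: a selection process
# gets flagged when any group's hire rate falls below 80% of the
# highest group's rate. Group labels and counts are made up.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (hired, applied)."""
    return {g: hired / applied for g, (hired, applied) in outcomes.items()}

def adverse_impact(outcomes: dict[str, tuple[int, int]]) -> bool:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return any(rate < 0.8 * best for rate in rates.values())

# Group A: 30 of 100 hired; group B: 10 of 100 hired.
# 0.10 < 0.8 * 0.30, so this process gets flagged.
assert adverse_impact({"A": (30, 100), "B": (10, 100)})
assert not adverse_impact({"A": (30, 100), "B": (25, 100)})
```

The "minimum legally acceptable correction factor" is then whatever adjustment lifts every group's rate back over the 0.8 threshold, and no further.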
Besides, it's not like muggles are a protected class. And if they were? Just keep them from applying in the first place, by building your office somewhere they can't get to. There aren't any legal restrictions on that.
You joke, but the world [1] really is choking with inefficient, kludgey workarounds for the legal prohibition of effective employment screening. For example, the entire higher education market has become, basically, a case of employers passing off tests to universities that they can't legally administer themselves. You're a terrorist if ...
If the best way to choose whom to hire is with a statistical analysis of legally forbidden criteria, then keep your reasons secret and shred your work. Is that so hard?
From the username, I was expecting that the suggestion was going to be to say avada kedavra.
I'd never say that on a forum that would generate a durable record of my comment.
I'm beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything.
The problem is quite simple. Tim, and the rest of the class of commenters to which you refer, simply haven't learned how to lose. This can be fixed by making it clear that this community's respect is contingent on retracting any inaccurate positions. Posts in which people announce that they have changed their mind are usually upvoted (in contrast to other co...
Eliezer has really got to do something about his fictional villains escaping into real life. First Clippy, now you too?
Posts in which people announce that they have changed their mind are usually upvoted
As a total newbie to this site, I applaud this sentiment, but have just gone through an experience where this has not, in fact, happened.
After immediately retracting my erroneous statement (and explaining exactly where and why I'd gone wrong), I continued to be hammered over arguments that I had not actually made. My retracted statements (which I've left in place, along with the edits explaining why they're wrong) stay just as down-voted as before...
My guess is that som...
No, this calculation is incorrect. You forgot the most important principle of Quirrell points: unlike real estate, Bitcoin, and poorly drawn pictures, the value of Quirrell points (as denominated in every other currency) always goes up. This explains why you got such a low number ($400,000) for the value of 200 Quirrell points.