Speaking of copies, I keep meaning to write a LOCKSS plugin for LessWrong. This comment will be my note-to-self, or anyone else who wants to do it first.
(interested in hearing how other donors frame allocation between SI and CFAR)
I still only donate to SI. It's great that we can supposedly aim the money at FAI now, due to the pivot towards research.
But I would also love to see EY's appeal to MoR readers succeed:
...I don’t work for the Center for Applied Rationality and they don’t pay me, but their work is sufficiently important that the Singularity Institute (which does pay me) has allowed me to offer to work on Methods full-time until the story is finished if HPMOR readers donate a total of $1M to CFAR
And after skimming the paper, the only thing I could find in response to your point is:
...Coercion detection. Since our aim is to prevent users from effectively transmitting the ability to authenticate to others, there remains an attack where an adversary coerces a user to authenticate while they are under adversary control. It is possible to reduce the effectiveness of this technique if the system could detect if the user is under duress. Some behaviors such as timed responses to stimuli may detectably change when the user is un
Of course, such changes could also be caused by being stressed in general. Even if you could calibrate your model to separate the effects of "being under duress" from "being generally stressed" in a particular subject, I would presume there's too much variability across people to do this reliably for everyone.
Imagine how people would react to an ATM that gave them their money whenever they wanted it - except when they were in a big hurry and really needed the cash now.
Nice!
From FAQ #5:
Independent researchers associated with the Singularity Institute: Daniel Dewey, Kaj Sotala, Peter de Blanc, Joshua Fox, Steve Rayhawk, and others
Would it be feasible to make this list exhaustive so that you can delete the "and others"? I think the "and others" makes the site seem less prestigious.
Good question. And for people who missed it, this refers to money that was reported stolen on SI's tax documents a few years ago. (relevant thread)
Of the things on your list, I'm most surprised by cognitive science and maybe game theory, unless you're talking about the fields' current insights rather than their expected future insights. In that case, I'm still somewhat surprised game theory is on this list. I'd love to learn what led you to this belief.
It's possible I only know the basics, so feel free to say "read more about what the fields actually offer and it'll be obvious if you've been on Less Wrong long enough."
I know the pain of having had sex before, then being reminded of how awesome sex is while having no outlet for it, and being left feeling unbelievably miserable. I didn't want to leave even a single person reading my article in a place like that.
This thought is very much appreciated.
If you're interested in how your body works, I recommend Gerald Cizadlo's lectures. They are biology classes for nursing students at an American religious college. Because of his pathophysiology and physiology podcasts, I'm now able to explain the way nerves transmit signals (for example).
(Edited; I originally called nerves insane.)
Nice! And for anyone freaked out by the "current balance of my bank account" part, there's an explanation here.
Is the Singularity Institute supporting her through your salary?
I hope you're not too put out by the rudeness of this question. I've decided that I'm allowed to ask because I'm a (small) donor. I doubt your answer will jeopardize my future donations, whatever it is, but I do have preferences about this.
(Also, it's very good to hear that you're taking health seriously! Not that I expected otherwise.)
I suspect that value systems that simply seek to minimize pain are poor value systems.
Fair enough, as long as you're not presupposing that our value systems -- which are probably better than "minimize pain" -- are unlikely to have strong anti-torture preferences.
As for the other two points: you might have already argued for them somewhere else, but if not, feel free to say more here. It's at least obvious that anti-em-torture is harder to enforce, but are you thinking it's also probably too hard to even know whether a computation creates a person being tortured? Or that our notion of torture is probably confused with respect to ems (and possibly with respect to us animals too)?
Has anyone attempted to prove the statement "Consciousness of a Turing machine is undecidable"? The proof (if it's true) might look a lot like the proof that the halting problem is undecidable.
Your conjecture seems to follow from Rice's theorem, assuming the personhood of a running computation is a property of the partial function its algorithm computes. Also, I think you can prove your conjecture by taking a certain proof that the Halting Problem is undecidable and replacing 'halts' with 'is conscious'. I can track this down if you're sti...
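Here's a minimal sketch of the reduction I have in mind, with hypothetical names (`is_conscious`, `conscious_program`), assuming that an infinite loop is not conscious and that we have on hand some computation known to be conscious:

```python
# Sketch: if a decider `is_conscious` existed, we could decide the
# Halting Problem, which is impossible -- so no such decider exists.
# Assumes an infinite loop is not conscious and `conscious_program` is.

def build_wrapper(machine, machine_input, conscious_program):
    """Return a program that is conscious iff `machine` halts on `machine_input`."""
    def wrapper(_input):
        machine(machine_input)     # never returns if `machine` doesn't halt
        conscious_program(_input)  # reached only if `machine` halts
    return wrapper

def halts(machine, machine_input, is_conscious, conscious_program):
    """Decides halting -- a contradiction, so `is_conscious` can't exist."""
    return is_conscious(build_wrapper(machine, machine_input, conscious_program))
```

This is just the usual Rice-style argument; the only extra assumptions are the two labeled in the comment at the top.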
I think you're right that many of the relevant empirical facts will be about your preferences. At risk of repeating myself, though, there are other facts that matter, like whether ems are conscious, how much it costs to prevent torture, and what better things we could be directing our efforts towards.
To partially answer your question ("how much effort is it worth to prevent the torture of ems?"): I sure do want torture to not happen, unless I'm hugely wrong about my preferences. So if preventing em torture turns out to not be worth a lot of eff...
Are you unsure about whether em torture is as bad as non-em torture? Or do you just mean to express that we take em torture too seriously? Or is this a question about how much we should pay to prevent torture (of ems or not), given that there are other worthy causes that need our efforts?
Or, to ask all those questions at once: do you know which empirical facts you need to know in order to answer this?
Is it easy to accidentally come up with criteria for "locally correct" that will still let us construct globally wrong results?
This comment was brought to you by a surface analogy with the Penrose triangle.
This makes me happy. Now, here's a question that is probably answered in the technical paper, but I don't have time to read it:
"New coins are generated by a network node each time it finds the solution to a certain calculational problem." What is this calculational problem? Could it easily serve some sinister purpose?
I finally remembered to post this here.
Good timing, though: now this is fresh in our minds during the challenge.
Oh, oops, we were talking about different things. I think you're right to mention matching donations (especially after hearing your anecdote), but I wonder if there's room for a warning like, "It's more important to pick the right charity than to get someone to match your donation. (Do both if you can, of course.)"
Thank you for this post! One thing:
- Look into matching donations - If you’re gonna give money to charity anyway, you should see if you can get your employer to match your gift. Thousands of employers will match donations to qualified non-profits. When you get free money -- you should take it.
If GiveWell's cost-benefit calculations are remotely right, you should downplay matching donations even more than just making this item second-last. I fear that matching donations are so easy to think about that they will distract people from picking good cha...
I see no reason to disagree with you. (By the way, the other time I did this was non-backwards.)
A note for potential matchers: if you match this donation, you'll make me more likely to donate in the future. (I'll be like, "Not only would I be helping out, but I could probably get someone to match this donation as well.") I was relying on this being obvious.
Candidate: Hold off on proposing solutions.
This article is way more useful than the slogan alone, and it's short enough to read in five minutes.
You changed my mind. I'm worried my candidate will hurt more than it helps because people will conflate "bad idea generators" with "disreputable idea generators" -- they might think, "that idea came to me in my sleep, so I guess that means I'm supposed to ignore it."
A partially-fixed candidate: If an idea was generated by a clearly bad method, the idea is probably bad.
Thank you very much. I matched it.
I honestly wouldn't be able to tell if you faked your confirmation e-mail, unless there's some way for random people to verify PayPal receipt numbers. So don't worry about the screenshot. Hopefully I'll figure out some convenient authentication method that works for the six donations in this scheme.
Candidate: Don't pursue an idea unless it came to your attention by a method that actually finds good ideas. (Paraphrased from here.)
Since this site has such a high sanity waterline, I'd like to see comments about important topics even if they aren't directly rationality-related. Has anyone figured out a way to satisfy both me and RobinZ without making this site any less convenient to contribute to?
(Upvoted for explaining your objection.)
I'd also like these donations to be authenticated, but I'm willing to wait if necessary. Here's step 2, including the new "ETA" part, from my original comment:
...In your donation's "Public Comment" field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn't work for me, so I don't expect it to work for you. For now, I'll just believe you if you say you've donated. If you would be convinced to dona
I'm guessing RobertLumley's "Why would you downvote a meetup post?" caused people to upvote. I know I like to upvote when someone points out unnecessary-seeming downvotes.