That seems to be conceding the point that it has moral weight.
I teleport a hostage who is about to be executed to a capsule in lunar orbit. I then offer you three options: you pay me $1,000,000,000 and I give him whatever pleasures are possible given the surroundings for a day, then painlessly kill him; I simply kill him painlessly; or I torture him for a day, then painlessly kill him, and pay you $1,000,000,000.
Do you still take the money?
This strikes me as a pretty stark decision, such that I'd have a really hard time treating those who would take the money any differently than I'd treat the babyeaters. It's almost exactly the same moral equation.
Last time I played, I just used pennies and nickels.
I really want to try it with a bucket of generic Lego pieces some time.
It's a permanent mark that easily leads to tearing.
How... what...
People on the internet aren't from Saskatoon; that's my city!
Beetle-sized (of the beautifully blue sort), at least.
Note also that the body the mind wears apparently (according to Quirrell) does have an impact on the mind.
[...] Often I find that the best way to come up with new results is to find someone who's saying something that seems clearly, manifestly wrong to me, and then try to think of counterarguments. Wrong people provide a fertile source of research ideas.
-- Scott Aaronson, Quantum Computing Since Democritus (http://www.scottaaronson.com/democritus/lec14.html)
Can you say anything more substantive than that? It's plausible given the studies mentioned in Cialdini, an example of which follows:
...Freedman and Fraser didn't stop there. They tried a slightly different procedure on another sample of homeowners. These people first received a request to sign a petition that favored "keeping California beautiful." Of course, nearly everyone signed, since state beauty, like efficiency in government or sound prenatal care, is one of those issues no one opposes. After waiting about two weeks, Freedman and Fraser [...]
I wasn't agreeing or disagreeing with the substance of the linked abstract -- I only meant to say that it probably didn't belong in this thread because it looked more like a link to research than what usually goes in the 'Rationality Quotes' thread.
I think you're leaving out another possibility: that they actually think they're right. This obviously doesn't apply to all cases, but I do think it's more common than you'd expect.
There's also a (related?) strong desire for consistency, which is explored in Cialdini's "Influence: Science and Practice"; I found it sheds some new light on the material in "How to Win Friends and Influence People".
[Also, welcome to lesswrong]
And this sums up why I feel that respect for the silly beliefs of others is important: it sets the stage for the acceptable treatment of things that are confusing or silly.
It's not that you take the belief seriously, but rather that you take seriously the epistemic position that makes that belief seem sensible.
Oddly enough, the first song to come to mind when you said that was the chicken dance.
We're really good at this sort of group coordination: -20 karma for sure :)
They are rational to the extent that they are interested in, and successful at, achieving their goals.
Since so many poker opponents often decide at whim, we need to do more than just strategically analyze their actions relative to what they should be doing. We need to watch and listen and determine what they are doing.
--Mike Caro, Caro's Book of Tells
That's my bet: Harry doesn't believe in souls, but he swallows the explanation without a second thought.
By this time, Harry has already been forbidden from leaving the Hogwarts wards without sufficient cause and escort; lunch with Quirrell was explicitly included in this ban. That's not far from what he'd say as an innocent.
Make sure you're logged out first; otherwise your search results are tuned according to your search history.
http://demented.no-ip.org/~feep/rss_proxy.cgi?5782108
You can also get email alerts of new chapters, directly from fanfiction.net
And note the hat's commentary on the matter.
It doesn't feel very fundamental. How commonly they crop up and how easy they are to debug depend largely on your editor, coding style, and interpreter/compiler.
I took the liberty of mucking up the spreadsheet a little bit:
Once more people have filled in their preferred times, it might make sense to re-sort by that.
I'm in; Saskatoon, Canada.
I think the "it's bigger on the inside" phenomenon is a better foundation to build such a spell on.
Beware Canadians seeking paperclips.
On further consideration, my complaint wasn't my real/best argument; consider this a redirect to rwallace's response above :p
That said, I personally don't take 'many' as meaning 'most', but more in the sense of "a significant fraction", which may be as little as 1/5 or as much as 4/5. I'd be somewhat surprised if the fraction of machines in use that are old (5+ years) weren't in that range.
re: scaling, the Ubuntu folding team's wiki describes the approach.
Many != all.
My desktop is old enough that it uses only a little more power at full capacity than it does at idle.
Additionally, you can configure the client (this may be the default; I'm not sure) not to increase the clock rate.
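I'm not sure how the client exposes this, but on Linux you can get the same effect independently of the client by pinning the cpufreq governor. A minimal sketch, assuming the standard sysfs cpufreq interface and root privileges; the script is hypothetical and not part of any folding client:

```python
#!/usr/bin/env python3
"""Keep CPU cores at their lowest clock by switching the cpufreq governor.

Hypothetical helper: it relies on the standard Linux sysfs cpufreq
interface and needs root to write the governor files.
"""
import glob


def set_governor(governor="powersave"):
    """Write `governor` to every core's scaling_governor file."""
    for path in glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"
    ):
        with open(path) as f:
            print(f"{path}: was {f.read().strip()}")
        with open(path, "w") as f:
            f.write(governor)


if __name__ == "__main__":
    set_governor()  # "powersave" keeps cores at their minimum frequency
```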
I use the origami client manager thingie; it handles deploying the folding client, and gives a nice progress meter. The 'normal' clients should have similar information available (I'd expect that origami is just polling the clients themselves).
Granted that in many cases, it's donating money that you were otherwise going to burn.
Has anybody considered starting a folding@home team for lesswrong? Seems like it would be a fairly cheap way of increasing our visibility.
After a brief 10-word discussion on #lesswrong, I've made a lesswrong team :p
Our team number is 186453; enter this into the folding@home client, and your completed work units will be credited.
Fair point, but the assumption that it is indeed possible to verify source code is far from proven. There are too many unknowns in cryptography to make strong claims about which strategies are possible, let alone which would be successful.
Conditional on one site or the other going down, the second instance adds little buffer.
A UFAI would simply focus its efforts on pieces of code likely to be common between the two sites, ensuring that it can take both down at the same cost. This also assumes that developing such an attack is costly, which it may not be: I would expect a sensory modality for code to reduce our commonly made coding blunders to the level of "my coffee cup is leaking because there's a second hole at the bottom".
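For concreteness, the sort of blunder I have in mind is the classic off-by-one. A hypothetical Python illustration (the function and its bug are invented for the example):

```python
def trailing_average(samples, window):
    """Mean of the last `window` samples (or so the author intended).

    The slice below returns one element too few (for window >= 2): the
    classic off-by-one that a sensory modality for code would make as
    obvious as a second hole in a coffee cup.
    """
    recent = samples[-window + 1:]  # BUG: should be samples[-window:]
    return sum(recent) / len(recent)


print(trailing_average([1, 2, 3, 4], window=2))  # prints 4.0; intended 3.5
```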
Within the confines of the story:
No star that has been visited via starline has ever been seen from another, which implies a universe vastly larger than what can be seen from any given lightcone. Basically, I'm granting the slightly cryptographic assumption that travel between stars (other than by starline) is impossible.
The weapon is truly effective: it works as advertised.
Any disagreement with that would have to explain why the fallacy of answering "Assume there is no god, then..." with "But there is a god!" doesn't apply here.
The threat of a nova feels like a more interesting avenue than the mere detonation.
I think this might have been intended more in the purple dragon sense than anything: focus on how they know exactly what experimental results they'll need to explain, and what that implies about their gut-level beliefs.