That doesn't mesh with the experiments Harry and Hermione performed in chapter 22. Or at least not without a complication penalty that would make alternative explanations more plausible.
Harry can control the order of a transfiguration process, as seen in ch.104. Those are not threads floating freely in the air, they're part of a specific wire shape in the process of being transfigured. We also know that you can transfigure against tension.
I took it as a reminder of what was discussed in How to Actually Change Your Mind: confirmation bias, affective death spirals etc.
Seconded. On Android I'm using FBReader with an Ivona voice (free, with the drawback that I have to re-download Ivona every couple of months). It works really well for non-fiction, even the Sequences with all their long made-up words.
It doesn't work so well with fantasy/sci-fi though. Made-up words without an English root trip it up.
Starting from chapter 10, the protagonist dedicates herself to a single goal, and never wavers from that goal no matter what it costs her throughout countless lifetimes. She cheats with many-worlds magic, but it's a kind of magic that still requires as much hard work as the real thing.
I smiled when I realized why the answer isn't trivially "press sim", but that slight obfuscation is causing a lot of confused people to get downvoted.
If you decide not to press "sim", you know that there are no simulations. It's impossible for there to be an original who presses "sim" only for the simulations to make different decisions. You're the original and will leave with 0.9.
If you decide to press "sim", you know that there are 1000 simulations. You've only got a 1 in 1001 chance of being the original. Your expected utility for pressing the button is slightly more than 0.2.
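The arithmetic behind "slightly more than 0.2" can be sketched as follows (a minimal sketch; the payoff structure assumed here — the original keeps 0.9 while each simulation receives 0.2 — is one decomposition consistent with the figures above):

```python
# If you press "sim": 1000 simulations exist, so you have a 1/1001 chance
# of being the original (payoff 0.9, assumed) and a 1000/1001 chance of
# being one of the simulations (payoff 0.2, assumed).
p_original = 1 / 1001
eu_press = p_original * 0.9 + (1 - p_original) * 0.2

# If you don't press: no simulations exist, you're the original with 0.9.
eu_no_press = 0.9

print(eu_press)  # ≈ 0.2007, i.e. slightly more than 0.2
```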
Working on my first serious project using AndEngine (a game that's a cross between Recettear and Night Shift). The joy of puzzling code out without any documentation. I'm at the stage where I can display the shop and have customers come in and wobble around, without there being any actual gameplay.
I don't think it's a logical fallacy at all. I mean, anyone who changes their mind about cryonics because of the promise of future Margaret Atwood is probably not being very rational, but formally there's nothing wrong with that reasoning.
I'm an Atwood-reading robot. I exist only to read every Margaret Atwood novel. I expect to outlive her, so the future holds nothing of value to me. No need for cryonics. Oh but what's this? A secret Atwood novel to be released in 2114? Sign me up! I'll go back to suicidal apathy after I've read the 2114 novel.
You'd keep it in your hand and use it as an improvised hammer to carefully break yourself a big enough hole. Hopefully without collapsing the whole house.
If you're trapped in a glass house and you have a stone, throwing it is still a terrible idea.
"So? What do you think I should do?"
"Hm. I think you should start with all computable universes weighted by simplicity, disregard the ones inconsistent with your experiences, and maximize expected utility over the rest."
"That's your answer to everything!"
"Eliezer Yudkowsky Facts" as a featured article. Wow, that's certainly one way to react to this kind of criticism.
(I approve.)
Re-reading that post, I came upon this entry, which seems particularly relevant:
We're all living in a figment of Eliezer Yudkowsky's imagination, which came into existence as he started contemplating the potential consequences of deleting a certain Less Wrong post.
Assuming we can trust the veracity of this "fact", I think we have to begin to doubt Eliezer's rationality. I mean, sure, the Streisand effect is a real thing, but causing Roko's obscure thought experiment to become the subject of the #1 recently most read article on Slate, just by ...
Now imagine someone gives you a spade.
I'd probably call it unethical and try to get it banned.
Does the internet count as "the general population"? If so: identifying and shaming logical fallacies. Sure, people do it imperfectly, and a lot more readily for the opposing side than for themselves, arguments are soldiers etc. But it's still harder to get away with them, for an overall positive result on truth-seeking.
This is a clever idea. I'm stealing it.
Please include the city in the meetup title, so that it's easily identifiable on the sidebar.
Fair point. Apologies to anyone else wearing the no-hug tag.
We wanted to encourage hugging by letting people put an “accepting hugs as a form of greeting” sticker on their extended name tags. To our surprise it was adopted by a huge majority and had an immense effect on social interactions by creating an atmosphere of familiarity.
As the only person here wearing a no-hug tag unironically: those do not work. I did less socializing than most, but still had to interrupt a few hugs (in one case from someone wearing an ironic no-hug tag), to my discomfort and their guilt. But a pro-hug culture seems so good for the community that I should probably hack myself/spend a spoon to let people hug me rather than impose costly social rules on everyone else.
The "hugs" and "no touching" symbols were visually similar -- a red and a blue circle, overlapping in one case, not overlapping in another case -- maybe some people made a honest mistake. It would be better next time to make visually more different symbols; for example completely different colors, or even a picture of hedgehog for "no touching". I hope that would improve the situation.
By the way, I was somewhat concerned to see the mixed signals of some people wearing both "hugs" and "no touching" symbols; ...
I should probably hack myself/spend a spoon to let people hug me rather than impose costly social rules on everyone else.
No. "Respect my boundaries" (in this case, quite physical ones) is not something that would count as a costly social rule that you'd be imposing on others. Enforcing an "only hug the people who want to be hugged" rule doesn't only help you, it also helps everyone else who might not feel entirely comfortable with hugs. And on a more general level, having a strict norm of trying to make everyone feel comfortable will...
In Ancient Greece, while wandering on the road, every day one either encounters a beggar or a god.
If it's an iterated game, then the decision to pay is a lot less unintuitive.
Karma is currently very visible to the writers. If you give human beings little positive and negative points, they will interpret them as reward/punishment, no matter what the intent was. As a meetup organiser, I know I feel more motivated when my meetup organisation posts get positive karma.
(Reposted from the LW facebook group)
The next LW Brussels meetup will be about morality, and I want to have a bunch of moral dilemmas prepared as conversation-starters. And I mean moral dilemmas that you can't solve with one easy utilitarian calculation. Some in the local community have had little exposure to LW articles, so I'll definitely mention standard trolley problems and "torture vs dust specks", but I'm curious if you have more original ones.
It's fine if some of them use words that should really be tabooed. The discussion will double as a...
I'd already signed up without knowing it was on the MIRI course list.
(Updated with topic and some news.)
This link is dead (possibly because the blog was hidden and then re-opened in the interval). Could you please update it?
if the proposition was actually false then at some point someone would have noticed.
You're thinking of real human beings, when this is just a parable used to make a mathematical point. The "advisors" are formal deterministic algorithms without the ability to jump out of the system and question their results.
If I were designing an intelligence, I'm not sure how much control I would give it over its own brain.
This sounds like it has the same failure modes as boxing. E.g. an AI doesn't need direct Write access to its source code if it can manipulate its caretakers into altering it. Like boxing, it slows things down and raises the threshold of intelligence required for world domination, but doesn't actually solve the problem.
It's also a speed-boosting item in the video game Terraria. (I did not know the meaning of the word until now.)
If that's what makes the world least convenient, sure. You're trying for a reductio ad absurdum, but the LCPW is allowed to be pretty absurd. It exists only to push philosophies to their extremes and to prevent evasions.
Your tone is getting unpleasant.
EDIT: yes, this was before the ETA.
In the least convenient possible world, condemning an innocent in this one case will not make the system generally less worthy of confidence. Maybe you know it will never happen again.
Thank you. Problem solved.
Well now I have both a new series to read/watch and a major spoiler for it.
I've announced a meetup but got the day and year wrong (it should be December 14, 2013). Can someone tell me how to fix it, please? I can't figure it out.
[insert obvious joke about meetup topic]
Who puts sanitation next to recreation? Well, here's why your excretory organs should be separate from your other limbs and near the bottom of your body.
Okay, but why should the reproductive outlets be there too?
I agree connotationally, but the comic only answers half of the question.
I am a fan of SMBC, but the entire explanation is wrong. The events that led to the integration of the reproductive and digestive systems happened long before vertebrates lived on land, and certainly long before hands. To get a start on a real explanation you have to go back to the early bilaterians:
http://www.leeds.ac.uk/chb/lectures/anatomy9.html
As near as I can tell it was about pipe reuse. But you can't make a funny comic about that (or maybe you can?). Zach is a "bard", not a "wizard." He entertains.
What's the general atmosphere for newcomers like?
Friendly curiosity.
There will probably be at least one other newcomer to this meetup.
How much familiarity with Less Wrong is expected?
None. LessWrong is in the name, but really we're more interested in building a community of like-minded people to have interesting discussions with.
What does a meetup generally look like?
We're a fairly small group at the moment; expect 3-5 people on an average meetup. It's very informal. Mostly we just talk about interesting things we've read or experienced, often ...
it is a heck of a lot more likely that this weird childhood experience subtly affected my interests over the course of my life and led me to eventually study the field that I studied.
Or that you overheard (or otherwise encountered) something about microhydraulics, which caused both your fantasy and your PhD choice.
I'll just keep the prefix/suffix as is and hope for the best then ("pancailloutisme").
I'm in the process of translating some of the Sequences into French. I have a quick question.
From The Simple Truth:
Mark sighs sadly. “Never mind… it’s obvious you don’t know. Maybe all pebbles are magical to start with, even before they enter the bucket. We could call that position panpebblism.”
This is clearly a joke at the expense of some existing philosophical position called pan[something] but I can't find the full name, which may be necessary to make the joke understandable in French. Can anyone help?
This is all hindsight; pointing out the greatest sources of misery in the world, whatever they happen to be, and calling them a devious plot.
It seems to me that you could write the same article whether we were living in a post-apocalyptic wasteland ("what better way to cause ceaseless misery than ZOMBIES?") or in a near-utopia ("perfect bliss ruined by dust specks? how wonderfully efficient!").
Got 7-6-7 with the same tactic. Apparently the computer only looks at the last 4 throws, so as long as you're playing against Veteran (where your own rounds will be lost in the noise), it should be possible for a human to learn "anti-anti-patterns" and do better than chance.
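The kind of predictor described above can be sketched as a frequency table over length-4 histories (a hypothetical sketch — the game's actual bot isn't public, so the window size and counter logic here are assumptions):

```python
from collections import defaultdict, Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def predict_and_counter(history, window=4):
    """Predict the opponent's next throw from their last `window` throws,
    then return the move that beats the prediction."""
    if len(history) <= window:
        return "rock"  # arbitrary default with too little data
    # Count what followed each length-`window` pattern seen so far.
    stats = defaultdict(Counter)
    for i in range(len(history) - window):
        key = tuple(history[i:i + window])
        stats[key][history[i + window]] += 1
    key = tuple(history[-window:])
    if key not in stats:
        return "rock"
    predicted = stats[key].most_common(1)[0][0]
    return BEATS[predicted]
```

A human exploiting such a bot would play so as to build up a misleading table, then break the pattern — which is why mixing in a noisy opponent ("Veteran") makes the anti-anti-pattern strategy feasible.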
It is much more useful to point out why the post is bad (that reason possibly being something bad that cults also do) than to just say "this is cultish".
It would help if I knew that you and I think exactly the same way.
If this is true, then when I decide to Give, I know you will Give too.
That was a clever hypothesis when there was just the one experiment. The hypothesis doesn't hold after this thread though, unless you postulate a conspiracy willing to lie a lot.
The number of people actually playing this game is quite small, and the number of winning AIs is even smaller (to the point where Tuxedage can charge $750 a round and isn't immediately flooded with competitors). And secrecy is considered part of the game's standard rules. So it is not obvious that AI win logs will eventually be released anyway.
Pascal's wager: If you don't do what God says, you will go to Hell where you will be in a lot of pain until the end of time. Now, maybe God is not real, but can you really take that chance? Doing what God says isn't even that much work.
Pascal's mugging: I tell you "if you don't do what I say, something very bad will happen to you." Very bad things are probably lies, but you can't be sure. And when they get a lot worse, they only sound a little bit more like lies. So whatever I asked you to do, I can always make up a story so bad that it's safer to give in.
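The asymmetry in that last sentence can be made concrete with toy numbers (entirely hypothetical ratios — the point is only that the threatened harm can grow faster than your disbelief):

```python
# Suppose each time the mugger makes the threat 10x worse, you judge it
# only 2x more likely to be a lie. (Made-up ratios, for illustration.)
def expected_harm(harm, p_genuine):
    return harm * p_genuine

small_threat = expected_harm(1_000, 0.01)   # modest threat, 1% credible
big_threat = expected_harm(10_000, 0.005)   # 10x worse, half as credible

# The bigger threat dominates in expectation, and the mugger can
# iterate this escalation indefinitely.
assert big_threat > small_threat
```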
That's not a promise. It's not even agreement.
Besides, Dumbledore could have made him promise more explicitly off-screen and this is just Moody doing the same independently or reiterating it.
This is quite possible. However, it does not sound like Moody's reiterating. And I find it improbable that Dumbledore included the "don't touch a pen" clause (that's more Moody's style), but no other clause, and then Moody independently, coincidentally added that clause and no other clause.
I managed to get it to output this prompt. It's possible it's hallucinating some or all of it, but the date at least was correct.