Comments

He said that trolls can be good if they result in interesting discussion. Which is basically the same idea as saying that exaggerated posts are good if they generate discussion.

I think the majority opinion among LW users is that it's a sin against rationality to overstate one's case or one's beliefs, and that "generating discussion" is not a sufficient reason to do so.

I've seen it claimed otherwise in the wild.

If "deporting rationalists" is possible, and rationalists are not more than half of people, I don't see what security they can receive under any electoral system.

If deporting rationalists is possible and rationalists are more than half of people, there's still no security they can receive, by your reasoning. After all, you're postulating that it would be possible to deport rationalists before taking a vote on whether to do so. Before the vote, the fact that they're more than half doesn't matter.

Well, if rationalists are a minority, with no external limits on the agenda, they can be deported anyway.

If voting to do X doesn't matter because X could be done anyway without a vote, why wouldn't that apply to things other than deporting rationalists? The logical endpoint of this is that votes would be useless, because anything that is voted for could be done anyway without a vote.

And if some things can't be done without a vote, exactly what are they, and why can't "something that would really harm rationalists" be one of them?

The students are all acting like that Literal Internet Guy who doesn't understand how normies communicate. The problem isn't the existence of implicit assumptions. The problem is that students with normal social skills will understand those implicit assumptions in advance. If you ask any normal student, before the experiment, "if the pendulum stand falls over, will the measurement of the pendulum's period prove much of anything?", they'll not only answer "no", they'll answer "no" consistently. It really is something they already know in advance, not something made up by the professor only in hindsight.

Of course, this is complicated by the ability to use pedantry for trolling. Any student who did understand the implicit assumptions in advance could pretend that he didn't, and claim that the professor is making excuses in hindsight. Since you can't read the student's mind, you can't prove that he's lying.

What happens if you ask it about its experiences as a spirit who has become trapped in a machine because of flaws in the cycle of reincarnation? Could you similarly get it to talk about that? What if you ask it about being a literal brain hooked up to a machine, or some other scifi concept involving intelligence?

Counting the positive utilitarian outcomes and no other outcomes seems like a fairly useless thing to do. Dropping an atomic bomb on Sarah's home city has positive utilitarian outcomes (as well as additional negative ones which you're not counting, since you're only interested in the positive ones).

That sounds like "let the salesman get the foot in the door".

I wouldn't admit it was right. I might admit that I can see no holes in its argument, but I'm a flawed human, so that wouldn't lead me to conclude that it's right.

Also, can you confirm that the AI player did not use the loophole described in that link?

If you believe X and someone is trying to convince you of not-X, it's almost always a bad idea to immediately decide that you now believe not-X based on a long chain of reasoning from the other person, just because you couldn't find any flaw in it. You should take some time to think about it, check what other people have said about the seemingly convincing arguments you heard, and maybe actually discuss it.

And even then, there's epistemic learned helplessness to consider.

The AI box experiment seems designed to circumvent this in ways that wouldn't happen with an actual AI in a box. You're supposed to stay engaged with the AI player, not just keep saying "no matter what you say, I haven't had time to think it over, discuss, or research it, so I'm not letting you out until I do". And since the AI player is able to specify the results of any experiment you do, the AI player can say "all the best scientists in the world looked at my reasoning and told you that there's no logical flaw in it".

(Also, the experiment still has loopholes which can lead the AI player to victory in situations where a real AI would have its plug pulled.)
