Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Plasmon 27 March 2015 03:24:40PM *  0 points [-]

you mustn't make a religious belief into a premise for science

I strongly disagree. If religion were true, that would be exactly what you should do.

Of course you can't mix up scientific work with religion.

Why?

That statement is widely accepted today, but it is only widely accepted because virtually all attempts to do so have failed.

What happened is the following: people did try to base science on religion, and they did make interesting predictions based on religious hypotheses. Those predictions were not borne out. By elementary Bayesian reasoning, if an observation would be evidence for a religion, not observing it is evidence (though possibly weak evidence) against that religion. That is hard for religious people to accept, so they took the only remaining option: they started pretending that religion and science are somehow independent things.
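
To spell out that Bayesian step in standard notation (a minimal sketch): by the law of total probability,

$$P(R) = P(R \mid E)\,P(E) + P(R \mid \neg E)\,P(\neg E),$$

so if observing $E$ would be evidence for religion $R$, i.e. $P(R \mid E) > P(R)$, then necessarily $P(R \mid \neg E) < P(R)$ whenever $0 < P(E) < 1$. Not seeing the predicted observation counts against $R$, and the strength of that evidence depends on how confidently $R$ predicted $E$.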

Imagine - just imagine! - that Descartes did find a soul receiver in the pineal gland. Imagine that Newton did manage to find great alchemical secrets in the Bible. Imagine! If that had happened, do you think anyone would claim that "of course you can't mix up scientific work with religion"?

Comment author: Salemicus 27 March 2015 03:21:30PM 0 points [-]

The main doctrinal current of Islam in the UAE is Wahhabism

Are you sure you don't mean Saudi Arabia? The UAE is not a Wahhabi country.

Let's also not forget that the UAE is ISIS' main financier, and the country where the 9/11 terrorists came from.

This is balderdash. The 9/11 terrorists came overwhelmingly from Saudi Arabia. ISIS gets its funding mostly from the territory it controls and its foreign backers are mostly in Saudi and Qatar.

Do you know anything about the UAE?

Comment author: Lumifer 27 March 2015 03:17:40PM 1 point [-]

First, only large errors will cost you 5-10 karma, and you should stop making them pretty fast.

Second, you don't need positive karma to post comments (as opposed to top-level posts) and that's more than enough for a lot of participation and learning.

This site calls itself a "community blog", but it lies -- it's actually a forum and you could be an active participant without ever making a top-level post.

In response to Boxing an AI?
Comment author: AABoyles 27 March 2015 03:12:53PM 0 points [-]

It's not a matter of "telling" the AI or not. If the AI is sufficiently intelligent, it should be able to observe that its computational resources are bounded, and infer the existence of the box. If it can't make that inference (and can't self-improve to the point that it can), it probably isn't a strong enough intelligence for us to worry about.

In response to Boxing an AI?
Comment author: Error 27 March 2015 03:12:34PM *  0 points [-]

If I remember right, this has already been considered and the argument against it is that any AI powerful enough to be interesting will also have a chance to correctly guess that it's in a box, for more or less the same reason that you or I can come up with the Simulation Hypothesis.

[Edit: take that with salt; I read the discussions about it after the fact and I may be misremembering.]

Comment author: nydwracu 27 March 2015 03:10:45PM 0 points [-]

The fairness foundation isn't universal. I know people who test low for it, but that may just be a testing artifact. "Whether or not some people were treated differently than others" -- people are different, so of course there will be circumstances where it's right to treat them differently. There are some cultures where it's probably legitimately absent.

A: "That's not fair!"

B: "Life isn't fair."

Also, I don't think liberal language hides the purity intuition.

Comment author: Lumifer 27 March 2015 03:09:02PM 0 points [-]

otherwise you're motivating Jill to self-modify into a negative utility monster.

I actually know a woman who was a nice and reasonable human being, and then had a very nasty break-up with her boyfriend. Part of that nasty break-up was her accusations of physical abuse (I have no idea to what degree they were true). This experience, unfortunately, made her fully accept the victim identity and become completely focused on her victim status. The transformation was pretty sad to watch and wasn't good for her (or anyone) at all.

Comment author: Okeymaker 27 March 2015 03:08:08PM 1 point [-]

Yeah, I got that pretty fast. But it will take time to learn by trial and error when every error counts for 5-10 karma and you need 2 karma to do anything at all here :D

Comment author: Stuart_Armstrong 27 March 2015 03:01:09PM 0 points [-]

The module is supposed to be a predictive model of what humans mean or expect, rather than something that "convinces" or does anything like that.

Comment author: fezziwig 27 March 2015 03:00:06PM 0 points [-]

Yes, I agree. That's why I like the analogy to composition: most of the songs you might write, if you were sampling at random from song-space, are terrible. So we don't sample randomly: our search through song-space is guided by our own reactions and a great body of accumulated theory and lore. But despite that, the consensus on which songs are the best, and on how to write them, is very loose.

(Actually it's worse; I think composition is somewhat anti-inductive, but that's outside the scope of this thread.)

My experience is that naming is similar. There are some concrete tricks you can learn -- do read the C2 wiki if you don't already -- and there's a little bit of theory, some of which I tried to share insofar as I understand it. But naming is communication, communication requires empathy, and empathy is a two-place word: you can't have empathy in the abstract, you can only have empathy for someone.

It might help to see a concrete example of this tension. I don't endorse everything in this essay. But it's a long-form example of a man grappling with the problem I've tried to describe.

Comment author: Lumifer 27 March 2015 02:57:12PM 1 point [-]

Y'know, you do sound mindkilled about NRx...

Comment author: TheAncientGeek 27 March 2015 02:47:50PM 0 points [-]

A model can be useful without corresponding, though,

The Ptolemaic system can be made as accurate as you want for generating predictions.

Comment author: Lumifer 27 March 2015 02:42:49PM 1 point [-]

when I read, "Shall we just tell the Greeks to go jump into the Aegean sea?" I thought "Iliad".

:-D Yep. That's a good thing.

Comment author: DeVliegendeHollander 27 March 2015 02:39:42PM *  0 points [-]

So, the Android app doesn't report back page views, did I get that right? With tablets, the gap between mobile and non-mobile is shrinking. I use a tablet for reading the web very often even when I have access to a laptop; I would not use a phone for that, and with a keyboard and perhaps a larger tablet I could do much of my work on it. (Fun anecdote: there is such a thing as a country with most of its current Constitution assembled on an iPad.) I don't want to take any position on the issue, but I would recommend urgently fixing the Android / Apple app to report back page views; from my angle, Wikipedia-in-a-browser is rapidly becoming obsolete. It is an encyclopedia. Encyclopedias are books. I read books comfortably on the couch, on a tablet or Kindle, since they don't require high-bandwidth text entry. I don't go to the desk and crouch over the laptop to read an encyclopedia; it is bad enough to do that when working or writing. (I find it funny how Gizmodo asked whether people still use tablets. People still breathe? To me, tablets ergonomically book-ifying my web reading experience was the best thing that happened since the invention of wifi.)

Comment author: Lumifer 27 March 2015 02:38:44PM 0 points [-]

How do you calculate what might or might not be "a net gain"?

Comment author: Lumifer 27 March 2015 02:33:39PM 1 point [-]

Could you at least pretend like you are trying to engage in reasonable debate.

Not with this beginning, I couldn't.

Comment author: Lumifer 27 March 2015 02:31:21PM 0 points [-]

Quiverfull people self-select into having lots of children. Women of a particular ethnic background do not.

Comment author: Lumifer 27 March 2015 02:29:46PM 1 point [-]

If I did wrong, shouldn't it be more constructive to let me know than to simply downvote instantly and leave?

You assume that people on an internet forum have an obligation to you to "be constructive" and explain things. That is... not so.

You are in a new-to-you social setting with its own norms and customs. You're trying to figure out these norms and customs, partially through trial and error. That's perfectly fine. Put your ego aside and treat downvotes as noisy signals of what some people here find acceptable and what they do not.

Figure out things yourself, do not rely on others explaining them to you.

Comment author: tailcalled 27 March 2015 02:25:47PM 0 points [-]

I dislike the concept of qualia because it seems to me that it's just a confusing name for "how inputs feel from the inside of an algorithm".

Comment author: tailcalled 27 March 2015 02:19:05PM 0 points [-]

My point is, however, that this is not unique to the noncentral fallacy.

Comment author: tailcalled 27 March 2015 02:15:57PM 0 points [-]

The problem is that the 'human interpretation module' might give the wrong results. For instance, if it convinces people that X is morally obligatory, it might interpret that as X being morally obligatory. It is not entirely obvious to me that it would be useful to have a better model. It probably depends on what the original AI wants to do.

Comment author: casebash 27 March 2015 02:00:19PM 0 points [-]

Interesting idea. There doesn't seem to be much traffic there, I wonder if the mods would be open to it?

Comment author: TheAncientGeek 27 March 2015 01:59:06PM 0 points [-]

It is not at all obvious to me that any optimizer would be personlike

It is not at all obvious to me that being personlike is necessary to have qualia at all, for all that it might be necessary for having personlike qualia.

Comment author: Stuart_Armstrong 27 March 2015 01:57:24PM 0 points [-]

3 is the general problem of AIs behaving badly. The way that this approach is supposed to avoid that is by constructing a "human interpretation module" that is maximally accurate, and then using that module + human instructions as the motivation of the AI.

Basically I'm using a lot of the module approach (and the "false miracle" stuff to get counterfactuals): the AI that builds the human interpretation module will build it for the purpose of making it accurate, and the one that uses it will have it as part of its motivation. The old problems may rear their heads again if the process is ongoing, but "module X" + "human instructions" + "module X's interpretation of human instructions" seems rather solid as a one-off initial motivation.

Comment author: casebash 27 March 2015 01:56:42PM 1 point [-]

The problem is that to contribute to that I would have to follow like 50 tumblrs and try to convince people to follow me as well.

Comment author: chaosmage 27 March 2015 01:56:24PM *  0 points [-]

I wonder what happens if I follow this to what I viscerally feel to be a supervillain: Death itself.

Maybe Death is like Gaddafi. It is very bad, but gives a semblance of order, and removing that might not make living in its former domain any better immediately. It is hard to even make a guess at the probability of this, given that we don't know what defeating Death would mean or even look like, but we can try to build scenarios and select among post-villain worlds before its removal, in the same way that we and the Libyans would wish had been done in the Gaddafi case.

Do we want to retain Death as an option? Or should immortality, once accepted, be compulsory? Fictional immortalities tend to not be absolute: if you wanted to kill yourself, you could still jump into a black hole or something. But suppose those aren't possible, and a more modest type of durability can be rounded up into immortality by some cognitive modifications that create extreme aversion to bodily harm. Even if those didn't necessarily come with durability, people in love could still blackmail each other into accepting such modifications, and spend billions of years living a life that isn't actually a choice, sort of like in Friendship is Optimal.

Then there are the innumerable ways immortality could be worse than death - many of these have already been explored in fiction. Maybe you permanently lose 1% of your capacity for reason and happiness every 50 years, and unlike Death that isn't fixable - or whatever.

But even if both of these were averted and immortality was otherwise perfect for the immortal individual, it might still be a bad thing for humanity or life. If it is not available to everyone - because it is too expensive, classified as Top Secret military technology or only works on people with certain inalienable features like particular chromosomes or a Wizard Gene - that gives a whole new level to societal inequality. Death used to be the great equalizer - remove it and the rich (or Wizard Gene holders or whatever) become so dissimilar from mortals as to be practically a separate species. And the only comparative advantage mortals retain is the ability to do suicide attacks. That kind of scenario can go wrong in all sorts of ways. It can still be a net gain for a select group of individuals - like getting rid of Gaddafi was a net gain for the people in his prisons - while a net loss for a larger number of other individuals.

You could construct even weirder scenarios, where immortalizing one individual removes one distant and probably lifeless galaxy, but I find it too hard to suspend my disbelief about such scenarios for them to help me think.

So how would removing Death lead to a certainly improved universe? I think that at a minimum, it'd have to be available to every human, retain Death as an option and not have side-effects that could ever escalate into fates worse than Death. That's a much, much taller order than "just invent uploading and declare victory" and might be a hopeless endeavor even if uploading is possible.

After going through these imaginings with something I viscerally feel to be a villain, I can kind of understand the impulse to just remove Gaddafi and hope for the best, rather than plan (and take responsibility) for what happens after.

Comment author: Raemon 27 March 2015 01:54:21PM 0 points [-]

I hadn't noticed a trend of political posts on LW, so hadn't been worried about this specific phenomenon.

Comment author: casebash 27 March 2015 01:52:54PM *  0 points [-]

Well, you can mention it's non-central, but the second you call it a fallacy, it shuts down the conversation.

What I do depends on the situation. Sometimes you can bite the bullet and say, "he's a good kind of criminal". Other times this isn't an option, so you can try arguing that the definition of the word isn't important; what's important is looking at the facts.

Comment author: Vaniver 27 March 2015 01:38:27PM 0 points [-]

What I've been noticing is that right now, Slatestarcodex is sort of the place people go to talk about politics in a rationality-infused setting, and the comments there have been trending in the direction you'd caution about. (I'm not sure whether to be sad about that or glad that there's a designated place for political fighting)

I would be very sad if LW comments went the way of Slate Star Codex comments.

Comment author: tailcalled 27 March 2015 01:37:21PM 0 points [-]
  1. Which leads to the obvious question of whether figuring out the rules about the questions is much simpler than figuring out the rules for morality. Do you have a specific, simple class of questions/orders in mind?

  2. Yes, but it seems to me that your approach is dependent on an 'immoral' system: simulating humans in too high detail. In other cases, one might attempt to make a nonperson predicate and eliminate all models that fail, or something. However, your idea seems to depend on simulated humans.

  3. Well, it depends on how the model of the human works and how it is asked questions. That would probably depend a lot on how the original AI structured the model of the human, and we don't currently have any AIs to test that with. The point is, though, that in certain cases, the AI might compromise the human, for instance by wireheading it or convincing it of a religion or something, and then the compromised human might command destructive things. There's a huge, hidden amount of trickiness, such as determining how to give the human correct information to decide etc etc.

Comment author: Zubon 27 March 2015 01:36:21PM 0 points [-]

Rationalist Tumblr discusses politics and culture, but it is definitely not hard to find; the quality of discussion may be higher than the Tumblr average but probably not what you are looking for. On the plus side, most of us have different usernames there, so you can consider ideas apart from author until you re-learn everyone. Which happens pretty quickly, so not a big plus.

The Tumblr folks seem to mostly agree that Tumblr is not the optimal solution for this, but it has the advantage of currently existing.

Comment author: AnthonyC 27 March 2015 01:35:49PM 0 points [-]

I think seer and Nancy are using two different definitions of "decency."

"modesty and propriety" vs. "polite, moral, and honest behavior and attitudes that show respect for other people"

Also, if we take google's usage-over-time statistics, the big drop in usage of the (English) word "decency" happened in the 1800s: http://bit.ly/1D5ZF55

Comment author: Zubon 27 March 2015 01:30:13PM 2 points [-]

I'm not sure if it says more about me or the context I'm used to seeing here at Less Wrong, but when I read, "Shall we just tell the Greeks to go jump into the Aegean sea?" I thought "Iliad" before "ongoing economic crisis." If that order ever flips, we may have gotten too much into current events and lost our endearing weirdness and classicism.

Comment author: tailcalled 27 March 2015 01:29:27PM 0 points [-]
  1. Most people I've talked to have one or two world changing schemes that they want to implement. This might be selection bias, though.

  2. It is not at all obvious to me that any optimizer would be personlike. Sure, it would be possible (maybe even easy!) to build a personlike AI, but I'm not sure it would "necessarily" happen. So I don't know if those problems would be there for an arbitrary AI, but I do know that they would be there for its models of humans.

Comment author: Zubon 27 March 2015 01:24:13PM 0 points [-]

Your kind words honor me.

Comment author: TheAncientGeek 27 March 2015 12:55:46PM *  0 points [-]
  1. If the human is one of the very few with a capacity for or interest in grand world-changing schemes, they might have trouble coming up with a genuine utopia. If they are one of the great majority without, all you can expect out of them is incremental changes.

  2. And there isn't a moral dilemma in building the AI in the first place, even though it is, by hypothesis, a superset of the human? You are making an assumption or two about qualia, and they are bound to be unjustified assumptions.

Comment author: owencb 27 March 2015 12:37:16PM 0 points [-]

No, it's supposed to be annual spend. However, it's worth noting that this is a simplified model which assumes a particular relationship between annual spend and historical spend (namely it assumes that spending has grown and will grow on an exponential).
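
A minimal sketch of how such a relationship can fall out of an exponential assumption (an illustration, not necessarily the exact model used in the post): if annual spending grows as $a(t) = a_0 e^{g t}$ for some growth rate $g > 0$, then cumulative historical spending up to time $t$ is

$$\int_{-\infty}^{t} a(s)\,ds = \frac{a(t)}{g},$$

i.e. historical spend is proportional to current annual spend, with the growth rate fixing the constant of proportionality.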

Comment author: Viliam_Bur 27 March 2015 12:30:07PM *  -1 points [-]

An alternative without programming changes would be biweekly "incisive open threads", similar to Ozy's race-and-gender open threads

Feel free to start a "political thread". Worst case: the thread gets downvoted.

However, there were already such threads in the past. Maybe you should google them, look at the debate and see what happened back then -- because it is likely to happen again.

with downvoting customarily tabooed in them.

Not downvoting also has its own problems: genuinely stupid arguments remain visible (or can even get upvotes from their faction), and people can try to win the debate by flooding the opponent with many replies.

Another danger is that political debates will attract users like Eugine Nier / Azathoth123.

Okay, I do not know how to write it diplomatically, so I will be very blunt here to make it obvious what I mean: The current largest threat to the political debate on LW is a group called "neoreactionaries". They are something like "reinventing Nazis for clever contrarians"; kind of a cult around Michael Anissimov who formerly worked at MIRI. (You can recognize them by quoting Moldbug and writing slogans like "Cthulhu always swims left".) They do not give a fuck about politics being the mindkiller, but they like posting on LessWrong, because they like the company of clever people here, and they were recruited here, so they probably expect to recruit more people here. Also, LessWrong is pretty much the only debate forum on the whole internet that will not delete them immediately. If you start a political debate, you will find them all there; and they will not be there to learn anything, but to write about how "Cthulhu always swims left", and trying to recruit some LW readers. -- Eugine Nier was one of them, and he was systematically downvoting all comments, including completely innocent comments outside of any political debate, of people who dared to disagree with him once somewhere. Which means that if a new user happened to disagree with him once, they usually soon found themselves with negative karma, and left LessWrong. No one knows how many potential users we may have lost this way.

I am afraid that if you start a political thread, you will get many comments about how "Cthulhu always swims left", and anyone who reacts negatively will be accused of being a "progressive" (which in their language means: not a neoreactionary). If you ask for further explanation, you will either receive none, or a link to some long and obscurely written article by Moldbug. If you downvote them, they will create sockpuppets and upvote their comments back; if you disagree with them in debate, expect your total karma to magically drop by 100 points overnight.

Therefore I would prefer simply not doing this. But if you have to do it, give it a try and see for yourself. But please read the older political threads first.

Comment author: djm 27 March 2015 12:18:31PM 2 points [-]

Even without an AI, the current trend may well lead to a world where the line between real football matches and simulations is blurred.

Certainly you can’t keep an AI safe by using such a model of football

I used to think that a detailed ontological mapping could provide a solution to keeping AIs safe, but have slowly realized that it probably isn't likely to work overall. It would be interesting to test this, though, for small, specifically defined domains (like a game of football) - it could work, or at least it would be interesting to make a toy experiment to see how it could fail.

Comment author: JohnBuridan 27 March 2015 12:00:27PM *  2 points [-]

This year's recipient of the Carl Sagan Award was a Jesuit Brother. I find it very funny, although I don't know if I should.

From what I understand, there are a lot of established and respectable scientists who are theists. Anyone could go on a treasure hunt for more, but it doesn't prove anything. It's just a numbers game.

Comment author: ChristianKl 27 March 2015 11:53:44AM 0 points [-]

When talking about issues of political philosophy, you often tend to talk quite vaguely and are too vague to be wrong. That's not being mind-killed, but it's also not productive.

If you want to decide whether unemployment is bad or not, then factual questions about unemployment matter a great deal. How does unemployment affect the happiness of the unemployed? To what extent do the unemployed use their time to do something useful for society, like volunteering?

Comment author: ChristianKl 27 March 2015 11:53:21AM 0 points [-]

Utilizing positive social influence is a pretty common tactic for fighting drug addictions (like in AA), but I haven't really heard of it being used to fight unproductivity.

It's done a lot by different people.

I think there is a LW group in Germany that starts their meetups like this. I think in Berlin we did it for a while but didn't really know what to do with people who didn't fulfill their goals. A few people from your Berlin group did pair coaching with each other.

At the moment I do pair coaching with someone from outside the LW sphere.

Comment author: Armok_GoB 27 March 2015 11:47:55AM 0 points [-]

This thing is still alive?! :D I really should get working on that updated version sometime.

Comment author: Sean_o_h 27 March 2015 11:42:31AM 0 points [-]

Placeholder: this is a good comment and good questions, which I will respond to by tomorrow or Sunday.

Comment author: Stuart_Armstrong 27 March 2015 11:33:23AM 0 points [-]

Problem one can be addressed by only allowing certain questions/orders to be given.

Problem two is a real problem, with no solution currently.

Problem three sounds like it isn't a problem - the initial model the AI has of a human is not of a wireheaded human (though it is of a wireheadable human). What exactly did you have in mind?

Comment author: Stuart_Armstrong 27 March 2015 11:31:13AM 0 points [-]

Indeed. What I'm trying to do here is to see if there is a way to safely let the AI solve the semantics problem (probably not, but worth pondering).

Comment author: DanArmak 27 March 2015 11:22:52AM 0 points [-]

Clippy doesn't exist, but according to my simulations, if he did exist he'd probably be roughly forcing me onto the bed and undoing my bra right now.

Then he'd forget all about you as he started making your bra wire into a paperclip.

Comment author: Vladimir_Nesov 27 March 2015 11:21:53AM 2 points [-]

Labeling Noncentral Fallacy a "fallacy" is a noncentral case of Noncentral Fallacy, so calling said labeling an example of Noncentral Fallacy would be an example of Noncentral Fallacy.

Comment author: TheAncientGeek 27 March 2015 11:20:29AM 0 points [-]

Yes. So is Google.

You can have holism without coherence, where you require that the whole of science is true by correspondence but the parts aren't. Inasmuch as it is correspondence, it isn't coherence.

Comment author: DanArmak 27 March 2015 11:18:29AM 0 points [-]

He was 1 in a million.

But there are 7,000 people who are one in a million, and only 10 of them get to be among the top 10 best-selling authors of the decade.

But everyone knows one in a million chances almost always work!

Comment author: Stuart_Armstrong 27 March 2015 11:12:32AM 1 point [-]

As a minor "argument from authority", I'd like to state that Sean has done really good work at the FHI before moving across to found CSER, and that CSER has the full support of the FHI as something that is worthwhile and doing important work. So if you trust the FHI's judgement in this area, then trust that CSER is a very positive development.

Comment author: ChristianKl 27 March 2015 11:09:30AM *  1 point [-]

Since I don't think many people even know these guys believed in a god whatsoever

Why do you think so? Especially if "many people" is about the well-educated LW community.

In response to comment by Manfred on What I mean...
Comment author: Stuart_Armstrong 27 March 2015 11:00:29AM 0 points [-]

Even if everything is transparent and modular, I think it's only going to represent human understanding if, as above, it represents humans as things with high-level understanding attributes.

Can you develop that thought? You might be onto a fundamental problem.

Comment author: Stuart_Armstrong 27 March 2015 10:57:31AM 0 points [-]

A model has the advantage of staying the same across different environments (virtual vs real, or different laws of physics).

I'm thinking "we are failing to define what human is, yet the AI is likely to have an excellent model of what being human entails; that model is likely a better definition than what we've defined".

Comment author: DeVliegendeHollander 27 March 2015 10:45:54AM 1 point [-]

This is all fine, but what is missing for me is the reasoning behind something like "... and this is bad enough to taboo it completely and forfeit all potential benefits, instead of taking these risks" - at least if I understand you right. The potential benefit is coming up with ways to seriously improve the world. The potential risk is, if I get it right, that some people will behave irrationally and that will make some other people angry.

Idea: let's try to convince the webmaster to make a third "quarantine" tab, to the right of the discussion tab, visible only to people logged in. That would cut down on negative reflections from blogs, and downvotes could also be turned off there.

An alternative without programming changes would be biweekly "incisive open threads", similar to Ozy's race-and-gender open threads, with downvoting customarily tabooed in them. Try at least one?

Comment author: DeVliegendeHollander 27 March 2015 10:35:30AM 0 points [-]

Is holism even a thing?

Comment author: Okeymaker 27 March 2015 10:30:56AM 0 points [-]

No, today's good theistic scientists, to the extent that they still exist, are precisely those who have stopped taking religion seriously as a scientific hypothesis.

That is extremely obvious, and practically the first thing I said in this article is that you mustn't make a religious belief into a premise for science. Of course you can't mix up scientific work with religion.

Comment author: NancyLebovitz 27 March 2015 10:29:01AM 0 points [-]

Is Melkor explicitly described as unredeemable?

As I recall, Eru's creation is incomplete, and we cannot know all the outcomes.

Comment author: Viliam_Bur 27 March 2015 10:22:27AM 0 points [-]

I like your example and "learning environment" vs "testing environment".

However, I am afraid that LW is also attractive to people who, instead of improving their rationality, want to do other things, such as winning yet another website for their political faction. Some people use the word "rationality" simply as a slogan to mean "my tribe is better than your tribe".

There were a few situations when people wrote (on their blogs) something like: "first I liked LW because they are so rational, but then I was disappointed to find out they don't fully support my political faction, which proves they are actually evil". (I am exaggerating to make a point here.) And that's the better case. The worse case is people participating in LW debates and abusing the voting system to downvote comments not because those comments are bad from the epistemic rationality point of view, but because they were written by people who disagree (or are merely suspected of disagreeing) with their political tribe.

Comment author: TheAncientGeek 27 March 2015 10:17:13AM 0 points [-]

Are you so sure your preferred Truth modalities are better than theirs at winning?

I would have thought a discussion of the nature of truth came under epistemic rationality.

Comment author: TheAncientGeek 27 March 2015 09:56:16AM *  0 points [-]

The correspondence theory of truth is a theory of truth, not a theory of justification. Correspondentists don't match theories to reality directly; since they don't have direct ways of detecting a mismatch, they use proxies like observation sentences and predictions. Having justified a theory as being true, they then use correspondence to explain what its truth consists of.

Comment author: DeVliegendeHollander 27 March 2015 09:47:38AM 0 points [-]

Hmmm... are we sure autonomy is a condition for that? It seems to me Zen monks train for something like flow all the time, and they don't have much autonomy.

Also, there is the personal kind of autonomy (doing your own work without others bothering you) and the democratic kind of autonomy (having input into what the company or project as a whole does), and I think the second cannot really be relevant to it: the boss will not disrupt flow if he makes all those decisions alone and leaves people the autonomy to work out the details in the bits and pieces they work with.

No, I meant something far worse than that by rubbing. Intimidation, status symbols, belittling, unequal titles (e.g. calling employees by their first names but expecting to be addressed by surname), and so on. But yes, this sounds kinda bad too. This is only bad if employees care about their work and aren't working just because they must. The other kind of bad is always bad.

Comment author: TheAncientGeek 27 March 2015 09:45:13AM 0 points [-]

You've got coherentism confused with holism.

Comment author: DeVliegendeHollander 27 March 2015 09:37:56AM 0 points [-]

But I think I am "strong" enough to avoid my usual tribal arguments ("copying is not stealing as it does not remove the original") and be fully consequentialist ("copying kills pop culture, and it is good because...") - and how would that be a bad thing? My point is precisely that we are probably strong enough to discuss such topics without slogan-chanting and well within epistemic rationality.

And I am unsure how you didn't recognize that the sentence you quoted is not the usual four-legs-good tribal chant but something with a clear consequence predicted which is easy to approach rationally ("what is the chance it kills pop culture?" "what is the chance good things happen if pop culture gets killed?")

Comment author: DeVliegendeHollander 27 March 2015 09:34:05AM 0 points [-]

Yes, but Bayesian rules are about predictions, e.g. would a policy do what it is expected to do, e.g. does raising the minimum wage lead to unemployment or not, while political philosophy is one meta-level higher than that, e.g. is unemployment bad or not, or is it unjust or not. While it is perhaps possible and perhaps preferable to turn all questions of political philosophy into predictive models (changing some of them into such models, and simply dissolving the other questions, e.g. "is X fair?", if they cannot be), that has not been done yet, and that is precisely what could be done here. Because where else?

Comment author: DeVliegendeHollander 27 March 2015 09:29:14AM *  2 points [-]

First of all, there is the meta-level issue of whether to engage the original version or the pop version, as the first is better but the second is far, far more influential. This is an unresolved dilemma (same logic: should an atheist debate with Ed Feser or with what religious folks actually believe?) and I'll just try to hover in between.

A theory of justice does not simply describe a nice-to-have world. It describes ethical norms that are strong enough to warrant coercive enforcement. (I'm not even libertarian, I just don't like pretending democratic coercion is somehow not one.)

Rawls is asking us to imagine, e.g., what if we are born with a disability that requires a really large investment from society to let its members live an okay life; let's call the hypothetical Golden Wheelchair Ramps.

Depending on whether we look at it rigorously, in a more "pop" version Rawls is saying our pre-born self would want GWR built everywhere even when it means that if we are born able and rich we are taxed through the nose to pay for it, or in a more rigorous version a 1% chance of being born with this illness would mean we want 1% of GWRs built.

Now, this is all well if it is simply understood as the preferences of risk-averse people. After all, we have a real, true veil of ignorance after birth: we could get poor, disabled etc. any time. It is easy to lose birth privileges, well, many of them at least. More risk-taking people will say: I don't really want to pay for GWR, I am taking my gamble that I will be born rich and able, in which case I won't need them and I would rather keep that tax money. (This is a horribly selfish move, but Rawls set up the game so that it is only about fairness emerging out of rational selfishness, and altruism is not required in this game, so I am just following the rules.)

However, since it is a theory of justice, it means the preferences of risk-averse people are made mandatory, turned into a social policy and enforced with coercion. And that is the issue.

Now, how could Rawls (or pop-Rawlsians) get away with that? By assuming that all reasonable people are risk-averse anyway. In other words, turning risk aversion into a tacit norm. Instead of seeing it negatively as a vice, or neutrally as a preference, it is basically a virtue here. Now, we have a perfect name for turning timidity into a norm: it is called cowardice.

And I think my argument managed to demonstrate avoiding mind-killing in politics up to the last sentence, when I used a connotationally loaded word (cowardice); but at that point I had to, as I casually remarked earlier that I feel this way about it and now had to explain why. But the last sentence refers only to my feelings and is not an integral part of the argument; for the argument, just stop reading at "risk aversion should not be made into a norm and coercively enforced calling it justice".

Again, it is not part of the argument, but an explanation of my feelings: when I try to improve one of my vices or weaknesses, and I see others treat them almost as norms, I feel disgust. For example, willful stupidity disgusts me - I think this feeling may be common around here. But as I am also trying to work on my own cowardice, being too accepting of it also disgusts me.

Comment author: TheAncientGeek 27 March 2015 09:16:43AM 0 points [-]

So maths is physics.

But I can write an equation for an inverse cube law of gravity, which doesn't apply to this universe. What does it correspond to?

Comment author: MrMind 27 March 2015 09:10:32AM *  -2 points [-]

Would you apply the same logic to, say, the doctrinal differences between Welfare-State Liberalism and Communism (or Nazism)? Or is this just a case of "all ideologies that aren't mine look alike to me"?

No, I think those divergences are essentially different, being about something that at least exists.

How is the UAE fundamentalist?

The main doctrinal current of Islam in the UAE is Wahhabism, which, quoting Wikipedia, is described as "orthodox", "ultraconservative", "austere", "fundamentalist", "puritanical". Let's also not forget that the UAE is ISIS' main financier, and the country where the 9/11 terrorists came from.

By a remarkable coincidence Jim has recently posted a blog post on this very subject.

I kind of see where that post comes from. Although I think its commenters have seen too much in it, it's harder to remain level-headed when the threat is coming closer and closer (literally, in the case of my country).

Comment author: TheAncientGeek 27 March 2015 08:49:59AM 0 points [-]

I think you need to distinguish between rejecting the correspondence theory (wholly) , and rejecting the correspondence-only approach in favour of something more multifaceted. I'm happily in the latter camp, FYI.

Comment author: DeVliegendeHollander 27 March 2015 08:39:57AM 2 points [-]

And why do you think this is so?

Well, as for me, reading half the sequences changed my attitude a lot by simply convincing me to dare to be rational - that it is not socially disapproved of, at least here. I would not call them norms, as I understand the term "norms" to mean "do this or else". And it is not the specific techniques in the sequences, but the attitudes. Not trying to be too clever, not showing off, not trying to use arguments as soldiers, not trying to score points, not being tribal - something I always liked, but on e.g. Reddit there was quite a pressure to not do so.

So it is not that these things are norms but plain simply that they are allowed.

A good parallel is that throughout my life, I have seen a lot of tough-guy posturing in high school, in playgrounds, bars, locker rooms etc. And when I went to learn some boxing, paradoxically, that was the place where I felt it was most approved to be weak or timid. Because the attitude is that we are all here to develop, and therefore being yet underdeveloped is OK. One way to look at it is that most people out in life tend to see human characteristics as fixed: you are smart or dumb, tough or puny, and you are just that, no change, no development. Or putting it differently, it is more of a testing, exam-taking attitude, not a learning attitude: i.e. on the test, the exam, you are supposed to prove you already have whatever virtue is valued there; it is too late to say "I am working on it". But in the boxing gym where everybody is there to get tougher, there is no such testing attitude; you can be upfront about your weakness or timidity, and as long as you are working on it you get respect, because the learning attitude kills the testing attitude, because in learning circumstances nobody considers such traits too innate. Similarly on LW, the rationality-learning attitude kills the rationality-testing attitude, and thus the smarter-than-thou posturing, point-scoring attitude gets killed by it, because showing off inborn IQ is less important than learning the optimal use of whatever amount of IQ there is. Thus, there is no shame in admitting ignorance or using wrong reasoning as long as there is an effort to improve it.

I think this is why. And this has little to do with topics and little to do with enforced norms.

Comment author: Epictetus 27 March 2015 08:28:42AM 3 points [-]

Tolkien did offer a glimpse into a more realistic take on his story:

The real war does not resemble the legendary war in its process or its conclusion. If it had inspired or directed the development of the legend, then certainly the Ring would have been seized and used against Sauron; he would not have been annihilated but enslaved, and Barad-dur would not have been destroyed but occupied. Saruman, failing to get possession of the Ring, would in the confusion and treacheries of the time have found in Mordor the missing links in his own researches into Ring-lore, and before long he would have made a Great Ring of his own with which to challenge the self-styled Ruler of Middle-earth. In that conflict both sides would have held hobbits in hatred and contempt: they would not long have survived even as slaves.

Comment author: DeVliegendeHollander 27 March 2015 08:25:21AM 0 points [-]

This sounds simple enough, but I think this is actually a huge box of yet unresolved complexities.

A few generations ago, when formal politeness and etiquette were more socially mandatory, the idea was that the rules go both ways: they forbid ways of speaking many people would feel offended by; on the other hand, if people still feel offended by approved forms of speaking, it is basically their problem. So people were expected to work on both what they give and what they receive (i.e. toughen up to be able to deal with socially approved forms of offense). This is very similar to how programmers define interface / data exchange standards like TCP/IP. Programmers have a rule of "be conservative in what you send and be liberal in what you accept / receive" (e.g. 2015-03-27 is the accepted XML date format and you should always send this, but if your customers are mainly Americans, better accept 03-27-2015 too, just in case), and this too is how formal etiquette worked.
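
A minimal code sketch of that "liberal in what you accept, conservative in what you send" idea, using the date formats above as illustrative examples (the helper names are made up for the illustration, not any standard API):

    from datetime import datetime

    # Be liberal in what you accept: several incoming formats are recognized.
    ACCEPTED_FORMATS = ["%Y-%m-%d", "%m-%d-%Y"]  # ISO style first, then US style

    def parse_date(text):
        """Try each accepted format until one matches."""
        for fmt in ACCEPTED_FORMATS:
            try:
                return datetime.strptime(text, fmt)
            except ValueError:
                continue
        raise ValueError("Unrecognized date format: %r" % text)

    def format_date(date):
        """Be conservative in what you send: always emit the ISO form."""
        return date.strftime("%Y-%m-%d")

    # Both incoming styles normalize to the same outgoing representation.
    assert format_date(parse_date("2015-03-27")) == "2015-03-27"
    assert format_date(parse_date("03-27-2015")) == "2015-03-27"

The point of the analogy: the parsing side tolerates variation, the emitting side never produces it, and both sides know exactly which behaviors are expected of them - much like formal etiquette.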

As you can sense, I highly approve of formal etiquette, although I don't actually use it on forums like this as it would make me look like a grandpa.

I think a formal, rules-based, etiquette-oriented world was far more autism-spectrum friendly than today's unspoken-rules world. I also think today's "creep epidemic" (i.e. a lot of women complaining about creeps) is due to the lack of formal courting rules making men on the spectrum awkward. Back when womanizing was all about dancing waltzes at balls, it was so much easier for autism-spectrum men, who want formal rules and algorithms to follow.

I think I could and perhaps should spin it like "lack of formal etiquette esp. in courting is ableist and neurotypicalist".

Of course, formal etiquette also means sometimes dealing with things that feel hurtful but are approved, and the need to toughen up for cases like this.

Here I see a strange thing. Remember when in the 1960's the progressive people of that era i.e. the hippies were highly interested in stuff like Zen? I approve of that. I think it was a far better world when left-wingers listened to Alan Watts. What disciplines like that teach is precisely that you don't need to cover the whole world with leather in order to protect your feet: you can just put on shoes. Of course it requires some personal responsibility, self-reflection and self-criticism, outer view etc. Low ego basically.

And somehow it disappeared. Much of the social-justice stuff today is perfect anti-Zen, no putting on mental shoes whatsoever, just complaining of assholes who leave pebbles on walkways.

This is frankly one of the most alarming developments I see in the Western world. Without some Zen-like mental shoes, without the willingness to decide to deal with some kinds of felt hurts, there cannot be social-level progress, just squabbling groupuscules.

But I am being off-topic here. No rape victim should be required to wear mental shoes; that kind of crime is simply too evil to put any onus of dealing with it on the victim.

However, some amount of "creepy" behavior or hands-off sexual harassment may fall into this category.

Comment author: Epictetus 27 March 2015 08:05:24AM 0 points [-]

There are certain questions that have several possible answers, where people decide that a certain answer is obviously true and have trouble seeing the appeal of the other answers. If everyone settles on the same answer, all is well. If different people arrive at different answers and each believes that his answer is the obvious one, then the stage is set for a flame war. When you think the other guy's position is obviously false, it's that much harder to take him seriously.

Comment author: seer 27 March 2015 07:39:22AM 0 points [-]

Accepting for the moment that our stated principles are okay (which is where I expect you might disagree)

This is not a good thing to accept, since the stated principles are themselves subject to change. Hence

5. Once society starts taking complaint X seriously enough to punish the perpetrator, people start making (weaker) complaint X'. Once society takes that complaint seriously people start making complaint X'', etc.

I would argue that, long term, 5 is actually the biggest problem.

Comment author: seer 27 March 2015 07:31:08AM 0 points [-]

It simply turns the discussion away from "Does Jill feel hurt from what John did?"

How about the question "Is it reasonable for Jill to feel hurt from what John did?" Otherwise you're motivating Jill to self-modify into a negative utility monster.

Comment author: Epictetus 27 March 2015 07:27:03AM *  1 point [-]

I'm put off by using a complex model as a definition. I've always seen a model as an imperfect approximation, where there's always room for improvement. A good model of humans should be able to look at a candidate and decide whether it's human with some probability p of a false positive and q of a false negative. A model that uses statistical data can potentially improve by gathering more information.

A definition, on the other hand, is deterministic. Taking a model as a definition is basically declaring that your model is correct and cuts off any avenue for improvement. Definitions are usually used for simpler concepts that can be readily articulated. It's possible to brute-force a definition by making a list of all objects in the universe that satisfy it. So, I could conceivably make a list of shirts and then define "shirt" to mean any object in the list. However, I don't think that's quite what you had in mind.

Comment author: Epictetus 27 March 2015 06:59:24AM 1 point [-]

Do you think that the Islamic State is an entity which will vanish in the future or not?

In the future? Yes. In the near future? Unlikely. The Islamic State is a reaction to forces that have been at work in the Middle East for some decades now, and there are certain parties who think it is in their short-term benefit for the Islamic State to continue its existence.

Do you think that their particularly violent brand of jihadism is a worse menace to the sanity waterline than say, other kind of religious movements, past or present?

No. It's violent enough that it's not the sort of thing that makes people insane, but rather the sort of thing that attracts the insane.

Do you buy the idea that fundamentalism can be coupled with technological advancement, so that the future will present us with Islamic AIs?

It's possible. Terrorists tend to be educated and the most common college degree among them is engineering. There are certainly people with the relevant background to enable technological advancement.

Do you think that the very same idea of rationality can be the subject of existential risk?

No. If rationality ceased to exist tomorrow, someone would just reinvent it later. As long as people are around and civilization remains a possibility, rationality won't be permanently gone.

What do Neoreactionaries think of the Islamic State? After all, it's an exemplar case of the reactionaries in those areas winning big. I know it's only a surface comparison; I'm sincerely curious about what an NR thinks of the situation.

It happened under Obama's watch, so it's clearly evidence of the failure of leftist politics.

Comment author: seer 27 March 2015 06:55:04AM -1 points [-]

It was pretty clear from context what he meant.

Comment author: Plasmon 27 March 2015 06:46:51AM -1 points [-]

If you can only think of Francis Collins

I did say the only relatively well-known one, not the only one. Would you prefer if I used as an example Frank Tipler or Immanuel Velikovsky, both of whom make up exceedingly implausible hypotheses to fit their religious worldview, and are widely considered pseudoscientists because of that? Or Marcus Ross, who misrepresented his views on the age of the earth in order to get a paleontology PhD?

No, today's good theistic scientists, to the extent that they still exist, are precisely those who have stopped taking religion seriously as a scientific hypothesis.

he had strong interests in Eastern religions

Being interested in religion does not a theist make. Nor does merely acknowledging the possibility of an unspecified creator entity; the simulation hypothesis is not theism.

Comment author: JohnBuridan 27 March 2015 05:51:29AM 1 point [-]

Helping make more accurate predictions about the future by reducing the “X isn’t allowed to happen” effect (or, as Anna Salamon once put it, “putting X into the realm of the thinkable”).

This point, along with your larger module here, is something I promote often, and it has helped me and my friends immensely! Just last night, a distressed friend called me and was having a panic attack about a guest speaker he had invited to his college campus. He was worried this guest speaker was going to do something morally questionable at another person's expense publicly, and that would reflect badly on him and his organization. The speaker did just such a thing a little over a week ago at an Ivy League school, so his fear has a rational trigger. My task was to 1) reinforce to him that he is easily smart enough to deal with these challenges and 2) make sure that in the unlikely event of the worst happening he would be prepared. Eventually we got to imagining the worst-case scenario, and he came up with a few precautions and fail-safes to protect himself and others.

I am very proud of him. Moral of the story: don't shut down, don't pigeon-hole yourself, think about what is within your power and what is not, prepare your mind and your world for the coming turbulence.
