Comment author: Bugmaster 15 April 2012 08:28:04AM 3 points [-]

I agree with pretty much everything you said (except for the sl4 stuff, because I haven't been a part of that community and thus have no opinion about it one way or another). However, I do believe that LW can be the place for both types of discussions -- outreach as well as technical. I'm not proposing that we set the barrier to entry at zero; I merely think that the guideline, "you must have read and understood all of the Sequences before posting anything" sets the barrier too high.

I also think that we should be tolerant of people who disagree with some of the Sequences; they are just blog posts, not holy gospels. But it's possible that I'm biased in this regard, since I myself do not agree with everything Eliezer says in those posts.

Comment author: Zetetic 15 April 2012 03:25:33PM 2 points [-]

Disagreement is perfectly fine by me. I don't agree with the entirety of the sequences either. It's disagreement without looking at the arguments first that bothers me.

In response to comment by [deleted] on Our Phyg Is Not Exclusive Enough
Comment author: Bugmaster 15 April 2012 02:52:41AM *  0 points [-]

I don't see why having the debate at a higher level of knowledge would be a bad thing.

Firstly, a large proportion of the Sequences do not constitute "knowledge", but opinion. It's well-reasoned, well-presented opinion, but opinion nonetheless -- which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren't in the sequences, that's fun too. Secondly:

Imagine watching a debate between some uneducated folks about whether a tree falling in a forest makes a sound or not. Not very interesting.

No, it's not very interesting to you and me, but to the "uneducated folks" whom you dismiss so readily, it might be interesting indeed. Ignorance is not the same as stupidity, and, unlike stupidity, it's easily correctable. However, kicking people out for being ignorant does not facilitate such correction.

The point of my post was that that is not an acceptable solution.

What's your solution, then? You say,

I for one would like us to all enforce a little more strongly that people read the sequences and even agree with them in a horrifying manner. You don't have to agree with me, but I'd just like to put out there as a matter of fact that there are some of us that would like a more exclusive LW.

To me, "more exclusive LW" sounds exactly like the kind of solution that doesn't work, especially coupled with "enforcing a little more strongly that people read the sequences" (in some unspecified yet vaguely menacing way).

Comment author: Zetetic 15 April 2012 07:56:17AM 2 points [-]

Firstly, a large proportion of the Sequences do not constitute "knowledge", but opinion. It's well-reasoned, well-presented opinion, but opinion nonetheless -- which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren't in the sequences, that's fun too. Secondly:

Whether the sequences constitute knowledge is beside the point - they constitute a baseline for debate. People should be familiar with at least some previously stated well-reasoned, well-presented opinions before they try to debate a topic, especially when we have people going to the trouble of maintaining a wiki that catalogs relevant ideas and opinions that have already been expressed here. If people aren't willing or able to pick up the basic opinions already out there, they will almost never be able to bring anything of value to the conversation, especially on topics discussed here that lack sufficient public exposure to ensure that at least the worst ideas have been weeded out of the minds of most reasonably intelligent people.

I've participated in a lot of forums (mostly freethought/rationality forums), and by far the most common cause of poor discussion quality among all of them was a lack of basic familiarity with the topic and the rehashing of tired, old, wrong arguments that pop into nearly everyone's head (at least for a moment) upon considering a topic for the first time. This community is much better than any other I've been a part of in this respect, but I have noticed a slow decline in this department.

All of that said, I'm not sure if LW is really the place for heavily moderated, high-level technical discussions. It isn't sl4; outreach and community building really outweigh the more technical topics, and (at least as long as I've been here) this has steadily become more and more the case. However, I would really like to see the sort of site the OP describes (something more like sl4) as a sister site (or if one already exists I'd like a link). The more technical discussions and posts, when they are done well, are by far what I like most about LW.

In response to against "AI risk"
Comment author: AlphaOmega 12 April 2012 05:48:26PM *  -1 points [-]

I am going to assert that the fear of unfriendly AI over the threats you mention is a product of the same cognitive bias which makes us more fascinated by evil dictators and fictional dark lords than more mundane villains. The quality of "evil mind" is what really frightens us, not the impersonal swarm of "mindless" nanobots, viruses or locusts. However, since this quality of "mind," which encapsulates such qualities as "consciousness" and "volition," is so poorly understood by science and so totally undemonstrated by our technology, I would further assert that unfriendly AI is pure science fiction which should be far down the list of our concerns compared to more clear and present dangers.

Comment author: Zetetic 13 April 2012 05:39:27AM 1 point [-]

I'm going to assert that it has something to do with who started the blog.

Comment author: lukeprog 10 April 2012 12:32:15AM 2 points [-]

Hours are currently tracked on the honor system, but it's all pretty visible work, so if 2 hours are logged but I don't see any changes to the Google doc where the researcher is tracking their research efforts, I'll have questions.

Work can be done during any hours of the day. Almost all correspondence is by email.

The sample list of subjects is even broader than all the subjects mentioned somewhere on this page.

Comment author: Zetetic 10 April 2012 01:27:21AM 1 point [-]

The sample list of subjects is even broader than all the subjects mentioned somewhere on this page.

In that case I'm a bit unclear about the sort of research I'd be expected to do were I in that position. Most of those subjects are very wide open problems. Is there an expectation that some sort of original insights be made, above and beyond organizing a clear overview of the relevant areas?

Comment author: Zetetic 09 April 2012 10:02:44PM 5 points [-]

I think it might help if you elaborate on the process some: How are hours tracked? Is it done by the honor system or do you have some software? Will I need to work at any specific times of the day, or do I just need to be available for at least 20 hours? Is there a sample list of subjects?

Either way, I'll probably send in an application and go from there. I currently tutor calculus online for approximately the same pay, but this seems somewhat more interesting.

Comment author: Sniffnoy 29 March 2012 06:48:39AM 1 point [-]

Link nitpick: When linking to arXiv, please link to the abstract, not directly to the PDF.

Comment author: Zetetic 29 March 2012 05:21:50PM 0 points [-]

Fixed

Comment author: Zetetic 28 March 2012 11:28:40PM *  0 points [-]

I posted this article to the decision theory group a moment ago. It seems highly relevant to thinking concretely about logical uncertainty in the context of decision theory, and provides what looks to be a reasonable metric for evaluating the value of computationally useful information.

ETA: plus there is an interesting tie-in to cognitive heuristics/biases.

In response to Defeating Ugh Ideas
Comment author: Dmytry 25 March 2012 10:10:10AM *  0 points [-]

I think the converse is also true: the LW community has an ugh field around ideas that are generally acceptable and reasoned about elsewhere. For example, the issues related to AI risks. Fear is a powerful ugh field generator.

In response to comment by Dmytry on Defeating Ugh Ideas
Comment author: Zetetic 26 March 2012 12:08:07AM *  1 point [-]

The original article and usual use of "Ugh Field" (in the link at the top of the post) is summarized as:

Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have; we call it an "Ugh Field". The Ugh Field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs.

I agree that LW has Ugh Fields, but I can't see how AI risks is one. There may be fear associated with AI risks here but that is specifically because it is a major topic of discussion here. Fear may impede clear thinking, sure, but this particular case doesn't seem to fit into the notion of Ugh Field.

I think the confusion stems from the definition in the post being much too loose:

Ugh Fields are internal negative reactions that occur before the conscious mind has an opportunity to process the information, often resulting in less than optimal decision making.

If you want to take a look at possible LW Ugh Fields, I'd take a look at user:Will_Newsome's posts.

Comment author: Vaniver 23 March 2012 07:46:25PM 17 points [-]

Or if I were to try the existence of God, I predict half the population would say it's 100% certain God exists, and the other half would say the opposite.

That sounds like a massive overestimate of the percentage of atheists.

Comment author: Zetetic 23 March 2012 10:04:22PM *  7 points [-]

Not to mention a massive underestimation of intermediate positions: the doubting faithful, agnostics, people with consciously chosen, reasonable epistemologies, and so on. That prediction sets their number to 0. I've met plenty of more liberal theists who didn't assert 100% certainty.

Comment author: [deleted] 22 March 2012 11:18:31PM 4 points [-]

It comes from his 7 point scale for measuring belief along the theist/atheist spectrum.

In response to comment by [deleted] on Best shot at immortality?
Comment author: Zetetic 23 March 2012 09:38:31PM *  3 points [-]

That makes sense. It still seems to be more of a rhetorical tool to illustrate that there is a spectrum of subjective belief. People tend to lump important distinctions like these together: "all atheists think they know for certain there isn't a god" or "all theists are foaming at the mouth and have absolute conviction", so for a popular book it's probably a good idea to come up with a scale like this, to encourage people to refine their categorization process. I kind of doubt that he meant it to be used as a tool for inferring Bayesian confidence (in particular, I doubt 6.9 out of 7 is meant to be fungible with P(god exists) = .01428).
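For what it's worth, that .01428 figure is just what you get from the most naive linear reading of the scale, treating the score as a fraction of the way to certain atheism. A quick sketch of that literal interpretation (the function name and the linearity assumption are mine, and this is exactly the reading I'm doubting, not an endorsed conversion):

```python
def naive_probability(score: float) -> float:
    """Naive linear reading of a 1-7 belief scale:
    treat the score as a fraction of the way to certain atheism,
    so P(god exists) = 1 - score/7."""
    return 1 - score / 7

# A "6.9 out of 7" then comes out as roughly 0.0143:
print(round(naive_probability(6.9), 5))  # 0.01429
```

The point is just that nothing in the book suggests the scale is meant to be linear in probability like this, rather than an ordinal gesture at a spectrum.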
