Comment author: handoflixue 06 November 2012 12:42:50AM 2 points [-]

"First we have to acknowledge that we are talking about differences between people with comparable levels of expertise"

The claim that the vast majority of voters have done a sizeable amount of research, rather than simply voting "along party lines" or "like mom always did" or "because dad was overcontrolling and I'm not going to support HIS party", strikes me as one that would require quite a lot of evidence.

One can reasonably conclude that in politics, as with math, the "average person" is ignorant and their opinion is not based on any sort of expertise.

Comment author: ricketson 14 February 2013 06:08:55AM 0 points [-]

"One can reasonably conclude that in politics, as with math, the "average person" is ignorant and their opinion is not based on any sort of expertise."

Even if you limit the population to those who are well informed, that population is still rather evenly split, and so his points still hold.

Comment author: V_V 09 February 2013 04:14:29PM *  5 points [-]

Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

The reasoning is that if you discover something which could have potentially harmful applications, it's better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.

If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.

As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.

I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the "secret dangerous knowledge" excuse to handwave its conspicuous lack of published research. But seriously, that's not the right way of doing it:

If you are a legitimate research organization ethically concerned by AI safety, the best way to achieve your goals is to publish and disseminate your research as much as possible, in particular to people who may be building AIs.
Because, let's face it, if AGI is technically feasible, you will not be the first ones to build one, and even if by some absurdly improbable coincidence you were, the chances that you get it right while working in secrecy are negligible.

Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the "flour on the invisible dragon" test.

Comment author: ricketson 09 February 2013 08:38:45PM *  2 points [-]

Good points, but it was inappropriate to question the author's motives and the attacks on the SI were off-topic.

Comment author: fela 09 February 2013 06:47:15PM 15 points [-]

Jared Diamond, in Guns, Germs, and Steel, argues that when the time is ripe, scientific discoveries are made quite regardless of who makes them, give or take a few decades. Most discoveries are incremental, and many are made by multiple people simultaneously. So wouldn't a discovery that isn't published just be made elsewhere in a few years' time, possibly by someone without many ethical concerns?

Comment author: ricketson 09 February 2013 08:26:58PM 2 points [-]

Especially in the modern environment, with many thousands of scientists, there won't be much delay caused by a few scientists withholding their results. The greatest risk is that the discovery is made by someone who will keep it secret in order to increase their own power.

There is also a risk that keeping secrets will breed mistrust, even if the secret is kept without evil intent.

Comment author: Eugine_Nier 05 November 2012 03:33:25AM 4 points [-]

Roughly half of the population is misinformed about which alternative is objectively better. In that case, how do I justify a belief that I have a greater than 50% chance of being right, when everyone else has access to the same information?

Well, you can replace "which alternative is objectively better" with any other belief on which opinions differ and the same argument applies.

Comment author: ricketson 05 November 2012 05:01:19AM 0 points [-]

"any other belief"

This invites us to look at why beliefs differ. First we have to acknowledge that we are talking about differences between people with comparable levels of expertise, so this isn't the same as the disagreements that exist between experts and novices.

For elections, I think we can say that people disagree in large part because the situation is incredibly complicated. It is hard to know how government policies will affect human welfare, and it is hard to know how elected officials will shape government policy.

The only interesting factor that I can think of is differences in our scope of altruism -- one voter may feel altruistic towards their city, while another focuses on the nation, and a third focuses on all of humanity.

In response to How to Fix Science
Comment author: ricketson 04 March 2012 05:52:08AM 2 points [-]

Thanks for putting this together. There are many interesting links in there.

I am hopeful that Bayesian methods can help to solve some of our problems, and there is constant development of these techniques in biology.

Scientists should pay more attention to their statistical tests, and I often find myself arguing with others when I don't like their tests. The most important thing that people need to remember is what NHST (null hypothesis significance testing) actually does -- it rejects the null hypothesis. Once they think about what the null hypothesis is, and realize that they have done nothing more than reject it, they will make a lot of progress.
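
To make the point concrete, here is a minimal sketch (mine, not from the comment; the effect size and sample size are made up for illustration) of how little "rejecting the null" tells you: with a large enough sample, NHST confidently rejects "the true effect is exactly zero" even when the true effect is negligible.

```python
import math
import random

# Simulate a negligible true effect measured with a very large sample.
random.seed(0)
n = 100_000
true_effect = 0.05                      # tiny, practically irrelevant
data = [random.gauss(true_effect, 1.0) for _ in range(n)]

mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
z = mean / (sd / math.sqrt(n))          # one-sample z statistic

print(f"sample mean = {mean:.4f}, z = {z:.1f}")
# |z| is far beyond 1.96, so the null is rejected at p < 0.05 -- but all
# we have rejected is "the effect is exactly zero", not "the effect is
# too small to matter".
```

The rejection is real; the mistake is reading it as evidence that the effect is large or important.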

In response to How to Fix Science
Comment author: Nicholas_Covington 03 March 2012 10:05:08PM 1 point [-]

This, I think, is just one symptom of a more general problem with scientists: they don't emphasize rigorous logic as much as they should. Science, after all, is not only about (a) observation but about (b) making logical inferences from observation. Scientists need to take (b) far more seriously (not that none do, but many do not). You've heard the old saying "Scientists make poor philosophers." It's true (or at least, true more often than it should be). That has to change. Scientists ought to be amongst the best philosophers in the world, precisely because they ought to be masters of logic.

Comment author: ricketson 04 March 2012 05:45:42AM 1 point [-]

Saying that people should be better is not helpful. Like all people, scientists have limited time and need to choose how to allocate their efforts. Sometimes more observations can solve a problem, and sometimes more careful thinking is necessary. The appropriate allocation depends on the situation and the talents of the researcher in question.

That being said, there may be a dysfunctional bias in how funding is allocated -- creating an "all or none" environment where the best strategy for maintaining a basic research program (paying for one's own salary plus a couple of students) is to be the type of researcher who gets multi-million dollar grants and uses that money to generate gargantuan new datasets, which can then provide the foundation for a sensational publication that everyone notices.

In response to comment by [deleted] on How to Fix Science
Comment author: ChrisHallquist 04 March 2012 01:34:20AM -2 points [-]

21st century philosophers aren't much different.

Comment author: ricketson 04 March 2012 05:41:02AM 0 points [-]

aoeu

In response to comment by [deleted] on How to Fix Science
Comment author: satt 04 March 2012 01:38:35AM *  6 points [-]

Bayesian methods are better in a number of ways, but ignorant people using a better tool won't necessarily get better results. I don't think the net effect of a mass switch to Bayesian methods would be negative, but I do think it'd be very small unless it involved raising the general statistical competence of scientists.

Even when Bayesian methods get so commonplace that they could be used just by pushing a button in SPSS, researchers will still have many tricks at their disposal to skew their conclusions. Not bothering to publish contrary data, only publishing subgroup analyses that show a desired result, ruling out inconvenient data points as "outliers", wilful misinterpretation of past work, failing to correct for doing multiple statistical tests (and this can be an issue with Bayesian t-tests, like those in the Wagenmakers et al. reanalysis lukeprog linked above), and so on.
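
The multiple-testing trick in particular is easy to demonstrate with a quick simulation (a hypothetical sketch, not from the comment): under the null hypothesis, each p-value is uniform on [0, 1], so a researcher who runs 20 independent tests on pure noise and reports whichever one "works" gets a significant-looking result most of the time.

```python
import random

# Probability of at least one p < 0.05 among 20 tests of true nulls.
random.seed(1)
alpha, tests, trials = 0.05, 20, 10_000

false_positive_runs = sum(
    any(random.random() < alpha for _ in range(tests))
    for _ in range(trials)
)
rate = false_positive_runs / trials
print(f"chance of >=1 'significant' result in {tests} null tests: {rate:.2f}")
# Analytically: 1 - 0.95**20 is about 0.64, even though every effect is zero.
```

Switching the individual tests from frequentist to Bayesian doesn't remove this selection effect; only correcting for (or disclosing) the number of tests does.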

In response to comment by satt on How to Fix Science
Comment author: ricketson 04 March 2012 05:25:03AM 9 points [-]

As a biologist, I can say that most statistical errors are just that: errors. They are not tricks. If researchers understand the statistics that they are using, a lot of these problems will go away.

A person has to learn a hell of a lot before they can do molecular biology research, and statistics happens to be fairly low on the priority list for most molecular biologists. In many situations we are able to get around the statistical complexities by generating data with very little noise.

In response to The Allais Paradox
Comment author: Dr._Science 19 January 2008 06:52:41AM 1 point [-]

It's rational to take the certain outcome if gambling causes psychological stress. Besides being intrinsically unpleasant, stress increases your risk of peptic ulcers and stroke, which could easily cancel out the expected gain.

Comment author: ricketson 15 January 2012 07:37:24PM 1 point [-]

But such psychological stress arises from your perception of reality. If it is caused by an erroneous perception of reality, then the rational thing to do is correct your perception, not take the error for granted. If you are certain that you made the right decision, then you shouldn't feel stressed when you "lose".

In response to The Allais Paradox
Comment author: ricketson 15 January 2012 07:28:45PM *  0 points [-]

I initially chose 1A and 2B, but after reading the analysis of those decisions, I agree that they are inconsistent in a way that implies that one choice was irrational (in the context of this silly little game). So I did some introspection to figure out where I went wrong. Here's what I found:

1) I may have misjudged how small 1/34 is, and this only became apparent when the question was phrased as it is in example 2.

2) I think I assumed implicit costs in these gambles. The first cost is a delay in learning the outcome of these gambles; the second is the implicit need to work to earn this money. I think that these assumptions are reasonable because there is essentially no realistic condition in which I would instantly see the results of a decision that might earn me $27,000; there would probably be a delay of several months (if working) or years (if investing) between making the decision and learning whether I got the money or not. This prolonged uncertainty has a negative utility, since I am unable to make firm plans for the money during that interval. This negative utility would apply to all options except 1A. Furthermore, earning $24,000 would realistically require several months of work on my part. However, a project that had a 1/3 chance of paying out $24,000 might only take a month. The implicit difference in opportunity cost between scenario 1 and scenario 2 has implications for the marginal utility of money in each scenario (making me more risk-averse in scenario 1, which implicitly has a higher opportunity cost).

These implicit costs are not specified in this game, so it is technically "irrational" to incorporate them into my decision-making. However, in any realistic scenario, such costs will exist (regardless of what the salesman says), so it is good that I/we intuitively include them in my/our decision-making.
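
For reference, the expected values of the four gambles can be worked out in a few lines (a sketch; the amounts and probabilities below are the ones from the original Allais post). Scenario 2 is scenario 1 with every probability scaled by 0.34, which is exactly why preferring 1A but 2B violates the independence axiom.

```python
# Expected values of the four Allais gambles.
ev_1a = 1.0 * 24_000                 # 1A: $24,000 with certainty
ev_1b = (33 / 34) * 27_000           # 1B: 33/34 chance of $27,000
ev_2a = 0.34 * 24_000                # 2A: 34% chance of $24,000
ev_2b = 0.33 * 27_000                # 2B: 33% chance of $27,000 (= 0.34 * 33/34)

print(f"EV(1A)=${ev_1a:,.0f}  EV(1B)=${ev_1b:,.2f}")
print(f"EV(2A)=${ev_2a:,.0f}  EV(2B)=${ev_2b:,.0f}")
# Scaling both gambles by 0.34 preserves the expected-value ordering,
# so a consistent agent who prefers 1A over 1B must prefer 2A over 2B.
```

Note that the scaling argument holds for any utility function over outcomes, not just for raw expected dollars; the implicit costs described above change the outcomes themselves, which is why they fall outside the stated game.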
