There's a discussion post that mentions the fundraiser here, along with other news: http://lesswrong.com/r/discussion/lw/o0d/miri_ama_plus_updates/
I see. It seemed to me that it was about the experimental method, which did not fit a mathematical statement. I accept the possibility of being mistaken. I have been mistaken many times, I am not sure about some proofs, and I know some persuasive fake proofs... Despite this, I am not very convinced that I should do such things with my probability estimates. After all, it is just an estimate. Moreover, it is a bit self-referential when the estimate uses a more complicated formula than the statement itself. If I say that I am 1-sure that 1 is not 1/2, it is safe, isn't it? :-D Well, it does not matter :-) I think that I got the point; "I know that I know nothing" is a well-known quote.
I see morality as fundamentally a way of dealing with conflicts between values/goals, so I can't answer questions posed in terms of "our values", because I don't know whether that means a set of identical values, a set of non-identical but non-conflicting values, or a set of conflicting values. One implication of that view is that some values/goals are automatically morally irrelevant, since they can be satisfied without potential conflict. Another implication is that my view approximates to "morality is society's rules", but without the dismissive implication: if a society has gone through a process of formulating rules that are effective at reducing conflict, then there is a non-vacuous sense in which that society's morality is its rules. Also, AI and alien morality are perfectly feasible, and possibly even necessary.
If anyone is still interested, I've since spun this into a startup called Guesstimate.
https://github.com/getguesstimate/guesstimate-app
http://effective-altruism.com/ea/rv/guesstimate_an_app_for_making_decisions_with/
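The core idea behind a tool like this can be illustrated with a minimal Monte Carlo sketch (this is my own toy example in Python, not code from guesstimate-app, and the distributions and numbers are illustrative assumptions):

```python
import random

def sample_estimate(n=10000):
    """Propagate uncertainty through a toy calculation by Monte Carlo.

    Hypothetical example: yearly cost = unit price * quantity, where
    both inputs are uncertain. Normal distributions are an illustrative
    assumption here, not necessarily what the app uses.
    """
    totals = []
    for _ in range(n):
        price = random.gauss(10, 2)       # uncertain unit price
        quantity = random.gauss(100, 15)  # uncertain quantity
        totals.append(price * quantity)
    totals.sort()
    return {
        "p5": totals[int(0.05 * n)],
        "median": totals[n // 2],
        "p95": totals[int(0.95 * n)],
    }
```

Instead of a single point estimate, you get a distribution over outcomes and can read off a 90% interval.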
As a general query to other readers: Is it bad form to just ignore comments like this? I'm apt to think it unwise to try to talk about this topic here if it is just going to invoke Godwin's Law.
In general you can ignore comments when you don't think a productive discussion will follow.
LW by its nature has people who argue a wide array of positions, and in a case like this you will get some criticism like this. Don't let that turn you off LW or take it as a suggestion that your views are unwelcome here.
We're not talking about all of science. (Though I stand by my claim that he started it, unless you can point to someone else writing down a workable scientific method beforehand.) We're talking about whether or not anthropic reasoning tells us to expect to see people building the LHC, at a cost of $1 billion per year.
Thatcher apparently rejected the idea as presented, and rightly too if the Internet accurately reported the pitch they made to her. (In this popular account, the Higgs mechanism doesn't "explain mass," it replaces one arbitrary number with another! I still don't know the actual reasons for believing in it!) So we don't need to imagine humanity dying out, and we don't need to assume that civilization collapses after using up irreplaceable fossil fuels. (Though that one seems somewhat plausible.) I don't think we even need to assume religious tyranny crushes respect for science. Slightly less radical changes to the culture of a small fraction of the world seem sufficient to prevent the LHC expenditure for the foreseeable future. Add in uncertainty about various risks that fall short of total annihilation, and this certainty starts to look ridiculous.
Now as I said, one could make a different anthropic argument based on population in various 'worlds'. But as I also said, I don't think we know enough to get a high probability from that either.
In these spheres people generally understand that heuristics optimize for something. Frequently people think they optimize for some ancestral environment that's quite unlike the world we are living in at the moment. I think that's a question where a well-written post would be very useful.
This is probably not a novel analogy, but the surprising thing to me is that social psychology tends to frame any "reticle adjustment" as a bias against which we must fight without testing its performance in the contexts under which the adjustment was made.
I would think that many sociologists would say that many people who look down on Blacks are racist because they don't interact much with Blacks. If the adjustment was made during a time when the person was at an all-White school, the interesting question isn't whether the adjustment performs well within the context of the all-White school but whether it also performs well in decisions made later outside of that homogeneous environment.
That sounds like "thesis is true" or "thesis is not true" are reasonable positions. Bayesian beliefs have probabilities attached to them.
Sometimes, even people who understand Bayesian reasoning use idiomatic phrases like "believe is true" as a convenient shorthand for "assign a high probability to"! I can see how that might be confusing!
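As a toy illustration of what "assign a high probability to" means in practice, here is a minimal Bayes' theorem sketch (the numbers are made up for illustration):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return posterior P(thesis | evidence) via Bayes' theorem.

    A Bayesian belief is a probability that gets updated by evidence,
    not a binary "true"/"not true" verdict.
    """
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Start at 50% and observe evidence 4x more likely if the thesis is true:
posterior = bayes_update(0.5, 0.8, 0.2)  # -> 0.8
```

"Believe the thesis is true" is then shorthand for a posterior near 1, not a separate kind of claim.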