Comment author: ChristianKl 08 October 2016 03:56:38PM *  0 points [-]

Though I stand by my claim that he started it, unless you can point to someone else writing down a workable scientific method beforehand

Hakob Barseghyan teaches in his History and Philosophy of Science course that Descartes started it. The hypothetico-deductive method (what's commonly called the scientific method) is a result of the philosophical commitments of Descartes' thought.

Comment author: hairyfigment 09 October 2016 12:12:59AM 0 points [-]

The video is somewhat odd in that he claims Descartes had no problem with experiments, but I recall the philosopher proposing rules which contradicted experiments and hand-waving this by appealing to the impossibility of observing bodies in isolation.

In any case, Hakob does make clear that Descartes used a more Aristotelian method as a rhetorical device to persuade Aristotelians. (In effect, he proved the method of intuitive truth unreliable by producing a contradiction.) I don't believe his work includes any workable method you could use to do science, while Newton's rules for natural philosophy seem like an OK approximation.

Comment author: wafflepudding 02 October 2016 09:04:04AM 0 points [-]

Gotcha. So, assuming that the actual Isaac Newton didn't rise to prominence*, are you thinking that human life would usually end before his equivalent came around and the ball got rolling? Most of our existential risks are manmade AFAICT. Or do you think that we'd tend to die in between him and the moment someone in a position to build the LHC had the idea to build it? Granted, him being "in a position to build the LHC" is conditional on things like a supportive surrounding population, an accepting government, etcetera; but these things are ephemeral on the scale of centuries.

To summarize, yes, some chance factor would definitely prevent us from building the LHC at the exact time we did, but with a lot of time to spare, some other chance factor would prime us to build it somewhen else. Building the LHC just seems to me like the kind of thing we do. (And if we die from some other existential risk before Hadron Colliding (Largely), that's outside the bounds of what I was originally responding to, because no one who died would find himself in a universe at all.)

*Not that I'm condoning this idea that Newton started science.

Comment author: hairyfigment 08 October 2016 06:10:59AM 0 points [-]

but these things are ephemeral on the scale of centuries.

That's what I just said. You seem to have an alarming confidence in our ability to bounce back from ephemeral shifts. If there were actually some selection pressure against a completed LHC, then it would take a lot less than a repetition of this to keep us shifted away from building one.

Comment author: ChristianKl 02 October 2016 07:17:02PM *  1 point [-]

Like I just said, modern science started with an extreme outlier.

There's a lot of history of science and it generally doesn't find that it all hinges on one event like Newton.

Comment author: hairyfigment 08 October 2016 05:58:56AM 1 point [-]

We're not talking about all of science. (Though I stand by my claim that he started it, unless you can point to someone else writing down a workable scientific method beforehand.) We're talking about whether or not anthropic reasoning tells us to expect to see people building the LHC, at a cost of $1 billion per year.

Thatcher apparently rejected the idea as presented, and rightly too if the Internet accurately reported the pitch they made to her. (In this popular account, the Higgs mechanism doesn't "explain mass," it replaces one arbitrary number with another! I still don't know the actual reasons for believing in it!) So we don't need to imagine humanity dying out, and we don't need to assume that civilization collapses after using up irreplaceable fossil fuels. (Though that one seems somewhat plausible.) I don't think we even need to assume religious tyranny crushes respect for science. Slightly less radical changes to the culture of a small fraction of the world seem sufficient to prevent the LHC expenditure for the foreseeable future. Add in uncertainty about various risks that fall short of total annihilation, and this certainty starts to look ridiculous.

Now as I said, one could make a different anthropic argument based on population in various 'worlds'. But as I also said, I don't think we know enough to get a high probability from that either.

Comment author: [deleted] 07 October 2016 11:23:24PM 0 points [-]

It was poor wording on my part when I wrote "the contexts under which the adjustment was made". The spirit of my point is much better captured by the word "applied" (vs. made). That is, it looks like a balanced reading of stereotype literature shows that people are quite good in their judgments of when to apply a stereotype. My point is therefore a bit more extreme than it might have appeared.

I would think that many sociologists would say that many people who look down on Blacks are racist because they don't interact much with Blacks.

I agree with this and would add that such perceptions of superiority could be amplified by other members of the community reinforcing those judgments.

If the adjustment was made at a time when the person was at an all-White school, the interesting question isn't whether the adjustment performs well within the context of the all-White school but whether it also performs well for decisions made later outside of that homogeneous environment.

To get a little deeper into this topic, I should mention that our stereotypes are conditional and, therefore, much of the performance of a stereotype depends on applying it in the proper contexts. Studies looking at when people apply stereotypes tend to show that they are used as a last resort, under conditions in which almost no other information about the target is available. We're surprisingly good at knowing when a stereotype is applicable and seem to have little trouble spontaneously eschewing them when other, more diagnostic information is available.

My off-the-cuff hypothesis about students from an all-white school would be that they would show racial preferences when, say, only shown a picture of a black person. However, ask these students to provide judgments after a 5-minute conversation with a black person or after reviewing a resume (i.e., after giving them loads and loads of information) and race effects will become nearly or entirely undetectable. I don't know of any studies looking at this exactly and urge you to take my hypothesis with a grain of salt, but my larger point is this: You might be surprised.

Comment author: hairyfigment 08 October 2016 12:05:08AM 0 points [-]

So, I'm pretty sure we know that humans have a bias against anyone sufficiently different, and that this evolved before humanity as such. We certainly know that humans will try to rationalize their biases. We also have a great deal of evidence for past failures of scientific racism, which has set my prior for the next such theory very low.

Comment author: skeptical_lurker 05 October 2016 12:54:21PM 0 points [-]

It just has no reason to obey "do what humans mean" unless we program it to do what humans mean.

I'm not disputing that this is also a problem, indeed perhaps a harder problem than figuring out what humans mean. In fact there are many failure modes; I was just wondering why people seem to focus specifically on the fickle genie failure mode to the exclusion of others.

Comment author: hairyfigment 07 October 2016 11:48:44PM 0 points [-]

You're assuming that "what humans mean" is well-defined. I've seen people criticize the example of an AI putting humans on a dopamine drip, on the grounds that "making people happy" clearly doesn't mean that. But if your boss tells you to 'make everyone happy,' you will probably get paid to make everyone stop complaining. Parents in the real world used to give their babies opium and cocaine; advertisers today have probably convinced themselves that the foods and drugs they push genuinely make people happy. There is no existing mind that is provably Friendly.

So, this criticism is implying that simply understanding human speech will (at a minimum) let the AI understand moral philosophy, which is not trivial.

Comment author: Jiro 09 January 2015 05:34:45PM *  1 point [-]

to take them literally is certainly absurd.

You have more certainty than I do.

It could have been meant literally at some point, and the claim "it is there only as a metaphor" could have been inserted afterwards. If it traces back to a pre-Christian creation myth that got to be part of the Bible as an accident of history, it probably was meant literally at some point, and not just in a "this weird sect takes it literally" way, but in how it was generally understood.

Furthermore, there are other passages in the Bible that are not taken literally now, but were taken literally recently enough for that to have happened within recorded history. People only began to say they shouldn't be taken literally when taking them literally became embarrassing.

Comment author: hairyfigment 07 October 2016 10:46:35PM 0 points [-]

Reply to an old comment about literalism:

Yes, but every version of the Torah we have contains parts from different, incompatible versions of the story. The Redactor who put them together had a clear preference (I think) for the Priestly text, but was willing to include stories that contradicted it (at least as a political compromise).

Comment author: wafflepudding 02 October 2016 01:04:03AM 0 points [-]

Are you responding to "Unless human psychology is expected to be that different from world to world?"? Because that's not my position, I'd think that most things recognizable as human will be similar enough to us that they'd build an LHC eventually. I guess I'm not exactly sure what you're getting at.

Comment author: hairyfigment 02 October 2016 01:33:56AM 0 points [-]

I am strongly disagreeing with you. The cultures that existed on Earth for tens of millenia or more were recognizably human; one of them built an LHC "eventually", but any number of chance factors could have prevented this. Like I just said, modern science started with an extreme outlier.

Comment author: Lumifer 30 September 2016 08:43:05PM 0 points [-]

That's a loan, not a bet.

Comment author: hairyfigment 30 September 2016 10:34:43PM 0 points [-]

That's the only form of bet I'll accept if I'm betting that humanity won't exist in 10 years.

"That's the joke" image goes here.

Comment author: ImmortalRationalist 29 September 2016 08:46:57AM 0 points [-]

How is it that Solomonoff Induction, and by extension Occam's Razor, is justified in the first place? Why is it that hypotheses with higher Kolmogorov complexity are less likely to be true than those with lower Kolmogorov complexity? If it is justified by the fact that it has "worked" in the past, does that not require Solomonoff induction to justify that it has worked (in the sense that you need to verify that your memories are true), and thus require circular reasoning?

Comment author: hairyfigment 30 September 2016 08:30:48PM 0 points [-]

See "You only need faith in two things" and the comment there on the binomial monkey prior (a theory which says that the 'past' does not predict the 'future').

You could argue that there exists a more fundamental assumption, hidden in the supposed rules of probability, about the validity of the evidence you're updating on. Here I can only reply that we're trying to explain the data regardless of whether or not it "is true," and point to the fact that you're clearly willing to act like this endeavor has value.
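To make the Occam's Razor claim concrete, the length-weighted prior can be sketched in a few lines. This is a toy, not the real (uncomputable) Solomonoff prior: the two hypotheses and their description lengths in bits are invented purely for illustration.

```python
# Toy Solomonoff-style prior: each hypothesis gets weight 2^-K, where K is
# its description length in bits, so simpler hypotheses start with more
# probability mass before any evidence arrives.

# Hypothetical hypotheses with made-up description lengths (bits).
hypotheses = {
    "sun rises every day": 10,
    "sun rises every day except tomorrow": 25,
}

# Unnormalized prior weight 2^-K for each hypothesis.
prior = {h: 2.0 ** -k for h, k in hypotheses.items()}

# Normalize so the weights sum to 1.
total = sum(prior.values())
posterior = {h: p / total for h, p in prior.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.6f}")
```

Here the 15-bit gap makes the simpler hypothesis 2^15 (about 33,000) times more probable a priori, which is the sense in which higher Kolmogorov complexity costs a hypothesis probability before any data is seen.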

Comment author: ike 30 September 2016 02:50:30PM *  0 points [-]

I will offer you a bet at any odds you want that humanity will still be around in 10 years.

See http://lesswrong.com/lw/ie/the_apocalypse_bet/

Comment author: hairyfigment 30 September 2016 08:00:39PM 0 points [-]

OK, give me US $1000 now and I promise to pay you back $1000.01 in ten years.
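For what it's worth, the arithmetic behind this counter-offer is easy to check. This sketch only converts the dollar figures above into the extinction probability at which such a bet would break even; the formula is the standard odds-to-probability conversion.

```python
# The counter-offer: keep the $1000 if humanity ends (nobody collects),
# repay $1000.01 if it survives (net cost: one cent).
gain_if_extinct = 1000.00   # dollars kept on "humanity gone"
loss_if_survive = 0.01      # extra cent repaid on "humanity survives"

# Break-even extinction probability: the bet is fair when
#   p * gain_if_extinct == (1 - p) * loss_if_survive.
p = loss_if_survive / (gain_if_extinct + loss_if_survive)
print(f"break-even P(extinction): {p:.8f}")  # roughly 1e-5
```

At any plausible extinction probability the arrangement is just a loan at a vanishing interest rate, which is the joke: a bet on doom can't pay out to the winner.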
