Comment author: private_messaging 02 October 2013 07:41:55PM *  2 points [-]

I do think that a signalling model of education

Once again, which education? Clearly, a training course for, say, a truck driver is not signalling, but exactly what it says on the tin: a training course for driving trucks. Likewise for a language course, and the same goes for mathematics, the hard sciences, and engineering disciplines. These might be likened to the necessity of training for a Formula 1 driver, which holds irrespective of the level of innate talent (within the human range of ability).

Now, if this were within the realm of actual science, something like this "signalling model of education" would be immediately invalidated by the truck-driving example. No excuses. One can mend it into a "signalling model of some components of education in the soft sciences". But there is a big problem even for that model: a PhD in those fields in particular is a poorer indicator of ability, innate and learned, than one in the technical fields (lower average IQs, etc.), and so signals very little.

edit: by the way, innate 'talent' is in no way exclusive of the importance of learning; some recent research indicates that highly intelligent individuals retain neuroplasticity for a longer time, which lets them acquire more skills. That would also explain why child prodigies fairly often become very mediocre adults, especially when a lack of learning is involved.

Comment author: yli 16 November 2013 07:50:35PM *  0 points [-]

Clearly, a training course for, say, a truck driver, is not signalling, but exactly what it says on the can

If there were a glut of trained truck drivers on the market and someone needed to recruit new crane operators, they could choose to recruit only truck drivers, because having passed the truck-driving course would signal that you can learn to operate heavy machinery reliably, even if nothing you learned in the course was of any value in operating cranes.

Comment author: Jayson_Virissimo 06 November 2013 06:41:58AM *  5 points [-]

I'm aware of the existence of the Summa.

And yet, you claim that "philosophers could use whatever half-baked premises they wanted in constructing arguments for the existence of God, and have little fear of being contradicted", even though the Summa contains refutations of weak arguments for the existence of God. Also, the Church specifically denounced the Doctrine of the Double Truth, which by all accounts is a premise that would, in practice, act to protect religious claims from falsification. "Philosophers" would have risked Inquisitional investigation had they not dropped their half-baked premises in constructing arguments for the existence of God.

I admit I was mostly thinking of the 17th/18th centuries when I wrote the above paragraph... but it was dangerous to be a heretic in the 13th century too.

I don't think he is claiming it wasn't dangerous to be a heretic in the 13th century. I'm pretty sure he is calling into question the claim that "it was dangerous to question that the existence of God could be proven through reason", a belief that was very common throughout most of the Middle Ages and questioned with very little danger, as far as I can tell. I'm surprised that you are unaware of this given that you "have a master's degree in philosophy from Notre Dame".

EDIT: Carinthium beat me to the punch.

Comment author: yli 07 November 2013 03:53:22AM *  1 point [-]

I'm pretty sure he is calling into question the claim that "it was dangerous to question that the existence of God could be proven through reason", which was a very common belief throughout most of the middle ages and was held with very little danger as far as I can tell

...

This doctrine was supposed (though we don't know if correctly) to hold that although reason dictated truths contrary to faith, people were obliged to believe on faith anyway. It was suppressed.

Comment author: Will_Newsome 22 March 2013 10:13:15PM *  19 points [-]

(Commenters: talking about the 'supernatural' in terms of metaphysics is metaphysically interesting but phenomenologically speaking it just clouds the issue unnecessarily. The way most people actually use the concept is just 'weird things happening that would require human or transhuman agency, in situations where there's no good reason to suspect human agency'. Talking about reductionism &c. is missing the point---it doesn't matter whether the agency comes from an engineered superintelligence or an "ontologically fundamental" god, what matters is there's non-human agency around. Note that all reports of supernatural phenomena can be explained "naturally" by superintelligences, simulators, highly advanced aliens, &c., all of which seem not-unlikely in a big universe. The improbability stems from the necessity of their having seemingly bizarre motivations; the mechanisms themselves, however, aren't fantastically improbable.)

Comment author: yli 06 November 2013 06:56:33AM *  1 point [-]

I agreed with this at first, but actually, no. Belief in the supernatural doesn't require belief in gods, spirits or any non-human agents. You could just believe that humans have some supernatural abilities, like reading each other's minds. When trying to explain these abilities, only reductionists will conclude that there's some third-party agent, like a simulator, setting things up. Non-reductionists will just accept that being able to read minds is part of how this ontologically fundamental mind stuff works.

Comment author: Risto_Saarelma 21 October 2013 11:27:50AM 3 points [-]

What do you think will actually happen, if/when we try to simulate stuff?

I'll tell you what I think won't happen: real feelings, real thoughts, real experiences.

It'll still be pretty cool when the philosophical zombie uploads who act exactly like qualia-carrying humans go ahead and build the galactic supercivilization of trillions of philosophical zombie uploads acting exactly like people and produce massive amounts of science, technology and culture. Most likely there will even be some biological humans around, so you won't even have to worry about nobody ever getting to experience any of it.

Comment author: yli 21 October 2013 12:05:26PM *  5 points [-]

Actually, because the zombie uploads are capable of all the same reasoning as M_P, they will figure out that they're not conscious, and replace themselves with biological humans.

On the other hand, maybe they'll discover that biological humans aren't conscious either, they just say they are for reasons that are causally isomorphic to the reasons for which the uploads initially thought they were conscious, and then they'll set out to find a substrate that really allows for consciousness.

Comment author: Risto_Saarelma 23 September 2013 10:44:02AM 4 points [-]

Is anyone doing some sort of polyphasic schedule while going to 9-to-5 day job where napping on the premises isn't practical?

This basically means that you have one block of nine hours or so which you need to be awake for, but outside that anything goes. At least the dual-core schedule looks like it could accommodate this, but the article doesn't really describe how the schedules are devised or how they can be customized.

Comment author: yli 27 September 2013 05:42:57PM *  2 points [-]

Not polyphasic but

Comment author: Douglas_Knight 15 September 2013 03:38:52AM 3 points [-]

Here's an example.

Comment author: yli 15 September 2013 04:01:21AM *  2 points [-]

Thanks for the link. I don't really see creepy cult isolation in that discussion, and I think most people wouldn't, but that's just my intuitive judgment.

Comment author: ChristianKl 14 September 2013 08:26:46PM 2 points [-]

The cults try to get members to sever ties with the family and friends, for example - and this is a filter, most people get creeped out and a few go through with it.

I'm not sure whether that's true. You have people on LessWrong talking about cutting family ties with nonrational family members and nobody gets creeped out.

I don't think I have ever witnessed people getting creeped out by such discussions in the self-help area, and I think I have frequently heard people encouraging others to cut ties with someone who "holds them back".

Comment author: yli 15 September 2013 01:42:40AM *  5 points [-]

Really? Links? A lot of stuff here is a bit too culty for my tastes, or just embarrassing, but "cutting family ties with nonrational family members"?? I haven't been following LW closely for a while now, so I may have missed it, but that doesn't sound accurate.

Comment author: yli 15 September 2013 01:16:13AM *  1 point [-]

Reading something for 6 hours spread across 6 days will result in more insight than reading it for 12 hours straight. The better your sleep, the stronger this effect.* So: do things in parallel instead of serially when possible, and take care of your sleep.

* These are just guesses based on my personal experience.

Comment author: yli 31 August 2013 07:31:38PM 4 points [-]

When people talk about the command "maximize paperclip production" leading to the AI tiling the universe with paperclips, I interpret it to mean a scenario where first a programmer comes up with a shoddy formalization of paperclip maximization that he thinks is safe but actually isn't, and then writes that formalization into the AI. So at no point does the AI actually have to try to interpret a natural-language command. Genie analogies are definitely confusing and bad to use here, because genies do take commands in English.

Comment author: cousin_it 23 June 2013 08:08:13AM *  2 points [-]

I just found this nice quote on The Last Conformer which is supposed to prove that betting on major events is qualitatively different from betting on coinflips:

I wouldn't even offer bets on this kind of probability because that would just invite better informed people to take my money.

It seems to me that the problem exists for coinflips as well. If I flip a coin and don't show you the result, your beliefs about the coin are probably 50/50. But if I offer you a bet at 50/50 odds that the coin came up heads, you'll probably refuse, because I know which way the coin came up and you don't.

According to the Dutch book argument for rationality, we are supposed to accept either side of any bet offered at the odds corresponding to our beliefs. In my example, that idea breaks down, because getting the offer is evidence that you shouldn't take the bet. But then how do we formulate the Dutch book argument?
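The asymmetry in that example can be sketched in a few lines of Python. This is just an illustrative simulation (the function name and parameters are mine, not from the comment): an offerer who sees the flip first and only proposes the bet when you would lose turns your naive 50/50 expectation into a sure loss.

```python
import random

def average_profit_per_accepted_bet(n_rounds=100_000, stake=1.0, seed=0):
    """You always bet 'heads' at even odds whenever a bet is offered.

    The offerer sees the flip first and only offers the bet when the
    coin came up tails, so every bet you accept is a loser.
    """
    rng = random.Random(seed)
    profit, offers = 0.0, 0
    for _ in range(n_rounds):
        heads = rng.random() < 0.5
        if not heads:            # informed offerer: only offers losing bets
            offers += 1
            profit += stake if heads else -stake
    return profit / offers

# Naive 50/50 odds suggest an expected profit of 0 per bet; because the
# offer itself is evidence about the flip, the realized average is -stake.
print(average_profit_per_accepted_bet())
```

In this extreme version the offer is perfectly informative, so the conditional probability of heads given an offer is 0, not 0.5, which is exactly why odds read off your unconditional beliefs are the wrong ones to accept.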

Comment author: yli 23 June 2013 07:32:09PM 0 points [-]

Omega appears and tells you you've been randomly selected to have the opportunity to take or leave a randomly chosen bet.
