Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

In response to comment by on The mystery of Brahms
Comment author: 22 October 2015 07:27:17PM 0 points

I'm not familiar with Schumann. Googling music theory forums indicates he's respected today mainly for his compositions for piano, while his symphonies are held in low regard. I'm now listening to his piano concerto in A minor, finished in 1845. I'm not very familiar with the 1830s or 1840s, but it doesn't sound anything like music from the 1820s or earlier.

I don't think music like this could have been written before Schumann. The piano necessary to play it didn't exist. Some key moments in the development of the piano:

The music forums say Liszt and Ravel's works required the double escapement, and some say Chopin's did, while one argues Chopin's pianos didn't have it. No word on Schumann.

In response to comment by on The mystery of Brahms
Comment author: 28 October 2015 10:39:43PM 1 point

Seconding all of gjm's criticisms, and adding another point.

The sostenuto (middle) pedal was invented in 1844. The sustain (right) pedal has been around roughly as long as the piano itself, since piano technique is pretty much unthinkable without it.

Comment author: 14 May 2015 02:58:08PM 1 point

I don't see why adding +1 to all responses would make any difference to any of the comparisons; it shifts all datapoints equally. (And anyway, `log1p(0) ~> 0`. The point of using log1p is simply to avoid `log(0) ~> -Inf`.)
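For concreteness, a minimal sketch of that point (Python with NumPy; the data are made up, not the survey's):

```python
import numpy as np

# Hypothetical responses, including the zeros that motivate log1p.
donations = np.array([0.0, 1.0, 50.0, 1000.0])

# log1p(x) = log(1 + x), so a zero response maps to 0 rather than -inf.
print(np.log1p(donations))    # first entry is 0.0

with np.errstate(divide="ignore"):
    print(np.log(donations))  # first entry is -inf
```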

Comment author: 15 May 2015 05:23:22PM -1 points

The explanation by owencb is what I was trying to address. To be explicit about when the offset is being added, I'm suggesting replacing your `log1p(x) ≡ log(1 + x)` transformation with `log(c + x)` for `c` = 10 or `c` = 100.

If the choice of log-dollars is just for presentation, it doesn't matter too much. But in a lesswrong-ish context, log-dollars also have connotations of things like the Kelly criterion, where it is taken completely seriously that there's more of a difference between $0 and $1 than between $1 and $3^^^3.

Comment author: 14 May 2015 04:38:11AM 4 points

Given that at least 25% of respondents listed $0 in charity, the offset you add to the charity figures ($1, if I understand `log1p` correctly) could have a large effect on your conclusions. You may want to do some sensitivity checks: raise the offset to, say, $10 or $100, or to some other amount below which a respondent might round their giving down to $0, and see if anything changes.
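A sketch of that sensitivity check (Python with NumPy; the data and offsets here are hypothetical stand-ins, not the actual survey analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical survey: a quarter of respondents report $0 in charity,
# the rest follow a heavy-tailed (lognormal) distribution of donations.
donations = np.concatenate([np.zeros(25),
                            rng.lognormal(mean=5, sigma=2, size=75)])

# Re-run the log transform with several offsets and compare summaries;
# if the conclusions are robust, they should not move much across offsets.
for c in (1.0, 10.0, 100.0):
    logged = np.log(c + donations)
    print(f"offset {c:6.0f}: mean={logged.mean():.2f}  sd={logged.std():.2f}")
```

Any downstream comparison (correlations, regressions) would be repeated per offset the same way; a conclusion that flips between `c = 1` and `c = 100` is being driven by the zeros.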

In response to comment by on May 2015 Media Thread
Comment author: 03 May 2015 06:24:45AM 3 points

"Civilized Life in the Universe". George Basalla.

A study of the history of the idea of intelligent extraterrestrial life, and of how our [the European diaspora's] thoughts about it have never had much to do with extraterrestrials and have instead had everything to do with ourselves. The notion is dissected so that all its parts can be seen.

In the 1500s, the notion that the Earth and the other planets were made of similar stuff led to the supposition that, if so, perhaps the other planets were inhabited too. The hot question was whether Jesus had saved them as well, or whether we needed to send missionaries.

Percival Lowell, at the turn of the 20th century, thought he saw canals all over Mars, and talked about how this indicated the Martians had reached a socialist utopia.

Carl Sagan, perhaps steeped in the Barsoom books in his youth, held onto the notion of macroscopic living things on Mars for quite some time. He also expounded on the idea that old civilizations might teach us how to avoid nuclear war.

These days, we talk about technological progress, the questionable assumption that it continues without bound in every case short of extinction, and "where are they?"

It's never been about them. It's always about us - what we care about at that particular moment.

In response to comment by on May 2015 Media Thread
Comment author: 06 May 2015 04:33:51PM 1 point

Curtis Yarvin, who looked to Mars for tips and tricks on writing a "tiny, diamond-perfect kernel" for a programming environment.

Comment author: 09 April 2015 08:03:03AM 2 points

Troubleshooting is a great example where a little probability goes a long way, thanks.

Amusingly, there was in fact an error in the GRE subject test (in computer science) that I took long ago: all five of the multiple-choice answers were incorrect. I agree that, conditional on a disagreement between test and test-taker, the test is usually right.

Comment author: 10 April 2015 01:42:16PM 1 point

The Rasch model does not hate truth, nor does it love truth, but the truth is made out of items which it can use for something else.

Comment author: 10 March 2015 09:22:41PM 2 points

a standard diagnostic Charm showed Miss Granger as a healthy unicorn

Charms to detect active magic have each time detected her as being in the process of transforming into another shape

He performed certain spells ... declared that Hermione's soul was in healthy condition but at least a mile away from her body

The first two diagnostics are correct. If the third one is correct too, then Hermione is a perfect philosophical zombie now.

Comment author: 11 March 2015 12:06:08AM 1 point

No, it's much more akin to Dennett's "Where Am I?" or to becoming meguca.

Comment author: 20 February 2015 10:29:29PM 15 points

I wonder if the final room is not visible on the Marauder's Map because it's warded or because the room you enter is determined by whether/how the potion is flawed.

As a veteran Potions professor, Snape would be able to predict very accurately how a first-year would screw up such a fiddly task. Screw it up in the right way, and you see an innocuous final room with a little "Well done, don't spoil it!" from the Headmaster. Execute it perfectly and trigger... what, exactly?

Comment author: 24 February 2015 08:46:01PM 2 points

This seems like a good occasion to quote the twist reveal in Orson Scott Card's "Dogwalker":

We stood there in his empty place, his shabby empty hovel that was ten times better than anywhere we ever lived, and Doggy says to me, real quiet, he says, "What was it? What did I do wrong? I thought I was like Hunt, I thought I never made a single mistake in this job. In this one job."

And that was it, right then I knew. Not a week before, not when it would do any good. Right then I finally knew it all, knew what Hunt had done. Jesse Hunt never made mistakes. But he was also so paranoid that he haired his bureau to see if the babysitter stole from him. So even though he would never accidentally enter the wrong P-word, he was just the kind who would do it on purpose. "He doublefingered every time," I says to Dog. "He's so damn careful he does his password wrong the first time every time, and then comes in on his second finger."

"So one time he comes in on the first try, so what?" He says this because he doesn't know computers like I do, being half-glass myself.

"The system knew the pattern, that's what. Jesse H. is so precise he never changed a bit, so when we came in on the first try, that set off alarms. It's my fault, Dog. I knew how crazy paranoidical he is, I knew that something was wrong, but not till this minute I didn't know what it was. I should have known it when I got his password, I should have known. I'm sorry, you never should have gotten me into this, I'm sorry, you should have listened to me when I told you something was wrong. I should have known, I'm sorry."

Comment author: 14 January 2015 11:42:22PM 0 points

This seems cool, but I have a nagging suspicion that this reduces, with greater generality, to a handful of sentences if you use the conditional expectation of the utility function and the Radon-Nikodym theorem?

Comment author: 02 December 2014 05:35:34AM 1 point

Reminder that Weird Sun Twitter exists.

(Edited link because Unit Of Selection is apparently deactivated)

Comment author: 04 December 2014 03:47:40PM 1 point

Noun phrases that are insufficiently abstract.

Comment author: 18 November 2014 09:40:12AM 2 points

Is this a case of multiple discovery?[1] And might something similar happen with AGI? Here are four projects that have concurrently developed very similar-looking models:

(1) University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models

(2) Baidu/UCLA: Explain Images with Multimodal Recurrent Neural Networks

(3) Google: A Neural Image Caption Generator

[1] The concept of multiple discovery is the hypothesis that most scientific discoveries and inventions are made independently and more or less simultaneously by multiple scientists and inventors.

Comment author: 18 November 2014 02:31:56PM 6 points

How meaningful is the "independent" criterion given the heavy overlaps in works cited and what I imagine must be a fairly recent academic MRCA among all the researchers involved?
