Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: D_Malik 08 April 2014 07:05:35PM *  15 points [-]

Should we listen to music? This seems like a high-value thing to think about.* Some considerations:

  • Music masks distractions. But we can get the same effect through alternatives such as white noise, calming environmental noise, or ambient social noise.

  • Music creates distractions. It causes interruptions. It forces us to switch our attention between tasks. For instance, listening to music while driving increases the risk of accidents.

  • We seem to enjoy listening to music. Anecdotally, when I've gone on "music fasts", music starts to sound much better and I develop cravings for it. This may indicate that music enjoyment is a treadmill system, such that listening to music does not produce lasting improvements in mood. (That is, if enjoyment stems from relative change in quality/quantity of music rather than from absolute quality/quantity, then we likely cannot obtain a lasting benefit.)

  • Frequency of music-listening correlates (r = .18) with conscientiousness. I'd guess the causation's in the wrong direction, though.

  • Listening to random music (e.g. a multi-genre playlist on shuffle) will randomize emotion and mindstate. Entropic influences on sorta-optimized things (e.g. mindstate) are usually harmful. And the music-listening people do nowadays is very unlike conditions in the EEA (the environment of evolutionary adaptedness), which is usually bad.

(These are the product of 30 minutes of googling; I'm asking you, not telling you.)

Here are some ways we could change our music-listening patterns:

  • Music modifies emotion. We could use this to induce specific useful emotions. For instance, for productivity, one could listen to a long epic music mix.

  • Stop listening to music entirely, and switch to various varieties of ambient noise. Moderate ambient noise seems to be best for thinking.

  • Use music only as reinforcement for desired activities. I wrote a plugin to implement this for Anki. Additionally, music benefits exercise, so we might listen to music only at the gym. The treadmill-like nature of music enjoyment (see above) may be helpful here, as it would serve to regulate e.g. exercise frequency - infrequent exercise would create music cravings which would increase exercise frequency, and vice versa.

  • Listen only to educational music. Unfortunately, not much educational music for adults exists. We could get around this by overlaying regular music with text-to-speeched educational material or with audiobooks.

* I've been doing quantitative attention-allocation optimization lately, and "figure out whether to stop listening to music again" has one of the highest expected-utilons-per-time of all the interventions I've considered but not yet implemented.

Comment author: VincentYu 08 April 2014 10:22:07PM 10 points [-]

I went through the literature on background music in September 2012; here is a dump of 38 paper references. Abstracts can be found by searching here and I can provide full texts on request.

Six papers that I starred in my reference manager (with links to full texts):

One-word summary of the academic literature on the effects of listening to background music (as of September 2012): unclear

Comment author: John_Maxwell_IV 31 March 2014 02:16:11AM 0 points [-]

"Can cognitive restructuring reduce the disruption associated with perfectionistic concerns?" http://www.sciencedirect.com/science/article/pii/S0005789401800514

Comment author: VincentYu 31 March 2014 03:44:01AM 0 points [-]
Comment author: pianoforte611 30 March 2014 06:57:19PM *  3 points [-]

Am I confused about frequentism?

I'm currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:

P(data at least as extreme as your data | Null hypothesis)

This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis (which is the complement of the hypothesis that you are trying to test).

Put another way:

P(data | hypothesis) = 1 - p-value

and if 1 - p-value is high enough then you accept the hypothesis. (My use of "data" is handwaving and not quite correct but it doesn't matter.)

But it seems more useful to me to calculate P(hypothesis | data). And that's not quite the same thing.

So what I'm wondering is whether under frequentism P(hypothesis | data) is actually meaningless. The hypothesis is either true or false, and depending on whether it's true or not, the data has a certain propensity of turning out one way or the other. It's meaningless to ask what the probability of the hypothesis is; you can only ask what the probability of obtaining your data is under certain assumptions.
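For contrast, here is a toy sketch of computing P(hypothesis | data) the Bayesian way. Everything in it (the two candidate coin biases, the 50/50 prior) is an invented assumption of the example, and the prior is precisely the extra ingredient that frequentism declines to supply:

```python
# Sketch: computing P(hypothesis | data) requires a prior over hypotheses,
# which is exactly the ingredient frequentist hypothesis testing omits.
# Toy setup (all numbers invented for illustration): the coin is either
# fair (p = 0.5) or biased (p = 0.8), we observe 16 heads in 20 flips,
# and we assume a 50/50 prior over the two hypotheses.
from math import comb

def likelihood(heads, flips, p):
    """P(exactly `heads` heads in `flips` flips | P(heads) = p)."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

heads, flips = 16, 20
prior_fair = 0.5

l_fair = likelihood(heads, flips, 0.5)
l_biased = likelihood(heads, flips, 0.8)

# Bayes' rule: P(fair | data) = P(data | fair) P(fair) / P(data)
posterior_fair = (l_fair * prior_fair) / (
    l_fair * prior_fair + l_biased * (1 - prior_fair)
)
print(round(posterior_fair, 3))  # → 0.021
```

Change the prior and the posterior changes with it, which is why a frequentist who refuses to put a distribution over hypotheses cannot produce this number at all.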

Comment author: VincentYu 30 March 2014 09:06:44PM *  4 points [-]

I'm currently learning about hypothesis testing in my statistics class. The idea is that you perform some test and you use the results of that test to calculate:

P(data at least as extreme as your data | Null hypothesis)

This is the p-value. If the p-value is below a certain threshold then you can reject the null hypothesis.

This is correct.

Put another way:

P(data | hypothesis) = 1 - p-value

and if 1 - p-value is high enough then you accept the hypothesis. (My use of "data" is handwaving and not quite correct but it doesn't matter.)

This is not correct. You seem to be under the impression that

P(data | null hypothesis) + P(data | complement(null hypothesis)) = 1,

but this is not true because

  1. complement(null hypothesis) may not have a well-defined distribution (frequentists might especially object to defining a prior here), and
  2. even if complement(null hypothesis) were well defined, the sum could fall anywhere in the closed interval [0, 2].

More generally, most people (both frequentists and Bayesians) would object to "accepting the hypothesis" based on rejecting the null, because rejecting the null means exactly what it says, and no more. You cannot conclude that an alternative hypothesis (such as the complement of the null) has higher likelihood or probability.
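To put numbers on the second point: likelihoods of the same data under two different hypotheses are not complementary probabilities, so they need not sum to 1. The coin example and the p = 0.8 alternative below are invented for this sketch:

```python
# Toy illustration: a p-value is P(data at least this extreme | null), and
# the likelihoods of the same data under different hypotheses are not
# complementary probabilities. The coin example and the p = 0.8
# alternative are invented for this sketch.
from math import comb

def binomial_tail(n, k, p):
    """P(at least k heads in n flips when P(heads) = p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, heads = 20, 16

# One-sided p-value under the fair-coin null:
p_value = binomial_tail(n, heads, 0.5)
print(round(p_value, 4))  # → 0.0059

# The same tail statistic under the null and under an alternative (p = 0.8)
# answers "how probable is data this extreme under each model?" -- the two
# numbers need not sum to 1:
total = binomial_tail(n, heads, 0.5) + binomial_tail(n, heads, 0.8)
print(round(total, 4))
```

Here the two tail probabilities sum to roughly 0.64; with other choices of hypotheses the sum can fall anywhere in [0, 2].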

Comment author: Alicorn 15 March 2014 06:01:13AM 2 points [-]

What are the use cases of a ring signature? What does Alice hope to accomplish by arranging for only Bob to be able to verify a thing?

Comment author: VincentYu 15 March 2014 06:26:00AM *  10 points [-]

It's intended for situations where Alice and Bob wish to communicate digitally with some degree of privacy and authenticity. At the moment, using software like GnuPG (which is a free and open source implementation of OpenPGP), Alice can encrypt her messages to Bob to provide privacy, and she can also digitally sign her messages to provide authenticity (i.e., a cryptographic proof that she wrote the message).

Unfortunately, using such a system, Alice might worry that Bob could show others her messages and signatures, thus proving to others that she wrote and signed them. Such worries are not unusual if someone goes to the trouble of encrypting and signing their messages. For instance, if Alice is in a price negotiation with Bob, she might not want Bob to be able to prove to competitors that she offered a certain price.

By using a ring signature instead of a standard signature, Alice can alleviate such worries. When she signs a message using a ring signature, Bob can be sure that Alice signed the message, but it only proves to everyone else that either Alice or Bob signed the message. Thus Bob is unable to prove to others what Alice said to him, because he could have forged the ring signature himself!

Of course, Bob can still tell everyone what Alice said—using ring signatures instead of standard signatures only removes his ability to cryptographically prove to everyone what Alice said. This becomes a "he said she said" situation rather than a "she said this and here's the proof" situation. From Alice's point of view, the first situation is better than the second, so she would prefer to use ring signatures.

More generally, I hope for ring signatures to become easy enough that a typical user can apply them to normal emails. Right now, it is not advisable for the average user to sign their emails, simply because it is often a bad idea to give recipients the ability to prove to others what they said.

Comment author: VincentYu 15 March 2014 04:46:30AM *  14 points [-]

Since December, I have been working on a proposed extension to OpenPGP to add support for some sort of designated verifier signature. Most of this work involved doing a literature review and carefully reading the OpenPGP specification to see what cryptographic schemes can be implemented successfully in OpenPGP. I settled on a ring signature scheme by Abe et al. (2004), and recently posted a proposal to the OpenPGP mailing list. Note that this is a proposal for a draft of a proposed standard, so things remain very flexible. This is the best time for anyone to comment on the proposal, before I write up an I-D.

The following is a snippet of the description of ring signatures that I gave in my overview of the proposal:

Ring signatures allow a signer to specify a set of possible signers without revealing who actually signed the message.

The current OpenPGP specification lacks a straightforward way for Alice to send Bob an authenticated message that can only be verified by Bob. Right now, if such a signature is desired, the best that Alice can do is to send Bob a signed-then-encrypted message. Unfortunately, this means that Bob can show others the decrypted signature to prove to others that Alice signed the message. This ability of Bob to transfer Alice's signature to others is often undesirable from Alice's point of view.

Ring signatures provide an easy way for Alice to authenticate a message to Bob without concomitantly giving Bob the ability to transfer her signature. Alice simply creates a ring signature that specifies Alice and Bob as the possible signers of a message. Upon receipt of the message and ring signature, Bob is assured that the message is signed by Alice, but he cannot prove to others that Alice signed the message because Bob could have created the ring signature himself.
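For readers who want to see the ring structure itself, here is a toy Schnorr-based sketch in the general spirit of AOS-style ring signatures. This is not the exact construction in the proposal, the group parameters are insecurely small, and the `random` module is used instead of a cryptographic RNG purely so the demo is reproducible:

```python
# Toy Schnorr-style ring signature over a small discrete-log group, in the
# general spirit of the AOS construction (NOT the exact scheme in the
# proposal, and the parameters are far too small for real security).
import hashlib
import random

p = 10007   # safe prime: p = 2q + 1
q = 5003    # prime order of the subgroup
g = 4       # generator of the order-q subgroup (a quadratic residue mod p)

def H(msg, commit):
    """Hash a message and a group element down to a challenge in Z_q."""
    data = msg + b"|" + str(commit).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def ring_sign(msg, pubkeys, s, x_s):
    """Sign msg so that any key in pubkeys could have produced the
    signature; the real signer sits at index s with secret key x_s."""
    n = len(pubkeys)
    c = [None] * n
    t = [None] * n
    k = random.randrange(1, q)
    c[(s + 1) % n] = H(msg, pow(g, k, p))
    # Walk around the ring, simulating every other signer with a random
    # response t[i]; each step's challenge feeds the next.
    i = (s + 1) % n
    while i != s:
        t[i] = random.randrange(q)
        commit = (pow(g, t[i], p) * pow(pubkeys[i], c[i], p)) % p
        c[(i + 1) % n] = H(msg, commit)
        i = (i + 1) % n
    # Close the ring: only someone knowing x_s can make this step work.
    t[s] = (k - c[s] * x_s) % q
    return c[0], t

def ring_verify(msg, pubkeys, sig):
    c0, t = sig
    c = c0
    for i in range(len(pubkeys)):
        commit = (pow(g, t[i], p) * pow(pubkeys[i], c, p)) % p
        c = H(msg, commit)
    return c == c0

random.seed(1)
x_alice, y_alice = keygen()
x_bob, y_bob = keygen()
ring = [y_alice, y_bob]
sig = ring_sign(b"the price is $100", ring, 0, x_alice)
print(ring_verify(b"the price is $100", ring, sig))  # → True
```

Bob can verify the signature, but because either secret key can close the ring, a valid signature only proves "Alice or Bob signed this", which is exactly the deniability property described above.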

I am interested to hear from the perspective of end users. For instance:

  • Is there anything puzzling in my description of ring signatures?
  • Are there any particular features that you desire?

Of course, I would also like to hear from anyone who has the time to go through the details of the proposal. Sections 3 and 4 contain the bulk of the actual cryptographic scheme, and it closely follows Abe et al.'s scheme with only small modifications to better fit OpenPGP. Some resources:

  • Ring signatures are interesting by themselves, and I think some Less Wrong readers might enjoy reading the following papers. The standard reference for ring signatures is Rivest et al. (2006); I would recommend reading this paper first if you plan on reading Abe et al.'s paper, because it contains a very clear exposition of the ideas behind ring signatures. A third paper that I recommend is Bresson et al. (2002); Sections 1–3 are particularly useful.

  • The current OpenPGP specification is given in RFC 4880. I don't recommend reading it unless you have some burning desire to learn more about OpenPGP; it's not particularly illuminating.

In the best case scenario, ring signatures get deployed on GnuPG within a few years and Alice can create a ring-signed message for Bob simply by running: gpg2 --clearsign --ring-signature --recipient Bob

(In the not-quite-worst case scenario, I write the I-D, everyone agrees it's crap, and I vow never to touch math again.)

What are you working on? March / April 2014

3 VincentYu 15 March 2014 03:45AM

This is the supposedly-bimonthly-but-we-keep-skipping 'What are you working on?' thread. Previous threads are here. So here's the question:

What are you working on? 

Here are some guidelines:

  • Focus on projects that you have recently made progress on, not projects that you're thinking about doing but haven't started.
  • Why this project and not others? Mention reasons why you're doing the project and/or why others should contribute to your project (if applicable).
  • Talk about your goals for the project.
  • Any kind of project is fair game: personal improvement, research project, art project, whatever.
  • Link to your work if it's linkable.
Comment author: Pablo_Stafforini 04 March 2014 01:55:27AM 0 points [-]

Ubel, P. A., DeKay, M. L., Baron, J., & Asch, D. A. (1996). Public preferences for efficiency and racial equity in kidney transplant allocation decisions. Transplantation Proceedings, 28, 2997–3002.

Comment author: VincentYu 06 March 2014 02:08:33AM 3 points [-]

Comment author: VincentYu 04 March 2014 05:58:07AM 1 point [-]

I've requested a scan from my university's medical library.

Comment author: badger 25 February 2014 10:08:11PM 4 points [-]

Does anyone have advice on how to optimize the expectation of a noisy function? The naive approach I've used is to sample the function for a given parameter a decent number of times, average those together, and hope the result is close enough to stand in for the true objective function. This seems really wasteful though.

Most of the algorithms I'm coming across (like modelling the objective function with Gaussian process regression) would be useful, but are more high-powered than I need. Any simple techniques better than the naive approach? Any recommendations among sophisticated approaches?

Comment author: VincentYu 26 February 2014 12:39:44AM 2 points [-]

There are some techniques that can be used with simulated annealing to deal with noise in the evaluation of the objective function. See Section 3 of Branke et al. (2008) for a quick overview of proposed methods (they also propose new techniques in that paper). Most of these techniques come with the usual convergence guarantees that are associated with simulated annealing (but there are of course performance penalties in dealing with noise).

What is the dimensionality of your parameter space? What do you know about the noise? (e.g., if you know that the noise is mostly homoscedastic or if you can parameterize it, then you can probably use this to push the performance of some of the simulated annealing algorithms.)
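As a rough illustration of the naive-but-serviceable end of that spectrum, here is a sketch of simulated annealing on a noisy objective in which each candidate is scored by averaging repeated samples, taking more samples as the temperature drops. The toy objective, cooling schedule, and sample counts are all invented for this example, not taken from Branke et al.:

```python
# Sketch of simulated annealing on a noisy objective using the naive fix:
# average repeated evaluations, sampling more heavily as the temperature
# drops and accept/reject decisions hinge on smaller differences.
import math
import random

def noisy_objective(x):
    """True objective (x - 2)^2 observed through Gaussian noise; we want
    to minimize its expectation."""
    return (x - 2.0) ** 2 + random.gauss(0, 0.5)

def estimate(f, x, n):
    """Average n noisy evaluations (variance shrinks by a factor of n)."""
    return sum(f(x) for _ in range(n)) / n

def anneal(f, x0, steps=2000, t0=1.0):
    x = x0
    fx = estimate(f, x, 5)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9
        n = 5 + int(20 * (1 - temp))  # sample more as the search cools
        cand = x + random.gauss(0, 0.5)
        fc = estimate(f, cand, n)
        # Metropolis rule applied to the *estimated* objective values.
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
    return x

random.seed(0)
result = anneal(noisy_objective, x0=-5.0)
print(result)  # should land near the true minimum at x = 2
```

One known weakness of this naive version is that the incumbent's estimate `fx` is never refreshed, so a lucky low estimate can make the search sticky; the techniques in the literature address exactly this kind of issue.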

Comment author: gwern 20 February 2014 07:04:45PM 2 points [-]

"Peering Into Peer Review", 2014 (linked from http://pipeline.corante.com/archives/2014/02/20/the_nih_takes_a_look_at_how_the_moneys_spent.php and apparently the fulltext is at http://sciencemag.lovefuck.me/content/343/6171/596.short - no, I don't understand why Science is now being hosted on lovefuck.me either. There are many things in this world I don't understand.)

Comment author: VincentYu 21 February 2014 10:25:58AM 5 points [-]

Here.

apparently the fulltext is at http://sciencemag.lovefuck.me/content/343/6171/596.short - no, I don't understand why Science is now being hosted on lovefuck.me either. There are many things in this world I don't understand.

I took a look. That domain is acting as a 2-hop open web proxy. The first hop routes through a VPS in New York. The second hop routes through a (dedicated?) server in Montreal. The New York VPS is running nginx as a reverse proxy with no caching. The Montreal server is running Mr9.SM, which looks like an online fraud toolkit built on top of a web proxy back end. On the same server, there is an exposed MongoDB interface that is leaking data that should not be leaked.

There are other domains that are also using these two servers as a 2-hop open web proxy (the domains and two servers are most likely managed by the same entity, since the setup requires coordination). A small sample (rot13ed to stop bots from picking them up):

  • nqbivan.pbz
  • pbaqbefubrf.pbz
  • qbjaybnqjvaqbjf8.zr
  • serrcfq.zr
  • ccggrzcyngrf.zr
  • jvaqbjfqbjaybnq.zr

Strangely, I wasn't able to figure out what fraud is being attempted. I expected to see cookie stuffing, but there was no sign of that—the only thing added to each page is a javascript StatCounter tracker. Phishing seems unlikely given the rather conspicuous domain names... but I suppose many users still fall for that. I don't understand why a 2-hop design is used, especially since nginx is not being used to cache anything. If anyone figures out what's going on, I would be very interested to hear about it.
