Install a smoke detector (and reduce mortality by 0.3% if I'm reading the statistics right - not to mention the property damage prevented).
Out of curiosity, what's the greatest number of significant digits that you've ever memorized, in any time frame?
Also, 10 (EDIT: 20) random decimal digits is about 66 bits of entropy, which is an extraordinarily strong password and borders on being a usable cryptographic key (not for long-term safety against high-resource adversaries, but well out of "easily brute-forced by a modern computer" territory). Do you use the same kind of memorization you did here for passwords? I can (and do) memorize passwords longer than 20 characters, but I don't really count that because I generate the mnemonic first and then the password from it. Memorizing the password doesn't take long, but sometimes getting the mnemonic into my head does...
I use multiple passwords, each consisting of 12 characters drawn from a..z, A..Z, 0..9, and ~20 symbol characters, generated randomly. Total entropy of each is around 76 bits.
10 decimal digits is actually more like 33 bits of entropy.
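For reference, a quick sketch of the arithmetic behind these figures, assuming uniformly random characters (the 82-symbol alphabet here is 26 + 26 + 10 + ~20 from the comment above):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy of a string of uniformly random characters."""
    return length * math.log2(alphabet_size)

print(round(entropy_bits(10, 10), 1))  # 10 decimal digits: 33.2 bits
print(round(entropy_bits(10, 20), 1))  # 20 decimal digits: 66.4 bits
print(round(entropy_bits(82, 12), 1))  # 12 chars, 82-symbol alphabet: 76.3 bits
```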
Feminists believe that women are paid less than men for no good economic reason. If this is the case, feminists should invest in companies that hire many women, and short those which hire few women, to take advantage of the cheaper labour costs.
I suspect that the effect, if real, is likely small enough to be masked by confounders, like CEO competence, market conditions, various other biases of the executives and the board, random chance, etc. I wonder if any statistics exist on the matter.
Can you think of any unusual LW-type beliefs that have strong economic implications (say over the next 1-3 years)?
Given that MIRI and CFAR are still struggling to get enough funding despite presumably employing the most LW-rational people in the world, I severely doubt that LW rationality has "strong economic implications".
"small enough to be masked by confounders"
There are an extremely large number of companies. Unrelated effects should average out.
Regarding statistics: http://thinkprogress.org/economy/2014/07/08/3457859/women-ceos-beat-stock-market/ links to quite a few.
Given identical money payoffs between two options (even when adjusting for non-linear utility of money), choosing the non-ambiguous one has the added advantage of giving a boundedly rational agent fewer possible futures to spend computing resources on while the process of generating utility runs.
Consider two options: a) You wait one year and get 1 million dollars. b) You wait one year and get 3 million dollars with 0.5 probability (decided after this year).
If you take option b), depending on the size of your "utils", all planning for after the year must essentially be done twice, once for the case with 3 million dollars available and once for the case without.
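To put a toy number on that argument (my framing, not the commenter's): each unresolved binary uncertainty doubles the number of futures a bounded planner must prepare for.

```python
# Hypothetical sketch: each open binary bet doubles the contingency
# plans a bounded agent must maintain until the bet resolves.
def plans_needed(open_binary_bets: int) -> int:
    return 2 ** open_binary_bets

print(plans_needed(0))  # the certain option (a): 1 plan
print(plans_needed(1))  # option (b): 2 plans
print(plans_needed(3))  # three such open bets at once: 8 plans
```

The exponential growth is the point: ambiguity is not just a payoff question but a computational-budget question.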
I usually take the minutes of the German Pirate Party assemblies. It is non-trivial to transcribe two days of speech alone (and I don't know steno). A better solution is a collaborative editor and multiple people typing while listening to the audio with increasing delay, i.e. one person gets live audio, the next one a 20-second delay, etc. There is EtherPad, but the web client cannot really handle the 250 kB files a full-day transcript needs, and two of the people interested in taking minutes (me included) strongly prefer Vim over a glorified text field.
Hence: On the 23rd of June I downloaded the Vim source and started implementing collaborative editing. On the 28th and 29th three people used it for hours without major problems (except I initially started the server under gdb to get a backtrace in case of a crash, and gdb unhelpfully stopped it at the first SIGPIPE - but that was not the fault of my software).
To give you an idea of the complexity of collaborative editing, let me quote Joseph Gentle from http://sharejs.org: "I am an ex Google Wave engineer. Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time." It took me 5 days (and I had a full-day meeting on one of them) to deliver >80% of the goodness. Alone.
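To illustrate what makes this hard (a toy operational-transformation sketch, nothing to do with the actual Vim patch described above): two concurrent inserts must be transformed against each other so that both sites converge to the same document, and even the tie-breaking at equal positions has to be done asymmetrically or the sites diverge.

```python
# Toy operational transformation for two concurrent single-character
# inserts. Convention: site A wins ties at equal positions.

def transform_b_after_a(b_pos: int, a_pos: int) -> int:
    """Shift B's insert position past A's already-applied insert.
    A wins ties, so B shifts whenever a_pos <= b_pos."""
    return b_pos + 1 if a_pos <= b_pos else b_pos

def transform_a_after_b(a_pos: int, b_pos: int) -> int:
    """Shift A's insert position past B's already-applied insert.
    A wins ties, so A shifts only when b_pos is strictly smaller."""
    return a_pos + 1 if b_pos < a_pos else a_pos

def apply_insert(doc: str, pos: int, ch: str) -> str:
    return doc[:pos] + ch + doc[pos:]

doc = "vim"
a_pos, b_pos = 1, 1  # both sites insert at the same position, concurrently

# Each site applies its own operation first, then the other site's
# operation transformed against it.
site_a = apply_insert(apply_insert(doc, a_pos, "X"),
                      transform_b_after_a(b_pos, a_pos), "Y")
site_b = apply_insert(apply_insert(doc, b_pos, "Y"),
                      transform_a_after_b(a_pos, b_pos), "X")
print(site_a, site_b)  # both sites converge to "vXYim"
```

Scaling this up to multi-character edits, deletions, out-of-order delivery, and undo is where the "2 years at Google Wave" comes from.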
"While corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who's worked for a big company knows that the employees tend to follow their own interests, even when they conflict with those of the company. It's certainly nothing like the situation with a cell, where the survival of each cell organ depends on the survival of the whole cell. If the cell dies, the cell organs die; if the company fails, the employees can just get a new job."
These observations might not hold for uploads running on hardware paid for by the company. Which would give a combination of company+upload-tech superior cooperation options compared to current forms of collaboration. Also, company-owned uploads will have most of their social network inside the company as well, in particular not with uploads owned by competitors. Hence the natural group boundary would not be "uploads" versus "normals", but company boundaries.
There should be a step 9, where every potential author is sent the final article and has the option of refusing formal authorship (if she doesn't agree with the final article). Convention in academic literature is that each author individually endorses all claims made in an article, hence this final check.
So... how would I design an exercise to teach Checking Consequentialism?
Divide the group into pairs. One is the decider, the other is the environment. Let them play some game repeatedly; the prisoner's dilemma might be appropriate, but maybe it should be a little more complex. The environment's algorithm is predetermined by the teacher and known to both players.
The decider tries to maximize utility over the repeated rounds; the environment tries to minimize the decider's winnings by using social interaction between the evaluated game rounds, e.g. by trying to invoke all the fancy fallacies you outlined in the post or by convincing the decider that the environment's algorithm actually results in a different decision. By incorporating randomness into the environment's algorithm, this might even be used to train consequentialism under uncertainty.
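A minimal sketch of the game mechanics behind this exercise (my own illustration; tit-for-tat stands in for the teacher's published environment algorithm, and the payoffs are the standard prisoner's dilemma values):

```python
# Repeated prisoner's dilemma with a published environment algorithm.
# The decider should evaluate moves purely by their consequences
# against that algorithm, ignoring any table talk in between rounds.

PAYOFF = {  # (decider_move, environment_move) -> decider's payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Environment's published algorithm: copy the decider's last move."""
    return history[-1] if history else "C"

def play(decider_strategy, rounds=10):
    history, score = [], 0
    for _ in range(rounds):
        env_move = tit_for_tat(history)
        my_move = decider_strategy(history)
        score += PAYOFF[(my_move, env_move)]
        history.append(my_move)
    return score

always_cooperate = lambda history: "C"
always_defect = lambda history: "D"
print(play(always_cooperate))  # 30: mutual cooperation every round
print(play(always_defect))     # 14: one exploit (5), then punished (9 * 1)
```

Checking consequentialism here means noticing that against this particular algorithm, cooperation outscores defection regardless of what the environment player says between rounds.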
The described effect seems strongly related to the concept of opportunity cost.
I.e. while a bet of yours is still open, the resources spent entering that bet cannot be used to enter a (better) bet.
Just how bad of an idea is it for someone who knows programming and wants to learn math to try to work through a mathematics textbook with proof exercises, say Rudin's Principles of Mathematical Analysis, by learning a formal proof system like Coq and using that to try to do the proof exercises?
I'm figuring, hey, no need to guess whether whatever I come up with is valid or not. Once I get it right, the proof assistant will confirm it's good. However, I have no idea how much work it'll be to get even much simpler proofs than what's expected of the textbook reader right, how much work it'll be to formalize the textbook proofs even if you do know what you're doing, and whether there are areas of mathematics where you need an inordinate amount of extra work to get machine-checkable formal proofs going to begin with.
The Metamath project was started by a person who also wanted to understand math by coding it: http://metamath.org/
Generally speaking, machine-checked proofs are ridiculously detailed. But being able to create such detailed proofs did boost my mathematical understanding a lot. I found it worthwhile.
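To give a feel for the flavor (a minimal Lean 4 sketch rather than Coq, but the experience is similar): even "trivial" textbook steps must be justified by an explicit lemma, which is exactly where both the extra work and the extra understanding come from.

```lean
-- Commutativity of addition: one line in a textbook,
-- one explicit library lemma when machine-checked.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- An "obvious" inequality, also spelled out via a named lemma.
example (n : Nat) : n ≤ n + 1 := Nat.le_succ n
```

Real analysis in the style of Rudin needs far longer chains of such steps, which is why formalizing even a short textbook proof can take much longer than writing it on paper.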