Comment author:pan
11 February 2014 06:29:50PM
13 points
[-]
Luke wrote a detailed description of his approach to beating procrastination (here if you missed it).
Does anyone know if he's ever given an update anywhere as to whether or not this same algorithm works for him to this day? He seems to be very prolific and I'm curious about whether his view on procrastination has changed at all.
Comment author:gwern
14 February 2014 06:14:09PM
1 point
[-]
I have no idea. The selection isn't the best selection ever (I haven't even heard of some of them), but it can be improved for next time based on this time.
Comment author:Coscott
11 February 2014 09:26:29PM
8 points
[-]
I wrote a logic puzzle, which you may have seen on my blog. It has gotten a lot of praise, and I think it is a really interesting puzzle.
Imagine the following two-player game. Alice secretly fills 3 rooms with apples. She has an infinite supply of apples and infinitely large rooms, so each room can have any non-negative integer number of apples. She must put a different number of apples in each room. Bob will then open the doors to the rooms in any order he chooses. After opening each door and counting the apples, but before he opens the next door, Bob must accept or reject that room. Bob must accept exactly two rooms and reject exactly one room. Bob loves apples, but hates regret. Bob wins the game if the total number of apples in the two rooms he accepts is as large as possible. Equivalently, Bob wins if the single room he rejects has the fewest apples. Alice wins if Bob loses.
Which of the two players has the advantage in this game?
This puzzle is a lot more interesting than it looks at first, and the solution can be seen here.
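If you want to play with the game before peeking at the solution, here is a minimal Monte Carlo sketch. It assumes Alice draws her three distinct counts uniformly from 0..99 (an assumption for illustration only; the real game allows any non-negative integers) and gives Bob the most naive strategy possible: blindly reject whichever room he opens first.

```python
import random

def play_round(counts):
    """One round: Bob blindly rejects the first room he opens.
    He wins iff the rejected room held the fewest apples."""
    order = list(counts)
    random.shuffle(order)          # the order Bob opens the doors in
    return order[0] == min(order)  # reject door 1, accept the rest

# Assumption for this sketch: Alice picks three distinct counts uniformly
# from 0..99 (the real game allows any non-negative integers).
random.seed(0)
trials = 100_000
wins = sum(play_round(random.sample(range(100), 3)) for _ in range(trials))
print(f"blind-rejection win rate: {wins / trials:.3f}")
```

Blind rejection wins about a third of the time; any strategy that beats that baseline has to use the counts Bob has already seen, which is where the puzzle starts to bite.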
I would also like to see some of your favorite logic puzzles. If you have any puzzles that you really like, please comment and share.
Comment author:DanielLC
11 February 2014 09:31:37PM
5 points
[-]
To make sure I understand this correctly: Bob cares about winning, and getting no apples is as good as 3^^^3 apples, so long as he rejects the room with the fewest, right?
Comment author:solipsist
11 February 2014 09:51:36PM
1 point
[-]
A long one-lane, no-passing highway has N cars. Each driver prefers to drive at a different speed. They will each drive at that preferred speed if they can, and will tailgate if they can't. The highway ends up with clumps of tailgaters led by slow drivers. What is the expected number of clumps?
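For anyone who wants to check their answer empirically without spoiling the closed form, a quick simulation (drawing the N preferred speeds as a random permutation, with cars listed front-of-highway first):

```python
import random

def count_clumps(speeds):
    """speeds[0] is the frontmost car. A car starts a new clump exactly
    when its preferred speed is below that of every car ahead of it."""
    clumps, slowest_ahead = 0, float("inf")
    for s in speeds:
        if s < slowest_ahead:
            clumps += 1
            slowest_ahead = s
    return clumps

random.seed(1)
N, trials = 10, 100_000
avg = sum(count_clumps(random.sample(range(10**6), N))
          for _ in range(trials)) / trials
print(f"average number of clumps for N={N}: {avg:.3f}")
```

The empirical average for a few values of N should be enough to guess the general formula.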
Comment author:Coscott
12 February 2014 12:17:45AM
1 point
[-]
Imagine that you have a collection of very weird dice. For every prime between 1 and 1000, you have a fair die with that many sides. Your goal is to generate a uniform random integer from 1 to 1001 inclusive.
For example, using only the 2 sided die, you can roll it 10 times to get a number from 1 to 1024. If this result is less than or equal to 1001, take that as your result. Otherwise, start over.
This algorithm uses on average 10240/1001=10.229770... rolls. What is the fewest expected number of die rolls needed to complete this task?
When you know the right answer, you will probably be able to prove it.
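For reference, the expected cost of the baseline algorithm above can be computed exactly: each batch costs 10 rolls and is accepted with probability 1001/1024, so the number of batches is geometric and the expected total is 10 divided by that probability.

```python
from fractions import Fraction

# Baseline from above: roll the 2-sided die 10 times for a uniform number
# in 1..1024; accept it if <= 1001, otherwise start the batch over.
p_accept = Fraction(1001, 1024)        # chance a batch of 10 rolls is kept
expected_rolls = 10 / p_accept         # geometric number of batches, x10 rolls
print(expected_rolls)                  # 10240/1001
print(float(expected_rolls))           # 10.22977...
```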
Comment author:Strilanc
12 February 2014 05:20:14AM
*
1 point
[-]
If you care about more than the first roll, so you want to make lots and lots of uniform random numbers in 1, 1001, then the best die is (rot13'd) gur ynetrfg cevzr va enatr orpnhfr vg tvirf lbh gur zbfg ragebcl cre ebyy. Lbh arire qvfpneq erfhygf, fvapr gung jbhyq or guebjvat njnl ragebcl, naq vafgrnq hfr jung vf rffragvnyyl nevguzrgvp pbqvat.
Onfvpnyyl, pbafvqre lbhe ebyyf gb or qvtvgf nsgre gur qrpvzny cbvag va onfr C. Abgvpr gung, tvira gung lbh pbhyq ebyy nyy 0f be nyy (C-1)f sebz urer, gur ahzore vf pbafgenvarq gb n cnegvphyne enatr. Abj ybbx ng onfr 1001: qbrf lbhe enatr snyy ragveryl jvguva n qvtvg va gung onfr? Gura lbh unir n enaqbz bhgchg. Zbir gb gur arkg qvtvg cbfvgvba naq ercrng.
Na vagrerfgvat fvqr rssrpg bs guvf genafsbezngvba vf gung vs lbh tb sebz onfr N gb onfr O gura genafsbez onpx, lbh trg gur fnzr frdhrapr rkprcg gurer'f n fznyy rkcrpgrq qrynl ba gur erfhygf.
Ebyy n friragrra fvqrq qvr naq n svsgl guerr fvqrq qvr (fvqrf ner ynoryrq mreb gb A zvahf bar). Zhygvcyl gur svsgl-guerr fvqrq qvr erfhyg ol friragrra naq nqq gur inyhrf.
Gur erfhyg jvyy or va mreb gb bar gubhfnaq gjb. Va gur rirag bs rvgure bs gurfr rkgerzr erfhygf, ergel.
Rkcrpgrq ahzore bs qvpr ebyyf vf gjb gvzrf bar gubhfnaq guerr qvivqrq ol bar gubhfnaq bar, be gjb cbvag mreb mreb sbhe qvpr ebyyf.
Comment author:Coscott
14 February 2014 01:48:25AM
0 points
[-]
I am glad someone is thinking about it enough to fully appreciate the solution. You are suggesting taking advantage of 709*977=692693. You can do better.
You can do better than missing one part in 692693? You can't do it in one roll (not even a chance of one roll) since the dice aren't large enough to ever uniquely identify one result... is there SOME way to get it exactly? No... then it would be a multiple of 1001.
I am presently stumped. I'll think on it a bit more.
ETA: OK, instead of having ONE left over, you leave TWO over. Assuming the new pair is around the same size, that nearly doubles your trouble rate, but in the event of trouble, it gives you one bit of information on the outcome. So, you can roll a single 503-sided die instead of retrying the outer procedure?
Depending on the pair of primes that produce the two-left-over, that might be better. 709 is pretty large, though.
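A quick sanity check of the arithmetic in this exchange (the specific numbers 709, 977, and 503 come from the comments above):

```python
# Check the quoted numbers: 709, 977, and 503 are primes below 1000, and
# 709 * 977 overshoots a multiple of 1001 by exactly one outcome.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

assert all(is_prime(p) for p in (709, 977, 503))
product = 709 * 977
print(product)           # 692693
print(product % 1001)    # 1   -> exactly one leftover outcome
print(product // 1001)   # 692 full copies of the range 1..1001
```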
Comment author:Leonhart
16 February 2014 07:55:44PM
7 points
[-]
Brought to mind by the recent post about dreaming on Slate Star Codex:
Has anyone read a convincing refutation of the deflationary hypothesis about dreams - that is, that there aren't any? In the sense of nothing like waking experience ever happening during sleep; just junk memories with backdated time-stamps?
My brain is attributing this position to Dennett in one of his older collections - maybe Brainstorms - but it probably predates him.
Comment author:Yvain
16 February 2014 11:30:46PM
*
18 points
[-]
Stimuli can be incorporated into dreams - for example, if someone in a sleep lab sees you are in REM sleep and sprays water on you, you're more likely to report having had a dream it was raining when you wake up. Yes, this has been formally tested. This provides strong evidence that dreams are going on during sleep.
More directly, communication has been established between dreaming and waking states by lucid dreamers in sleep labs. Lucid dreamers can make eye movements during their dreams to send predetermined messages to laboratory technicians monitoring them with EEGs. Again, this has been formally tested.
Comment author:Alejandro1
17 February 2014 01:16:31AM
0 points
[-]
Indeed, there is an essay in Brainstorms articulating this position. IIRC Dennett does not explicitly commit to defending it, rather he develops it to make the point that we do not have a privileged, first-person knowledge about our experiences. There is conceivable third-person scientific evidence that might lead us to accept this theory (even if, going by Yvain's comment, this does not seem to actually be the case), and our first-person intuition does not trump it.
Comment author:mcoram
12 February 2014 01:17:07AM
*
7 points
[-]
I've written a game (also on github) that tests your ability to assign probabilities to yes/no events accurately, using a logarithmic scoring rule (called a Bayes score on LW, apparently).
For example, in the subgame "Coins from Urn Anise," you'll be told: "I have a mysterious urn labelled 'Anise' full of coins, each with possibly different probabilities. I'm picking a fresh coin from the urn. I'm about to flip the coin. Will I get heads? [Trial 1 of 10; Session 1]".
You can then adjust a slider to select a number a in [0,1]. As you adjust a, you adjust the payoffs that you'll receive if the outcome of the coin flip is heads or tails. Specifically you'll receive 1+log2(a) points if the result is heads and 1+log2(1-a) points if the result is tails. This is a proper scoring rule in the sense that you maximize your expected return by choosing a equal to the posterior probability that, given what you know, this coin will come out heads. The payouts are harshly negative if you have false certainty. E.g. if you choose a=0.995, you'd only stand to gain 0.993 if heads happens but would lose 6.644 if tails happens.
At the moment, you don't know much about the coin, but as the game goes on you can refine your guess. After 10 flips the game chooses a new coin from the urn, so you won't know so much about the coin again, but try to take account of what you do know -- it's from the same urn Anise as the last coin (iid). If you try this, tell me what your average score is on play 100, say.
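The payoff numbers above are easy to reproduce (a short sketch; `score` is my name for the rule, not necessarily what the game calls it internally):

```python
from math import log2

def score(a, heads):
    """Payoff for reporting probability a of heads under the game's rule."""
    return 1 + log2(a) if heads else 1 + log2(1 - a)

# False certainty is punished harshly, exactly as described:
print(round(score(0.995, True), 3))   # 0.993
print(round(score(0.995, False), 3))  # -6.644

# Properness: if the true heads-probability is p, the expected score
# p*score(a, True) + (1-p)*score(a, False) is maximized at a = p.
def expected_score(a, p):
    return p * score(a, True) + (1 - p) * score(a, False)

p = 0.7
assert all(expected_score(p, p) >= expected_score(a, p)
           for a in (0.5, 0.6, 0.8, 0.9))
```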
There are a couple of other random processes to guess in the game, and also a quiz. The questions are intended to force you to guess at least some of the time. If you have suggestions for other quiz questions, send them to me by PM in the format:
Comment author:Coscott
12 February 2014 02:11:07AM
3 points
[-]
This game has taught me something. I get more enjoyment than I should out of watching a random variable go up and down, and probably should avoid gambling. :)
Comment author:Emile
12 February 2014 10:32:17PM
1 point
[-]
Nice work, congrats! Looks fun and useful, better than the calibration apps I've seen so far (including one I made, that used confidence intervals - I had a proper scoring rule too!)
My score:
Current score: 3.544 after 10 plays, for an average score per play of 0.354.
Comment author:mcoram
13 February 2014 03:23:37AM
*
0 points
[-]
Thanks Emile,
Is there anything you'd like to see added?
For example, I was thinking of running it on nodejs and logging the scores of players, so you could see how you compare. (I don't have a way to host this, right now, though.)
Or another possibility is to add diagnostics. E.g. were you setting your guess too high systematically or was it fluctuating more than the data would really say it should (under some models for the prior/posterior, say).
Also, I'd be happy to have pointers to your calibration apps or others you've found useful.
Comment author:Vaniver
14 February 2014 10:26:29PM
5 points
[-]
An article on samurai mental tricks. Most of them will not be that surprising to LWers, but it is nice to see modern results have a long history of working.
Does anyone have advice for getting an entry level software-development job? I'm finding a lot seem to want several years of experience, or a degree, while I'm self taught.
Comment author:fezziwig
12 February 2014 06:58:44PM
10 points
[-]
Live in a place with lots of demand. Silicon Valley and Boston are both good choices; there may be others but I'm less familiar with them.
Have a github account. Fill it with stuff.
Have a personal site. Fill it with stuff.
Don't worry about the degree requirements; everybody means "Bachelor's in CS or equivalent".
Don't worry about experience requirements. Unlike the degree requirement this does sometimes matter, but you won't be able to tell by reading the advert so just go ahead and apply.
Prefer smaller companies. The bigger the company, the more likely it is that your resume will be screened out by some automated process before it can reach someone like me. I read people's githubs; HR necessarily does not.
Ignore what they say on the job posting, apply anyway with a resume that links to your Github, websites you've built, etc. Many will still reject you for lack of experience, but in many cases it will turn out the job posting was a very optimistic description of the candidate they were hoping to find, and they'll interview you anyway in spite of not meeting the qualifications on the job listing.
Comment author:Viliam_Bur
12 February 2014 09:35:12AM
*
5 points
[-]
links to your Github, websites you've built, etc.
This is just a guess, but I think it might be helpful to include some screenshots (in color) of the programs, websites, etc. That would make them "more real" to the person who reads this. At least, save them some inconvenience. Of course, I assume that the programs and websites have a nice user interface.
It's also an opportunity for an interesting experiment: randomly send 10 resumes without the screenshots, and 10 resumes with screenshots. Measure how many interview invitations you get from each group.
If you have a certificate from Udacity or another online university, mention that, too. Don't list it as formal education, but somewhere in the "other courses and certificates" category.
I think ideally, you want your code running on a website where they can interact with it, but maybe a screenshot would help entice them to go to the website. Or help if you can't get the code on a website for some reason.
Comment author:Viliam_Bur
14 February 2014 07:31:24PM
*
1 point
[-]
It depends on your model of who will be reading your resume.
I realized that my implicit model is some half-IT-literate HR person or manager. Someone who doesn't know what LaTeX is, and who couldn't download and compile your project from Github. But they may look at a nice printed paper and say: "oh, shiny!" and choose you instead of some other candidate.
Comment author:jkaufman
12 February 2014 10:28:09PM
*
1 point
[-]
Practicing whiteboard-style interview coding problems is very helpful. The best places to work will all make you code in the interview [1] so you want to feel at-ease in that environment. If you want to do a practice interview I'd be up for doing that and giving you an honest evaluation of whether I'd hire you if I were hiring.
[1] Be very cautious about somewhere that doesn't make you code in the interview: you might end up working with a lot of people who can't really code.
Comment author:maia
12 February 2014 08:16:25PM
1 point
[-]
If you have the skills to do software interviews well, the hardest part will be getting past resume screening. If you can, try to use personal connections to bypass that step and get interviews. Then your skills will speak for themselves.
Comment author:rxs
16 February 2014 04:25:05PM
*
4 points
[-]
Speed reading doesn't register many hits here, but in a recent thread on subvocalization there are claims of speeds well above 500 WPM.
My standard reading speed is about 200 WPM (based on my eReader statistics; it varies by content). I can push myself to maybe 240, but it is not enjoyable (I wouldn't read fiction at this speed), and 450-500 WPM with RSVP.
My aim this year is to get myself at 500+ WPM base (i.e. usable also for leisure reading and without RSVP).
Is this even possible? Claims seem to be contradictory.
Does anybody have recommendations on systems that actually work? Most I've seen seem like overblown claims designed to pump money from desperate managers... I'm willing to put money into it if it can actually deliver.
Comment author:Viliam_Bur
12 February 2014 10:47:27PM
*
4 points
[-]
A TEDx video about teaching mathematics: "Mathematics as a source of joy" (it is in Slovak; you have to select English subtitles). Had to share it, but I am afraid the video does not explain too much, and there is not much material in English to link to -- I only found two articles. So here is a bit more info:
The video is about an educational method of the Czech math teacher Vít Hejný; it is told by his son. Prof. Hejný created an educational methodology based mostly on Piaget, but specifically applied to the domain of teaching mathematics (elementary- and high-school levels). He taught the method to some volunteers, who used it to teach children in Czech Rep. and Slovakia. These days the inventor of the method is dead; he started writing a book but didn't finish it, and most of the volunteers are not working in education anymore. So I was afraid the art would be lost, which would be a pity. Luckily, his son finished the book, other people added their notes and experiences, and recently the method became very popular among teachers; in Czech Rep. the government now officially supports this method (in 10% of schools). My experience with this method from my childhood (outside of the school system, in summer camps) is that it's absolutely great.
I am afraid that if I try to describe it, most of it will just sound like common sense. Examples from real life are used. Kids are encouraged to solve the problems for themselves. The teacher is just a coach or moderator; s/he helps kids discuss each other's solutions. Start with specific examples, and only later move to abstract generalizations of them. Let the children discover the solution; they will remember it better. In some situations specific tools are used (e.g. basic addition and subtraction are taught by walking on a numeric axis on the floor; also see pictures here). For motivation, the specific examples are described using stories or animals or something interesting (e.g. the derivative of a function is introduced using a caterpillar climbing on hills). There is a big emphasis on keeping a good mood in the classroom.
Comment author:chaosmage
19 February 2014 11:54:04AM
0 points
[-]
This was fun. I like how he emphasizes that every kid can figure out all of math by herself, and that thinking citizens are what you need for a democracy rather than a totalitarian state - because the Czech republic was a communist dictatorship only a generation ago, and many teachers were already teachers then.
Comment author:Viliam_Bur
19 February 2014 12:27:29PM
*
1 point
[-]
A cultural detail which may help to explain this attitude:
In communist countries a career in science, or in teaching math or physics, was a very popular choice for smart people. It was maybe the only place where you could use your mind freely, without being afraid of contradicting something the Party said (which could ruin your career and personal life).
So there are many people here who have both "mathematics" and "democracy" as applause lights. But I'd say that after the end of the communist regime the quality of math education actually decreased, because the best teachers suddenly had many new career paths available. (I was in a math-oriented high school when the regime ended, and most of the best teachers left the school within two years and started their private companies or non-governmental organizations, usually somehow related to education.) Even the mathematical curriculum of prof. Hejný was invented during communism... but only under democracy does his son have the freedom to actually publish it.
Comment author:cursed
11 February 2014 08:33:42PM
*
4 points
[-]
I'm interested in learning pure math, starting from precalculus. Can anyone give advice on what textbooks I should use? Here's my current list (a lot of these textbooks were taken from the MIRI and LW best-textbook lists):
Calculus for Science and Engineering
Calculus - Spivak
Linear Algebra and its Applications - Strang
Linear Algebra Done Right
Div, Grad, Curl and All That (Vector calc)
Fundamentals of Number Theory - LeVeque
Basic Set Theory
Discrete Mathematics and its Applications
Introduction to Mathematical Logic
Abstract Algebra - Dummit
I'm well versed in simple calculus, going back to precalc to fill gaps I may have in my knowledge. I feel like I'm missing some major gaps in knowledge jumping from the undergrad to graduate level. Do any math PhDs have any advice?
Comment author:Coscott
11 February 2014 08:50:19PM
*
9 points
[-]
I advise that you read the first 3 books on your list, and then reevaluate. If you do not know any more math than what is generally taught before calculus, then you have no idea how difficult math will be for you or how much you will enjoy it.
It is important to ask what you want to learn math for. The last four books on your list are categorically different from the first four (or at least three of the first four). They are not a random sample of pure math, they are specifically the subset of pure math you should learn to program AI. If that is your goal, the entire calculus sequence will not be that useful.
If your goal is to learn physics or economics, you should learn calculus, statistics, analysis.
If you want to have a true understanding of the math that is built into rationality, you want probability, statistics, logic.
If you want to learn what most math PhDs learn, then you need things like algebra, analysis, topology.
For what it's worth, I'm doing roughly the same thing, though starting with linear algebra. At first I started with multivariable calc, but when I found it too confusing, people advised me to skip to linear algebra first and then return to MVC, and so far I've found that that's absolutely the right way to go. I'm not sure why they're usually taught the other way around; LA definitely seems more like a prereq of MVC.
I tried to read Spivak's Calc once and didn't really like it much; I'm not sure why everyone loves it. Maybe it gets better as you go along, idk.
I've been doing LA via Gilbert Strang's lectures on the MIT Open CourseWare, and so far I'm finding them thoroughly fascinating and charming. I've also been reading his book and just started Hoffman & Kunze's Linear Algebra, which supposedly has a bit more theory (which I really can't go without).
Comment author:Qiaochu_Yuan
12 February 2014 01:24:50AM
*
1 point
[-]
I think people generally agree that analysis, topology, and abstract algebra together provide a pretty solid foundation for graduate study. (Lots of interesting stuff that's accessible to undergraduates doesn't easily fall under any of these headings, e.g. combinatorics, but having a foundation in these headings will equip you to learn those things quickly.)
For analysis the standard recommendation is baby Rudin, which I find dry, but it has good exercises and it's a good filter: it'll be hard to do well in, say, math grad school if you can't get through Rudin.
For point-set topology the standard recommendation is Munkres, which I generally like. The problem I have with Munkres is that it doesn't really explain why the axioms of a topological space are what they are and not something else; if you want to know the answer to this question you should read Vickers. Go through Munkres after going through Rudin.
I don't have a ready recommendation for abstract algebra because I mostly didn't learn it from textbooks. I'm not all that satisfied with any particular abstract algebra textbooks I've found. An option which might be a little too hard but which is at least fairly comprehensive is Ash, which is also freely legally available online.
For the sake of exposure to a wide variety of topics and culture I also strongly, strongly recommend that you read the Princeton Companion. This is an amazing book; the only bad thing I have to say about it is that it didn't exist when I was a high school senior. I have other reading recommendations along these lines (less for being hardcore, more for pleasure and being exposed to interesting things) at my blog.
For analysis the standard recommendation is baby Rudin, which I find dry, but it has good exercises and it's a good filter: it'll be hard to do well in, say, math grad school if you can't get through Rudin.
I feel that it's only good as a test or for review, and otherwise a bad recommendation, made worse by its popularity (which makes its flaws harder to take seriously), and the widespread "I'm smart enough to understand it, so it works for me" satisficing attitude. Pugh's "Real Mathematical Analysis" is a better alternative for actually learning the material.
Keep a file with notes about books. Start with Spivak's "Calculus" (do most of the exercises at least in outline) and Polya's "How to Solve It", to get a feeling of how to understand a topic using proofs, a skill necessary to properly study texts that don't have exceptionally well-designed problem sets. (Courant&Robbins's "What Is Mathematics?" can warm you up if Spivak feels too dry.)
Given a good text such as Munkres's "Topology", search for anything that could be considered a prerequisite or an easier alternative first. For example, starting from Spivak's "Calculus", Munkres's "Topology" could be preceded by Strang's "Linear Algebra and Its Applications", Hubbard&Hubbard's "Vector Calculus", Pugh's "Real Mathematical Analysis", Needham's "Visual Complex Analysis", Mendelson's "Introduction to Topology" and Axler's "Linear Algebra Done Right". But then there are other great books that would help to appreciate Munkres's "Topology", such as Flegg's "From Geometry to Topology", Stillwell's "Geometry of Surfaces", Reid&Szendrői's "Geometry and Topology", Vickers's "Topology via Logic" and Armstrong's "Basic Topology", whose reading would benefit from other prerequisites (in algebra, geometry and category theory) not strictly needed for "Topology". This is a downside of a narrow focus on a few harder books: it leaves the subject dry. (See also this comment.)
Comment author:Nisan
11 February 2014 09:19:56PM
1 point
[-]
Maybe the most important thing to learn is how to prove things. Spivak's Calculus might be a good place to start learning proofs; I like that book a lot.
Comment author:iarwain1
18 February 2014 06:21:15PM
*
0 points
[-]
I'm doing precalculus now, and I've found ALEKS to be interesting and useful. For you in particular it might be useful because it tries to assess where you're up to and fill in the gaps.
I also like the Art of Problem Solving books. They're really thorough, and if you want to be very sure you have no gaps then they're definitely worth a look. Their Intermediate Algebra book, by the way, covers a lot of material normally reserved for Precalculus. The website has some assessments you can take to see what you're ready for or what's too low-level for you.
Comment author:[deleted]
17 February 2014 05:19:59PM
*
0 points
[-]
Both of the above (=Bentham's classical utilitarianism)
I mean this.
In any case, what answer do you expect?
I do not expect any specific answer.
What would constitute a valid reason?
For me personally, probably nothing, since, apparently, I neither really care about people (I guess I overintellectualized my empathy), nor about pleasure and suffering. The question, however, was asked mostly to better understand other people.
What are the assumptions from which you want to derive this?
Comment author:shminux
11 February 2014 09:16:38PM
*
7 points
[-]
2.5 years ago I made an attempt to calculate an upper bound for the complexity of the currently known laws of physics. Since the issue of physical laws and complexity keeps coming up, and my old post is hard to find with google searches, I'm reposting it here verbatim.
I would really like to see some solid estimates here, not just the usual hand-waving. Maybe someone better qualified can critique the following.
By "a computer program to simulate Maxwell's equations" EY presumably means a linear PDE solver for initial boundary value problems. The same general type of code should be able to handle the Schroedinger equation. There are a number of those available online, most written in Fortran or C, with the relevant code size about a megabyte. The Kolmogorov complexity of a solution produced by such a solver is probably of the same order as its code size (since the solver effectively describes the strings it generates), so, say, about 10^6 "complexity units". It might be much lower, but this is clearly the upper bound.
One wrinkle is that the initial and boundary conditions also have to be given, and the size of the relevant data heavily depends on the desired precision (you have to give the Dirichlet or Neumann boundary conditions at each point of a 3D grid, and the grid size can be 10^9 points or larger). On the other hand, the Kolmogorov complexity of this initial data set should be much lower than that, as the values for the points on the grid are generated by a piece of code usually much smaller than the main engine. So, in the first approximation, we can assume that it does not add significantly to the overall complexity.
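As a miniature illustration of the point that the generator is far smaller than the data it emits, here is a toy explicit finite-difference solver for the 1D heat equation (an assumption: a stand-in for the linear PDE solvers discussed, not any particular production code). A few hundred bytes of solver can generate an arbitrarily large solution grid:

```python
import math

def solve_heat_1d(u0, steps, r=0.25):
    """Explicit finite differences for u_t = u_xx with u=0 at both ends.
    r = dt/dx^2 must stay <= 0.5 for stability."""
    u = list(u0)
    for _ in range(steps):
        u = [0.0] + [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
                     for i in range(1, len(u) - 1)] + [0.0]
    return u

n = 101
u0 = [math.sin(math.pi * i / (n - 1)) for i in range(n)]  # initial condition
u = solve_heat_1d(u0, steps=200)
print(max(u))  # the sine mode decays, as diffusion predicts
```

The Kolmogorov-complexity point is that the solver plus a short initial-condition generator (here, one line of sine values) bounds the complexity of the entire output grid, however finely it is sampled.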
Things get dicier if we try to estimate in a similar way the complexity of models like General Relativity, the Navier-Stokes equations or Quantum Field Theory, due to their non-linearity and a host of other issues. When no general-purpose solver is available, how does one estimate the complexity? Currently, a lot of heuristics are used, effectively hiding part of the algorithm in the human mind, thus making any estimate unreliable, as the human mind (or "Thor's mind") is rather hard to simulate.
One can argue that the equations themselves for each of the theories are pretty compact, so the complexity cannot be that high, but then, as Feynman noted, all of physical laws can be written simply as A=0, where A hides all the gory details. We still have to specify the algorithm to generate the predictions, and that brings us back to numerical solvers.
I also cannot resist noting, yet again, that all interpretations of QM that rely on solving the Schroedinger equation have exactly the same complexity, as estimated above, and so cannot be distinguished by Occam's razor. This applies, in particular, to MWI vs Copenhagen.
It is entirely possible that my understanding of how to calculate the Kolmogorov complexity of a physical theory is flawed, so I welcome any feedback on the matter. But no hand-waving, please.
Comment author:Squark
16 February 2014 08:20:57PM
1 point
[-]
It shouldn't be that hard to find code that solves a non-linear PDE. A Google search reveals http://einsteintoolkit.org/ , an open-source toolkit that does numerical General Relativity.
However, QFT is not a PDE; it is a completely different object. The keyword here is lattice QFT. Google reveals this gem: http://xxx.tau.ac.il/abs/1310.7087
Nonperturbative string theory is not completely understood, however all known formulations reduce it to some sort of QFT.
Comment author:palladias
13 February 2014 04:06:12PM
*
3 points
[-]
I got to design my first infographic for work and I'd really appreciate feedback (it's here: "Did We Mess Up on Mammograms?").
I'm also curious about recommendations for tools. I used Easl.ly which is a WYSIWYG editor, but it was annoying in that I couldn't just tell it I wanted an mxn block of people icons, evenly spaced, but had to do it by hand instead.
Though rereading it, does anyone know whether Zach knows about MIRI and/or lesswrong? I expect "unfriendly human-created Intelligence " to parse to AI with bad manners to people unfamiliar with MIRI's work, which is probably not what the scientist is worried about.
Comment author:mwengler
12 February 2014 01:08:59AM
*
3 points
[-]
All this talk of P-zombies. Is there even a hint of a mechanism that anybody can think of to detect if something else is conscious, or to measure their degree of consciousness assuming it admits of degree?
I have spent my life figuring other humans are probably conscious purely on an Occam's razor kind of argument: I am conscious, and the most straightforward explanation for my similarities and grouping with all these other people is that they are in relevant respects just like me. But I have always thought that increasingly complex simulations of humans could be both "obviously" not conscious and mistaken by others as conscious. Is every human on the planet who reaches "voice mail jail" or some other interactive voice system aware that they have not reached a consciousness? Do even those of us who are aware forget sometimes when we are not being careful? Is this going to become an even harder distinction to make as tech continues to get better?
I have been enjoying the television show "Almost Human." In this show there are androids, most of which have been designed NOT to be too much like humans, although what they are really like is boring rule-following humans. It is clear in this show that the value of an android "life" is a tiny fraction of the value of a "human" life; in the first episode a human cop kills his android partner in order to get another one. The partner he does get is much more like a human, but still considered the property of the police department for which he works, and nobody really has much of a problem with this. Ironically, this "almost human" android partner is African American.
Comment author:cousin_it
12 February 2014 09:44:48AM
*
10 points
[-]
Is this going to become even a harder distinction to make as tech continues to get better?
Wei once described an interesting scenario in that vein. Imagine you have a bunch of human uploads, computer programs that can truthfully say "I'm conscious". Now you start optimizing them for space, compressing them into smaller and smaller programs that have the same outputs. Then at some point they might start saying "I'm conscious" for reasons other than being conscious. After all, you can have a very small program that outputs the string "I'm conscious" without being conscious.
So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them. It's not clear where the cutoff happens, or even if it's meaningful to talk about the cutoff happening at some point. And this is something that could happen in reality, if we ask a future AI to optimize the universe for more humans or something.
Also this scenario reopens the question of whether uploads are conscious in the first place! After all, the process of uploading a human mind to a computer can also be viewed as a compression step, which can fold constant computations into literal constants, etc. The usual justification says that "it preserves behavior at every step, therefore it preserves consciousness", but as the above argument shows, that justification is incomplete and could easily be wrong.
Comment author:mwengler
12 February 2014 06:52:08PM
0 points
[-]
So you might be able turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them.
Suppose you mean lossless compression. The compressed program has ALL the same outputs to the same inputs as the original program.
Then if the uncompressed program running had consciousness and the compressed program running did not, you have either proved or defined consciousness as something which is not an output. If it is possible to do what you are suggesting then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.
From an evolutionary point of view, can a feature with no output, absolutely zero effect on the interaction of the creature with its environment ever evolve? There would be no mechanism for it to evolve, there is no basis on which to select for it. It seems to me that to believe in the possibility of p-zombies is to believe in the supernatural, a world of phenomena such as consciousness that for some reason is not allowed to be listed as a phenomenon of the natural world.
At the moment, I can't really distinguish how a belief that p-zombies are possible is any different from a belief in the supernatural.
Also this scenario reopens the question of whether uploads are conscious in the first place!
Years ago I thought an interesting experiment to do in terms of artificial consciousness would be to build an increasingly complex verbal simulation of a human, to the point where you could have conversations involving reflection with the simulation. At that point you could ask it if it was conscious and see what it had to say. Would it say "not so far as I can tell?"
The p-zombie assumption is that it would say "yeah I'm conscious, duhh, what kind of question is that?" But the way a simulation actually gets built is that you have the list of requirements and you keep accreting code until all the requirements are met. If your requirements included a vast array of features but NOT the feature that it answer this question one way or another, conceivably you could elicit an "honest" answer from your sim. If all such sims answer "yes," you might conclude that somehow, in the collection of features you HAD required, consciousness emerged, and you could do other experiments where you removed features from the sim and kept statistics on how those sims answered the question. You might see the sim saying "no, don't think so," and conclude that whatever it is in us that makes us function as conscious, we hadn't found that thing yet and put it in our list of requirements.
Comment author:crazy88
12 February 2014 11:30:11PM
2 points
[-]
Then if the uncompressed program running had consciousness and the compressed program running did not, you have either proved or defined consciousness as something which is not an output. If it is possible to do what you are suggesting then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.
I haven't thought about this stuff for a while and my memory is a bit hazy in relation to it so I could be getting things wrong here but this comment doesn't seem right to me.
First, my p-zombie is not just a duplicate of me in terms of my input-output profile. Rather, it's a perfect physical duplicate of me. So one can deny the possibility of zombies while still holding that a computer with the same input-output profile as me is not conscious. For example, one could hold that only carbon-based life can be conscious, thereby denying that an identical input-output profile implies consciousness, while still denying the possibility of zombies (denying that a physical duplicate of a conscious carbon-based lifeform could lack consciousness).
Second, if it could be shown that the same input-output profile could exist even when consciousness was removed, this doesn't show that consciousness can't play a causal role in guiding behaviour. Rather, it shows that the same input-output profile can exist without consciousness. That doesn't mean that consciousness can't cause that input-output profile in one system while something else causes it in another system.
Third, it seems that one can deny the possibility of zombies while accepting that consciousness has no causal impact on behaviour (contra the last sentence of the quoted fragment): one could hold that the behaviour causes the conscious experience (or that the thing which causes the behaviour also causes the conscious experience). One could then deny that something could be physically identical to me but lack consciousness (that is, deny the possibility of zombies) while still accepting that consciousness lacks causal influence on behaviour.
Am I confused here or do the three points above seem to hold?
Comment author:mwengler
14 February 2014 08:37:36PM
*
0 points
[-]
Am I confused here or do the three points above seem to hold?
I think formally you are right.
But if consciousness is essential to how we get important aspects of our input-output map, then I think the chances of there being another mechanism that produces the same input-output map are about equal to the chances that you could program a car to drive from here to Los Angeles without using any feedback mechanisms, by just dialing in all the stops and starts and turns it would need ahead of time. Formally possible, but bearing absolutely no relationship to how anything that works has ever been built.
I am not a mathematician about these things, I am an engineer or a physicist in the sense of Feynman.
Comment author:cousin_it
14 February 2014 02:09:28PM
*
1 point
[-]
A few points:
1) Initial mind uploading will probably be lossy, because it needs to convert analog to digital.
2) I don't know if even lossless compression of the whole input-output map is going to preserve everything. Let's say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn't contain many interesting statements about consciousness, but that doesn't mean you're allowed to compress away consciousness. And even on longer timescales, people don't seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map. Or at least we can't say that input-output map is large, unless we figure out more about consciousness in the first place!
3) Even if consciousness plays a large causal role, I agree with crazy88's point that consciousness might not be the smallest possible program that can fill that role.
4) I'm not sure that consciousness is just about the input-output map. Doesn't it feel more like internal processing? I seem to have consciousness even when I'm not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I was mute.
Comment author:ChristianKl
14 February 2014 12:58:00AM
1 point
[-]
It depends on whether you subscribe to materialism. If you do, then there is nothing to measure. Consciousness might even be a tricky illusion, as Dennett suggests.
If on the other hand you believe that there is something beyond materialism, there are plenty of frameworks to choose from that provide ideas about what one could measure.
Comment author:mwengler
14 February 2014 08:33:11PM
0 points
[-]
If on the other hand you believe that there is something beyond materialism, there are plenty of frameworks to choose from that provide ideas about what one could measure.
OMG then someone should get busy! Tell me what I can measure and if it makes any kind of sense I will start working on it!
Comment author:ChristianKl
15 February 2014 02:36:47AM
*
0 points
[-]
I do have a qualia for perceiving whether someone else is present in a meditation or is absent-minded. It could be that it's some mental reaction that picks up microgestures or some other thing that I don't consciously perceive and summarizes that information into a qualia for mental presence.
Investigating how such a qualia works is what I would do personally when I would want to investigate consciousness.
But you probably have no such qualia, so you either need someone who has one or need to develop it yourself. In both cases that probably means seeking a good meditation teacher.
It's a difficult subject to talk about in a medium like this: people who are into a spiritual framework with some model of what consciousness is have phenomenological primitives that the audience I'm addressing doesn't have. In my experience, most of the people who I consider capable in that regard are very unwilling to talk about details with people who lack the phenomenological primitives to make sense of them. Instead of answering a question directly, a Zen teacher might give you a koan and tell you to come back in a month when you have built the phenomenological primitives to understand it, except that he doesn't tell you about phenomenological primitives.
Comment author:Lumifer
14 February 2014 05:17:27PM
4 points
[-]
An interesting quote, I wonder what people here will make of it...
True rationalists are as rare in life as actual deconstructionists are in university English departments, or true bisexuals in gay bars. In a lifetime spent in hotbeds of secularism, I have known perhaps two thoroughgoing rationalists—people who actually tried to eliminate intuition and navigate life by reasoning about it—and countless humanists, in Comte’s sense, people who don’t go in for God but are enthusiasts for transcendent meaning, for sacred pantheons and private chapels. They have some syncretic mixture of rituals: they polish menorahs or decorate Christmas trees, meditate upon the great beyond, say a silent prayer, light candles to the darkness.
I can't tell if the author means "rationalists" in the technical sense (i.e. as opposed to empiricists) but if he doesn't then I think it's unfair of him to require that rationalists "eliminate intuition and navigate life by reasoning about it", since this is so clearly irrational (because intuition is so indispensably powerful).
Comment author:listic
13 February 2014 12:39:08AM
*
2 points
[-]
I am going to organize a coaching course to learn Javascript + Node.js.
My particular technology of choice is node.js because:
If starting from scratch, having to learn just one language for both frontend and backend makes sense. Javascript is the only language you can use in a browser, so you will have to learn it anyway. They say it's a kind of Lisp or Scheme in disguise and a pretty cool language in itself.
Node.js is a modern asynchronous web framework, made by running Javascript code server-side on Google's open-source V8 JavaScript Engine. It seems to be well suited for building highly-loaded backend servers, and works for regular websites, too.
Hack Reactor teaches it, and 98% of its graduates go on to earn $110k/year on average after 3 months of study. But their tuition is $17,780. We will do it much cheaper.
I wanted to learn modern web technologies for a while, but haven't gotten myself to actually do it. When I tried to start learning, I was overwhelmed by the number of things I still have to learn to get anything done. Here's the bare minimum:
html
css
javascript
node.js
git
I believe the optimum course of action is to hire a guru to do coaching for me and several other students and split the cost. The benefits compared to learning by yourself are:
personal communication (via Skype or similar) and doing tasks along with the others provides an additional drive to complete your studies
the guru can choose an optimal path for me to reach the desired capabilities in the shortest time.
The capabilities that I want to achieve are:
i. To be able to add functionality to my Tumblr blog (where I run a writing prompt) by either using a custom theme + the Tumblr API, or extracting posts via the API and using them to render my blog on a separate website. node.js is definitely not needed here; rather, this is the simplest case of doing something useful that I need to do with web technologies, and node.js is my web technology of choice.
ii. To hack on Undum, a client-side hypertext interactive fiction framework. My thoughts on why I think Undum and IF are cool are here.
To port features from one version of Undum to another and create a version of Undum that is able to run all existing games (about 5 of them)
To abstract away Undum's internal game representation and state so that they can be loaded and saved externally, over a network
To create a server part for Undum that controls the version of the book you're allowed to read (allows you to read one new chapter a day, remembers the branch you're reading and whether you've read to the end, etc.)
To create a website that works as a YouTube and an editor for Undum games
iii. To create new experiments that utilize modern web technologies to interesting and novel effect. I know that this sounds really vague, but the point is that sometimes you never know what can be done until you learn the relevant skills. One example of the kind of thing that I think about is what this paper is talking about:
Comment author:Emile
13 February 2014 08:50:13AM
0 points
[-]
I would suggest using AngularJS instead, since it can be purely client-side code; you don't need to deal with anything server-side.
There are also some nice online development environments like Codenvy that provide a pretty rich environment and, I believe, have some collaborative features too (instead of using Dropbox, Doodle and Slideshare, maybe).
If all those technologies seem intimidating, some strategies:
Focus on a subset, i.e. only html and css
Use Anki a lot - I've used anki to put in git commands, AngularJS concepts and CSS tricks so that even if I wasn't actively working on a project using those, they'd stay at the back of my mind.
EDIT: This particular site does margin trading differently to how I thought margin trading normally works. So... disregard everything I just said?
Bitcoin economy and a possible violation of the efficient market hypothesis.
With the growing maturity of the Bitcoin ecosystem, there has appeared a website which allows leveraged trading, meaning that people who think they know which way the price is going can borrow money to increase their profits. At the time of writing, the bid-ask spread for the rates offered is 0.27% - 0.17% per day, which is 166% - 86% per annum. Depositors are not actually trading themselves, so the only failure modes I can see are if the exchange takes the money and runs, if there is a catastrophic failure of the trading engine, or if they get hacked. Gwern estimates that a Bitcoin exchange has a 1% chance of failure per month based upon past performance, but that was written some time ago, and the increased legal recognition of Bitcoin plus people learning from mistakes should decrease this probability. On the other hand, the biggest exchange, MtGox, froze withdrawals a few days ago, but note that they claim that this is a temporary technical fault. As additional information, Bitfinex's website states "The company is incorporated in Hong Kong as a Limited Liability Corporation.", which would seem to decrease the likelihood of the company stealing the money.
In conclusion, even assuming a pessimistic 1% chance of failure per month, I reach a conservative estimate of 65% APR expected returns (assuming that the interest is constant at the lower 0.17% figure).
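As a sanity check on those figures (my reconstruction: the 0.17% daily rate and the 1%/month failure estimate come from the text above, while the daily-compounding convention and total-loss-on-failure assumption are mine):

```python
daily_rate = 0.0017      # lower end of the quoted 0.27%-0.17% spread
p_fail_month = 0.01      # pessimistic monthly chance the exchange fails

# Compounded annual return if the exchange never fails:
gross_annual = (1 + daily_rate) ** 365 - 1

# Treat a failure as losing the whole deposit; compute expected
# growth per month, then compound over a year:
monthly_growth = (1 - p_fail_month) * (1 + daily_rate) ** (365 / 12)
expected_annual = monthly_growth ** 12 - 1

print(f"gross APR    ~ {gross_annual:.0%}")
print(f"expected APR ~ {expected_annual:.0%}")
```

Under these assumptions both numbers land where the comment says: roughly 86% gross and roughly 65% after the failure haircut.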
So why aren't people flocking to the website, starting a bidding war to drive the interest rate down to a tenth of its current value? Unless there is something wrong with my previous calculations, the best explanation I can think of is that it simply has not generated enough publicity. Perhaps also everyone in the Bitcoin community is assuming the price is going to increase by 10000%, or they are looking for the next big altcoin, or they are daytrading; either way, a boring but safe option doesn't seem so interesting. In conclusion, this seems to be an example where the efficient market hypothesis does not hold, due to insufficient propagation of information.
Disclaimers: I don't have shares in Bitfinex, and I hope this doesn't look like spam. This is a theoretical discussion of the EMH, not financial advice, and if you lose your money I am not responsible. I'm not sure whether this deserves its own post outside of discussion – please let me know.
Comment author:Lumifer
12 February 2014 04:10:37PM
2 points
[-]
the only failure modes I can see are if the exchange takes the money and runs, if there is a catastrophic failure of the trading engine, or if they get hacked.
The exchange can just fail in a large variety of ways and close (go bankrupt). If you're not "insured" you are exposed to the trading risk and insurance costs what, about 30%? and, of course, it doesn't help you with the exchange counterparty risk.
Comment author:niceguyanon
12 February 2014 08:38:13PM
1 point
[-]
Depositors are not actually trading themselves, so the only failure modes I can see are if the exchange takes the money and runs, if there is a catastrophic failure of the trading engine, or if they get hacked.
There is risk that is baked in from the fact that depositors are on the hook if trades can not be unwound quickly enough, and because this is Bitcoins, where volatility is crazy there is even more of this risk.
For example, assume you lend money for some trader to go long, and now say that prices suddenly drop so quickly that it puts the trader beyond a margin call, in fact at liquidation. Uh oh... the trader's margin wallet is now depleted, so who makes up the balance? The lenders. They actually do mention this on their website. But they don't tell you what the margin call policy is, and this is a really important part of the risk. If they allow a trader to put up only $50 of a $100 position and call them in when their portion hits 25%, that would be normal for something like index equities but pretty insane for something like Bitcoin.
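A toy numeric sketch of that failure mode (the 2:1 leverage, the 60% gap, and the recovery rule here are all hypothetical, since as noted the actual margin policy isn't published):

```python
# Hypothetical 2:1 leveraged long: the trader puts up $50, borrows $50.
trader_margin = 50.0
loan = 50.0
position = trader_margin + loan      # $100 of BTC bought

# The price gaps down 60% before the position can be liquidated:
liquidation_value = position * (1 - 0.60)   # only $40 recovered

# The trader's margin absorbs losses first, but it is wiped out here,
# so the shortfall falls on the lender:
lender_recovers = min(loan, liquidation_value)
lender_loss = loan - lender_recovers
print(f"lender recovers ${lender_recovers:.0f}, eats a ${lender_loss:.0f} loss")
```

The lender's downside scales with how far the price can gap past the liquidation trigger, which is exactly why the unpublished margin-call policy matters so much in a market as volatile as Bitcoin.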
Comment author:EGarrett
12 February 2014 04:52:46AM
2 points
[-]
How does solipsism change one's pattern of behavior, compared to believing that other people are actually alive? I noticed that when you take enlightened self-interest into account, it seems that many behaviors don't change regardless of whether the people around you are sentient or not.
For example, if you steal from your neighbor, you can observe that you run the risk of him catching you, and thus you having to deal with consequences that will be painful or unpleasant. Similarly, assuming you're a healthy person, you have a conscience that makes you feel bad about certain things, even when you get away with them.
Do you think your conscience would cease to bother you if you could know for a fact that there were no other living creatures feeling pain around you? In what other cases does a true solipsistic world make your behavior distinct from a non-solipsistic one?
Comment author:mwengler
12 February 2014 04:01:07PM
*
6 points
[-]
I'm certainly comfortable with violent fantasy when the roles are acted out. This suggests to me that if I were convinced that certain person-seeming things were not alive, not conscious, not what they seemed, this might tip me into some violent behaviors. I think at minimum I would experiment with it, try a slap here, a punch there. And where I went from there would depend on how it felt, I suppose.
Also I would almost certainly steal more stuff if I was convinced that everything was landscape.
Comment author:MrMind
12 February 2014 10:32:35AM
1 point
[-]
I noticed that when you take enlightened self-interest into account, it seems that many behaviors don't change regardless of whether the people around you are sentient or not.
When I was younger and studying analytical philosophy, I noticed the same thing. Unless solipsism morphs into apathy, there are still 'representations' you can't control and that you can care about. Unless it alters your values, there should be no difference in behaviour either.
Comment author:DanielLC
16 February 2014 07:11:50AM
0 points
[-]
If I didn't care about other people, I wouldn't worry about donating to charities that actually help people. I'd donate a little to charities that make me look good, and if I'm feeling guilty and distracting myself doesn't seem to be cost-effective, I'd donate to charities that make me feel good. I would still keep quite a bit of my money for myself, or at least work less.
As it is, I've figured that other people matter, and some of them are a lot cheaper to make happy than me, so I decided that I'm going to donate pretty much everything I can to the best charity I can find.
Comment author:fluchess
12 February 2014 03:03:45AM
2 points
[-]
I participated in an economics experiment a few days ago, and one of the tasks was as follows: choose one of the following gambles, where each outcome has 50% probability.
Option 1: $4 definitely
Option 2: $6 or $3
Option 3: $8 or $2
Option 4: $10 or $1
Option 5: $12 or $0
I chose option 5 as it has the highest expected value. Asymptotically this is the best option, but for a single trial, is it still the best option?
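For reference, the expected values of the five options can be tabulated directly (a straightforward calculation, not something from the original comment):

```python
# Each gamble is a 50/50 between two payoffs (option 1 pays $4 either way).
gambles = {
    1: (4, 4),
    2: (6, 3),
    3: (8, 2),
    4: (10, 1),
    5: (12, 0),
}

def expected_value(outcomes):
    """Mean payoff when both outcomes are equally likely."""
    return sum(outcomes) / len(outcomes)

for option, outcomes in gambles.items():
    print(f"Option {option}: EV = ${expected_value(outcomes):.2f}")
```

The EVs step up by $0.50 per option, so option 5's edge over the sure thing is $2 in expectation, which is what makes the risk-aversion question interesting.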
Comment author:Coscott
12 February 2014 03:14:32AM
13 points
[-]
Technically, it depends on your utility function. However, even without knowing your utility function, I can say that for such a low amount of money, your utility function is very close to linear, and option 5 is the best.
Comment author:jkrause
12 February 2014 07:16:37AM
9 points
[-]
Here's one interesting way of viewing it that I once read:
Suppose that the option you chose, rather than being a single trial, were actually 1,000 trials. Then, risk averse or not, Option 5 is clearly the best approach. The only difficulty, then, is that we're considering a single trial in isolation. However, when you consider all such risks you might encounter in a long period of time (e.g. your life), then the situation becomes much closer to the 1,000 trial case, and so you should always take the highest expected value option (unless the amounts involved are absolutely huge, as others have pointed out).
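The 1,000-trial intuition can be sketched in a few lines (my own illustration of the argument, not part of the original comment):

```python
import random

def play(outcomes, trials, seed=0):
    """Total payoff from repeating an equal-odds gamble many times."""
    rng = random.Random(seed)
    return sum(rng.choice(outcomes) for _ in range(trials))

# Over 1,000 trials the spread shrinks relative to the total, so the
# highest-EV option reliably comes out ahead of the sure thing:
for outcomes in [(4, 4), (6, 3), (8, 2), (10, 1), (12, 0)]:
    print(outcomes, play(outcomes, 1000))
```

With 1,000 draws, the standard deviation of option 5's total is only about $190 against a $6,000 mean, so it beats the guaranteed $4,000 essentially every time.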
Comment author:EGarrett
12 February 2014 04:46:07AM
3 points
[-]
As a poker player, the idea we always batted back and forth was that Expected Value doesn't change over shorter sample sizes, including a single trial. However you may have a risk of ruin or some external factor (like if you're poor and given the option of being handed $1,000,000 or flipping a coin to win $2,000,001).
Barring that, if you're only interested in maximizing your result, you should follow EV. Even in a single trial.
Comment author:Dagon
12 February 2014 09:01:53AM
1 point
[-]
Clearly option 5 has the highest mean outcome. If you value money linearly (that is, $12 is exactly 3 times as good as $4, and there's no special utility threshold, or disutility at $0, along the way), it's the best option.
For larger values, your value for money may be nonlinear (meaning: the difference between $0 and $50k may be much much larger than the difference between $500k and $550k to your happiness), and then you'll need to convert the payouts to subjective value before doing the calculation. Likewise if you're in a special circumstance where there's a threshold value that has special value to you - if you need $3 for bus fare home, then option 1 or 2 become much more attractive.
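One way to make the "convert the payouts to subjective value" step concrete is to plug in log utility (a common textbook choice; the background-wealth figure and the 100,000x scaling factor are my own illustrative assumptions):

```python
import math

def expected_log_utility(outcomes, wealth):
    """Expected utility of an equal-odds gamble under log utility,
    starting from some background wealth."""
    return sum(math.log(wealth + x) for x in outcomes) / len(outcomes)

gambles = [(4, 4), (6, 3), (8, 2), (10, 1), (12, 0)]

# With $1,000 of background wealth, $4-$12 stakes are trivial and the
# highest-EV gamble wins; scale the same payoffs up 100,000x and a
# lower-variance gamble wins instead.
for scale in (1, 100_000):
    best = max(gambles, key=lambda g: expected_log_utility(
        [x * scale for x in g], wealth=1000))
    print(f"scale {scale}: best gamble {best}")
```

Notably, at the large scale the winner under log utility is option 2 rather than the sure thing: a 50/50 between $600k and $300k still beats a certain $400k, and it's the possible near-zero outcome of option 5 that log utility punishes hardest.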
Comment author:DanielLC
16 February 2014 07:15:20AM
0 points
[-]
That depends on the amount of background money and randomness you have.
Although I can't really see any case where I wouldn't pick option five. Even if that's all the money I will ever have, my lifespan, and by extension my happiness, will be approximately linear with time.
If you specify that I get that much money each day for the rest of my life, and that's all I get, then I'd go for something lower risk.
Something I recently noticed: steelmanning is popular on LessWrong. But the sequences contain a post called Against Devil's Advocacy, which argues strongly against devil's advocacy, and steelmanning often looks a lot like devil's advocacy. What, if anything, is the difference between the two?
Steelmanning is about fixing errors in an argument (or otherwise improving it), while retaining (some of) the argument's assumptions. As a result, the argument becomes better, even if you disagree with some of the assumptions. The conclusion of the argument may change as a result, what's fixed about the conclusion is only the question that it needs to clarify. Devil's advocacy is about finding arguments for a given conclusion, including fallacious but convincing ones.
So the difference is in the direction of reasoning and intent regarding epistemic hygiene. Steelmanning starts from (somewhat) fixed assumptions and looks for more robust arguments following from them that would address a given question (careful hypothetical reasoning), while devil's advocacy starts from a fixed conclusion (not just a fixed question that the conclusion would judge) and looks for convincing arguments leading to it (rationalization with allowed use of dark arts).
A bad aspect of a steelmanned argument is that it can be useless: if you don't accept the assumptions, there is often little point in investigating their implications. A bad aspect of a devil's advocate's argument is that it may be misleading, acting as filtered evidence for the chosen conclusion. In this sense, devil's advocates exercise the skill of coming up with misleading arguments, which might be bad for their ability to reason carefully in other situations.
What leads you to believe that you disagree with Eliezer on this point? I suspect that you are just going by the title. I just read the essay and he endorses lots of practices that others call Devil's Advocacy. I'm really not sure what practice he is condemning. If you can identify a specific practice that you disagree with him about, could you describe it in your own words?
I am still seeking players for a multiplayer game of Victoria 2: Hearts of Darkness. We have converted from an earlier EU3 game, itself converted from CK2; the resulting history is very unlike our own. We are currently in 1844:
Islamic Spain has publicly declared half of Europe to be dar al Harb, liable to attack at any time, while quietly seeking the return of its Caribbean colonies by diplomatic means.
The Christian powers of Europe discuss the partition of Greece-across-the-sea, the much-decayed final remnant of the Roman Empire, which nonetheless rules eastern Africa from the Nile Delta to Lake Tanganyika.
United India jostles with China for supremacy in Asia, both courting the lesser powers of Sind and the Mongol Khanate as allies in their struggle. The Malayan Sultanate, the world's foremost naval power, keeps its vast fleet as the balancing weight in these scales, supporting now one, now another as the advantage shifts - while keeping a wary eye on the West, looking for a European challenge to its Pacific hegemony.
The Elbe, marking the border of the minor powers France-Allemagne and Bavaria, remains a flashpoint for Great-Power rivalries, as it has been for centuries. The diplomatic balance is once again shifting, with France-Allemagne opportunistically seeking support from Bavaria's historic protector Spain, Scandinavia eyeing the Baltic ports of both sides, and Russia seemingly distracted by imperial concerns in Asia.
An enormous darkness shrouds the South American continent; where the ancient Inca kingdom has extended its rule, and its human sacrifices, from the Tierra del Fuego to the Rio Grande. Only a few Amazonian tribes, protected by the jungle canopy, maintain a precarious independence; and the Jaguar Knights are ever in search of new conquests to feed their gods. The oceans have protected Europe, and distance and desert North America; but an age of steam ships and iron horses dawns, and the globe shrinks. Beplumed cavalry may yet ride in triumph through the streets of London, and obsidian knives flash atop the Great Pyramid.
Several nations are available to play:
Sind, an important regional power, occupying roughly the area of Pakistan, Afghanistan, and parts of Iran. Contend with India for the rule of the subcontinent!
Najd, likewise a significant factor in the power-balance of both Asia and Europe, taking up most of the Middle East. Fight Russia for Anatolia, Greece for Africa, or ally with India to partition Sind!
The Khanate, a landlocked power stretching from the Urals to very nearly the Pacific - but not quite, courtesy of the Korean War. Reverse the outcome and bring a new Mandate to rule China!
Greece-in-exile, least among the powers that bestride the Earth - that is, not counting the various city-states, vassals, and half-independent border marches that some Great Powers find it convenient to maintain. Take on usurping Italia Renata, bullying Russia and infidel Spain, and restore the glory that was Rome!
Additionally, playing in an MP campaign offers all sorts of opportunities for sharpening your writing skills through stories set in the alternate history!
Sometimes I feel like looking into how I can help humanity (e.g. 80000 hours stuff), but other times I feel like humanity is just irredeemable and may as well wipe itself off the planet (via climate change, nuclear war, whatever).
For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this (and other deficiencies, of course). Yet discourse about such things is overwhelmingly negative, mired in what I think are irrational kneejerk reactions to defend "what it means to be human." So I'm just like, you know what? Fuck it. You can't even help yourselves help yourselves. Forget it.
Comment author:jaibot
12 February 2014 06:39:38AM
22 points
[-]
You know how when you see a kid about to fall off a cliff, you shrug and don't do anything because the standards of discourse aren't as high as they could be?
A task with a better expected outcome is still better (in expected outcome), even if it's hopeless, silly, not as funny as some of the failure modes, not your responsibility or in some way emotionally less comfortable.
Also, would you still want to save a drowning dog even if it might bite you out of fear and misunderstanding? (let's say it is a small dog and a bite would not be drastically injurious)
Comment author:Viliam_Bur
12 February 2014 09:50:30AM
4 points
[-]
If you think helping humanity is (in the long term) a futile effort, because humans are so stupid they will destroy themselves anyway... I'd say the organization you are looking for is CFAR.
So, how would you feel about making a lot of money and donating to CFAR? (Or other organization with a similar mission.)
Comment author:Slackson
12 February 2014 03:25:44AM
2 points
[-]
I can't speak for you, but I would hugely prefer for humanity to not wipe itself out, and even if it seems relatively likely at times, I still think it's worth the effort to prevent it.
If you think existential risks are a higher priority than parasite removal, maybe you should focus your efforts on those instead.
Comment author:mwengler
12 February 2014 04:28:22PM
*
4 points
[-]
I think it is amazingly myopic to look at the only species that has ever started a fire or crafted a wheel and conclude that
humans are so facepalmingly bad at making decisions
The idea that climate change is an existential risk seems wacky to me. It is not difficult to walk away from an ocean that is rising at even 1 m a year, and no one hypothesizes anything close to that rate. We are adapted to a broad range of climates and able to move north, south, east, and west as the winds might blow us.
As for running out of fossil fuels: thinking we are doing something wildly stupid with our use of fossil fuels seems to me about as sensible as thinking a centrally planned economy will work better. It is not intuitive that a centrally planned economy will be a piece of crap compared to what we have, but it turns out to be true. Thinking that you, or even a bunch of people like you with no track record doing ANYTHING, can second-guess the markets in fossil fuels seems intuitively right, but if you ever get around to testing your intuitions, I don't think you'll find it holds up. And if you think even doubling the price of fossil fuels really changes the calculus by much: Europe and Japan have lived that life for decades compared to the US, and yet the US is home to the wackiest and most ill-thought-out alternatives to fossil fuels in the world.
Can anybody explain to me why creating a wildly popular luxury car which effectively runs on burning coal is such a boon to the environment that it should be subsidized at $7500 by the US federal government and an additional $2500 by states such as California, which has been so close to bankruptcy recently? Well, that is what a Tesla is if you drive one in a country with coal on the grid, and most of Europe, China, and the US are in that category. The Tesla S Performance puts out the same amount of carbon as a car getting 25 mpg of gasoline.
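A quick sanity check of that 25 mpg claim (my numbers, not the original commenter's: I'm assuming ~80 lb CO2 per 100 miles for an electric car charged entirely from coal, and the common EPA-style figure of ~19.6 lb CO2 per gallon of gasoline burned):

```python
# Back-of-the-envelope check of the "25 mpg" equivalence.
# Both constants are assumptions, not figures from the comment itself.
COAL_EV_LB_PER_100MI = 80.0   # lb CO2 / 100 mi, coal-only grid (assumed)
GASOLINE_LB_PER_GAL = 19.6    # lb CO2 / gallon of gasoline (assumed)

gallons_per_100mi = COAL_EV_LB_PER_100MI / GASOLINE_LB_PER_GAL
mpg_equivalent = 100.0 / gallons_per_100mi
print(f"{mpg_equivalent:.1f} mpg")  # 24.5 mpg, i.e. roughly the 25 mpg claimed
```

Under those assumptions the claim checks out to within rounding.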
Comment author:drethelin
12 February 2014 06:12:26PM
-1 points
[-]
It's not difficult to walk away from an ocean? Please explain New Orleans.
Teslas (and other stuff getting power from the grid) currently run mostly on coal, but ideally they can be run off (unrealistically) solar or wind or (realistically) nuclear.
Comment author:mwengler
12 February 2014 06:58:28PM
2 points
[-]
It's not difficult to walk away from an ocean? Please explain New Orleans.
Are you under the impression that the climate-change rise in ocean level will look like a dike breaking? All references to sea levels rising report less than 1 cm a year, but let's say that rises 100-fold to 1 m/yr. New Orleans flooded a few meters in at most a few days, about 1 m/day.
A factor of 365 in rate could well be the subtle difference between finding yourself on the roof of a house and finding yourself living in a house a few miles inland.
The rest of your point seems to hold, though; if the subsidy is predicated on reducing CO2 emissions then the equivalent of 25mpg still isn't anything to brag about.
Comment author:Nornagest
12 February 2014 06:02:40PM
*
1 point
[-]
works out to around 80 lb CO2 generated
This is likely an overestimation, since it assumes that you're exclusively burning coal. Electricity production in the US is about 68% fossil, the rest deriving from a mixture of nuclear and renewables; the fossil-fuel category also includes natural gas, which per your link generates about 55-60% the CO2 of coal per unit electricity. This varies quite a bit state to state, though, from almost exclusively fossil (West Virginia; Delaware; Utah) to almost exclusively nuclear (Vermont) or renewable (Washington; Idaho).
Based on the same figures, and breaking it down by the national mix of coal, natural gas, and nuclear and renewables, I'm getting a figure of 43 lb CO2 / 100 mi, or roughly a 50 mpg equivalent. Since its subsidies came up: California burns almost no coal but gets a bit more than 60% of its energy from natural gas; its equivalent would be about 28 lb CO2.
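The weighted-average arithmetic above can be reconstructed roughly like this (the exact generation shares are my assumptions, approximating the 2013 US mix, since the comment doesn't state them; natural gas is taken at ~57.5% of coal's CO2 per unit electricity, the midpoint of the 55-60% range quoted):

```python
# Hedged reconstruction of the grid-mix CO2 figure.
# Assumed generation shares (approximate, not from the comment):
#   coal ~39%, natural gas ~27%, nuclear + renewables ~34% (counted as zero CO2)
COAL_ONLY_LB_PER_100MI = 80.0  # the coal-only figure quoted above

shares = {"coal": 0.39, "gas": 0.27, "zero_carbon": 0.34}
relative_intensity = {"coal": 1.0, "gas": 0.575, "zero_carbon": 0.0}

lb_per_100mi = COAL_ONLY_LB_PER_100MI * sum(
    shares[fuel] * relative_intensity[fuel] for fuel in shares
)
print(f"{lb_per_100mi:.0f} lb CO2 / 100 mi")  # ~44, close to the 43 lb quoted
```

The mpg equivalent then depends on which per-gallon gasoline CO2 factor you plug in, which is presumably where the "about 50 mpg" rounding comes from.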
Comment author:DanielLC
16 February 2014 07:21:23AM
0 points
[-]
If you're looking for ways to eliminate existential risk, then it matters whether humanity is about to kill itself no matter what you do, so that you're just putting the end off a few years instead of a few billion. If you're just looking for ways to help individuals, it's pretty irrelevant. I guess it means that what matters is what happens now, instead of the flow-through effects after a billion years, but that's still a big effect.
If you're suggesting that the life of the average human isn't worth living, then saving lives might not be a good idea, but there are still ways to help keep the population low.
Besides, if humanity was great at helping itself, then why would we need you? It is precisely the fact that we allow extreme inequality to exist that means that you can make a big difference.
Comment author:ChristianKl
14 February 2014 12:42:27AM
0 points
[-]
For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this
I think you underrate the existential risks that come along with substantial genetic or neurological enhancements. I'm not saying we shouldn't go there but it's no easy subject matter. It requires a lot of thought to address it in a way that doesn't produce more problems than it solves.
For example the toolkit that you need for genetic engineering can also be used to create artificial pandemics which happen to be the existential risk most feared by people in the last LW surveys.
When it comes to running out of fossil fuels we seem to do quite well. Solar energy halves costs every 7 years. The sun doesn't shine the whole day so there's still further work to be done, but it doesn't seem like an insurmountable challenge.
I think you underrate the existential risks that come along with substantial genetic or neurological enhancements.
It's true, I absolutely do. It irritates me. I guess this is because the ethics seem obvious to me: of course we should prevent people from developing a "supervirus" or whatever, just as we try to prevent people from developing nuclear arms or chemical weapons. But steering towards a possibly better humanity (or other sentient species) just seems worth the risk to me when the alternative is remaining the violent apes we are. (I know we're hominids, not apes; it's just a figure of speech.)
When it comes to running out of fossil fuels we seem to do quite well. Solar energy halves costs every 7 years.
That's certainly a reassuring statistic, but a less reassuring one is that solar power currently supplies less than one percent of global energy usage! Changing that (and especially changing it quickly) will be an ENORMOUS undertaking, and there are many disheartening roadblocks in the way (utility companies, lack of government will, etc.). The fact that solar itself is getting less expensive is great, but switching over from fossil fuels to solar (e.g. phasing out old power plants and building brand new ones) is still incredibly expensive.
Comment author:ChristianKl
14 February 2014 11:59:03AM
2 points
[-]
I guess this is because the ethics seem obvious to me: of course we should prevent people from developing a "supervirus" or whatever, just as we try to prevent people from developing nuclear arms or chemical weapons.
Of course the ethics are obvious. The road to hell is paved with good intentions. 200 years ago burning all those fossil fuels to power steam engines sounded like a really great idea.
If you simply try to solve problems created by people adopting technology by throwing more technology at it, that's dangerous.
The wise way is to understand the problem you are facing and make specific interventions that you believe will help. CFAR-style rationality training might sound less impressive than changing around people's neurology, but it might be an approach with a lot fewer ugly side effects.
CFAR style rationality training might seem less technological to you. That's actually a good thing because it makes it easier to understand the effects.
The fact that solar itself is getting less expensive is great, but unfortunately the changing over from fossil fuels to solar (e.g. phasing out old power plants and building brand new ones) is still incredibly expensive.
It depends on what issue you want to address. Given how things are going, technology evolves in a way where I don't think we have to fear that we will have no energy when coal runs out. There's plenty of coal around, and green energy evolves fast enough for that task.
On the other hand, we don't want to burn that coal. I want to eat tuna that's not full of mercury, and there's already a recommendation from the European Food Safety Authority against eating tuna every day because there's so much mercury in it. I want fewer people getting killed by fossil fuel emissions. I also want less greenhouse gas in the atmosphere.
is still incredibly expensive.
If you want to do policy that pays off in 50 years, looking at how things are at the moment narrows your field of vision too much.
If solar continues its price development and costs 1/8 as much in 21 years, you won't need government subsidies to get people to prefer solar over coal. With another 30 years of deployment, we might not burn any coal in 50 years.
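The 1/8-in-21-years figure follows directly from the halving-every-7-years claim, since 21 years is three halvings. A minimal sketch (taking today's cost as 1.0; the halving period is the figure claimed above, not an established constant):

```python
# Exponential cost decline: halve every `halving_period` years.
def solar_cost(years_from_now, halving_period=7.0):
    """Relative cost (today = 1.0) after the given number of years,
    assuming the claimed halving rate holds up."""
    return 0.5 ** (years_from_now / halving_period)

print(solar_cost(21))  # 0.125, i.e. 1/8 of today's cost
```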
disheartening roadblocks in the way (utility companies, lack of government will, etc.).
If you think lack of government will or utility companies are the core problem, why focus on changing human neurology? Addressing politics directly is more straightforward.
When it comes to solar power, it might also be that nobody will use any solar panels in 50 years because Craig Venter's algae are just a better energy source. Betting too much on single cards is never good.
CFAR-style rationality training might sound less impressive than changing around people's neurology but it might be an approach with a lot fewer ugly side effects.
It's a start, and potentially fewer side effects is always good, but think of it this way: who's going to gravitate towards rationality training? I would bet people who are already more rational than not (because it's irrational not to want to be more rational). Since participants are self-selected, a massive part of the population isn't going to bother with that stuff. There are similar issues with genetic and neurological modifications (e.g. they'll be expensive, at least initially, and therefore restricted to a small pool of wealthy people), but given the advantages over things like CFAR I've already mentioned, it seems like it'd be worth it...
I have another issue with CFAR in particular that I'm reluctant to mention here for fear of causing a shit-storm, but since it's buried in this thread, hopefully it'll be okay. Admittedly, I only looked at their website rather than actually attending a workshop, but it seems kind of creepy and culty--rather reminiscent of Landmark, for reasons not the least of which is the fact that it's ludicrously, prohibitively expensive (yes, I know they have "fellowships," but surely not that many. And you have to use and pay for their lodgings? wtf?). It's suggestive of mind control in the brainwashing sense rather than rationality. (Frankly, I find that this forum can get that way too, complete with shaming thought-stopping techniques (e.g. "That's irrational!").) Do you (or anyone else) have any evidence to the contrary? (I know this is a little off-topic from my question -- I could potentially create a workshop that I don't find culty -- but since CFAR is currently what's out there, I figure it's relevant enough.)
Given how things are going technology involves in a way where I don't think we have to fear that we will have no energy when coal runs out. There plenty of coal around and green energy evolves fast enough for that task.
You could be right, but I think that's rather optimistic. This blog post speaks to the problems behind this argument pretty well, I think. Its basic gist is that the amount of energy it will take to build sufficient renewable energy systems demands sacrificing a portion of the economy as is, to a point that no politician (let alone the free market) is going to support.
This brings me to your next point about addressing politics instead of neurology. Have you ever tried to get anything changed politically...? I've been involved in a couple of movements, and my god is it discouraging. You may as well try to knock a brick wall down with a feather. It basically seems that humanity is just going to be the way it is until it is changed on a fundamental level. Yes, I know society has changed in many ways already, but there are many undesirable traits that seem pretty constant, particularly war and inequality.
As for solar as opposed to other technologies, I am a bit torn as to whether it might be better to work on developing technologies rather than whatever seems most practical now. Fusion, for instance, if it's actually possible, would be incredible. I guess I feel that working on whatever's practical now is better for me, personally, to expend energy on since everything else is so speculative. Sort of like triage.
Comment author:palladias
11 February 2014 08:22:14PM
4 points
[-]
I wrote a piece for work on quota systems and affirmative action in employment ("Fixing Our Model of Meritocracy"). It's politics-related, but I did get to cite a really fun natural experiment and talk about quotas for the use of countering the availability heuristic.
Comment author:fubarobfusco
12 February 2014 02:57:11AM
3 points
[-]
This is a tangent, but since you mention the "good founders started [programming] at 13" meme, it's a little bit relevant ...
I find it deeply bizarre that there's this idea today among some programmers that if you didn't start programming in your early teens, you will never be good at programming. Why is this so bizarre? Because until very recently, there was no such thing as a programmer who started at a young age; and yet there were people who became good at programming.
Prior to the 1980s, most people who ended up as programmers didn't have access to a computer until university, often not until graduate school. Even for university students, relatively unfettered access to a computer was an unusual exception, found only in extremely hacker-friendly cultures such as MIT.
Put another way: Donald Knuth probably didn't use a computer until he was around 20. John McCarthy was born in 1927 and probably couldn't have come near a computer until he was a professor, in his mid-20s. (And of course Alan Turing, Jack Good, or John von Neumann couldn't have grown up with computers!)
(But all of them were mathematicians, and several of them physicists. Knuth, for one, was also a puzzle aficionado and a musician from his early years — two intellectual pursuits often believed to correlate with programming ability.)
In any event, it should be evident from the historical record that people who didn't see a computer until adulthood could still become extremely proficient programmers and computer scientists.
I've heard some people defend the "you can't be good unless you started early" meme by comparison with language acquisition. Humans generally can't gain native-level fluency in a language unless they are exposed to it as young children. But language acquisition is a very specific developmental process that has evolved over thousands of generations, and occurs in a developmentally-critical period of very early childhood. Programming hasn't been around that long, and there's no reason to believe that a critical developmental period in early adolescence could have come into existence in the last few human generations.
So as far as I can tell, we should really treat the idea that you have to start early to become a good programmer as a defensive and prejudicial myth, a bit of tribal lore arising in a recent (and powerful) subculture — which has the effect of excluding and driving off people who would be perfectly capable of learning to code, but who are not members of that subculture.
Comment author:Viliam_Bur
12 February 2014 09:21:12AM
*
6 points
[-]
Seems to me that using computers since your childhood is not necessary, but there is something which is necessary, and which is likely to be expressed in childhood as an interest in computer programming. And, as you mentioned, in the absence of computers, this something is likely to be expressed as an interest in mathematics or physics.
So the correct model is not "early programming causes great programmers", but rather "X causes great programmers, and X causes early programming; therefore early programming correlates with great programmers".
Starting early with programming is not strictly necessary... but these days, when computers are almost everywhere and relatively cheap, not expressing any interest in programming during one's childhood is evidence that this person is probably not meant to be a good programmer. (The only question is how strong this evidence is.)
Comparing with language acquisition is wrong... unless the comparison is true for mathematics. (Is there a research on this?) Again, the model "you need programming acquisition as a child" would be wrong, but the model "you need math acquisition as a child, and without this you later will not grok programming" might be correct.
Comment author:Pfft
12 February 2014 11:00:32PM
0 points
[-]
the correct model is not "early programming causes great programmers", but rather "X causes great programmers, and X causes early programming; therefore early programming correlates with great programmers".
Yeah, I think this is explicitly the claim Paul Graham made, with X = "deep interest in technology".
The problem with that is I think, at least with technology companies, the people who are really good technology founders have a genuine deep interest in technology. In fact, I've heard startups say that they did not like to hire people who had only started programming when they became CS majors in college. If someone was going to be really good at programming they would have found it on their own. Then if you go look at the bios of successful founders this is invariably the case, they were all hacking on computers at age 13.
Humans generally can't gain native-level fluency in a language unless they are exposed to it as young children.
The only aspect of language with a critical period is accent. Adults commonly achieve fluency. In fact, adults learn a second language faster than children.
Comment author:Creutzer
12 February 2014 05:20:01PM
*
0 points
[-]
As far as I know, the degree to which second-language speakers can acquire native-like competence in domains other than phonetics is somewhat debated. Anecdotally, it's a rare person who manages to never make a syntactic error that a native speaker wouldn't make, and there are some aspects of language (I'm told that subjunctive in French and aspect in Slavic languages may be examples) that may be impossible to fully acquire for non-native speakers.
So I wouldn't accept this theoretical assertion without further evidence; and for all practical purposes, the claim that you have to learn a language as a child in order to become perfect (in the sense of native-like) with it is true.
Comment author:Emile
12 February 2014 10:13:54PM
1 point
[-]
Not my downvotes, but you're probably getting flak for just asserting stuff and then demanding evidence for the opposing side. A more mellow approach like "huh that's funny I've always heard the opposite" would be better received.
Comment author:Creutzer
12 February 2014 11:23:21PM
*
1 point
[-]
Indeed, I probably expressed myself quite badly, because I don't think what I meant to say is that outrageous: I heard the opposite, and anecdotally, it seems right - so I would have liked to see the (non-anecdotal) evidence against it. Perhaps I phrased it a bit harshly because what I was responding to was also just an unsubstantiated assertion (or, alternatively, a non-sequitur in that it dropped the "native-like" before fluency).
Comment author:Lumifer
12 February 2014 05:26:33PM
1 point
[-]
As far as I know, the degree to which second-language speakers can acquire native-like competence in domains other than phonetics is somewhat debated.
Links? As far as I know it's not debated.
there are some aspects of language (I'm told that subjunctive in French and aspect in Slavic languages may be examples) that may be impossible to fully acquire for non-native speakers.
That's, ahem, bullshit. Why in the world would some features of syntax be "impossible to fully acquire"?
for all practical purposes, the claim that you have to learn a language as a child in order to become perfect (in the sense of native-like) with it is true.
Comment author:Creutzer
12 February 2014 11:21:24PM
3 points
[-]
You may easily know more about this issue than me, because I haven't actually researched this.
That said, let's be more precise. If we're talking about mere fluency, there is, of course, no question.
But if we're talking about actually native-equivalent competence and performance, I have severe doubts that this is even regularly achieved. How many L2 speakers of English do you know who never, ever pick an unnatural choice from among the myriad of different ways in which the future can be expressed in English? This is something that is completely effortless for native speakers, but very hard for L2 speakers.
The people I know who are candidates for that level of proficiency in an L2 are at the upper end of the intelligence spectrum, and I also know a non-dumb person who has lived in a German-speaking country for decades and still uses wrong plural formations. Hell, there's people who are employed and teach at MIT and so are presumably non-dumb who say things like "how it sounds like".
The two things I mentioned are semantic/pragmatic, not syntactic. I know there is a study that shows L2 learners don't have much of a problem with the morphosyntax of Russian aspect, and that doesn't surprise me very much. I don't know and didn't find any work that tried to test native-like performance on the semantic and pragmatic level.
I'm not sure how to answer the "why" question. Why should there be a critical period for anything? ... Intuitively, I find that semantics/pragmatics, having to do with categorisation, is a better candidate for something critical-period-like than pure (morpho)syntax. I'm not even sure you need critical periods for everything, anyway. If A learns to play the piano starting at age 5 and B starts at age 35, I wouldn't be surprised if A is not only on average, but almost always, better at age 25 than B is at 55. Unfortunately, that's basically impossible to study while controlling for all confounders like general intelligence, quality of instruction, and number of hours spent on practice. (The piano example would be analogous more to the performance than the competence aspect of language, I suppose.)
There is a study about Russian dative subjects that suggests even highly advanced L2 speakers with lots of exposure don't get things quite right. Admittedly, you can still complain that they don't separate the people who have lived in a Russian-speaking country for only a couple of months from those who have lived there for a decade.
The thing about the subjunctive is, at best, wrong, but certainly not bullshit. The fact that it was told to me by a very intelligent French linguist about a friend of his whose L2-French is flawless except for occasional errors in that domain is better evidence for that being a very hard thing to acquire than your "bullshit" is against that.
Comment author:Lumifer
13 February 2014 01:14:51AM
*
-1 points
[-]
How many L2 speakers of English do you know who never, ever pick an unnatural choice from among the myriad of different ways in which the future can be expressed in English?
You are committing the nirvana fallacy. How many native speakers of English never make mistakes or never "pick an unnatural choice"?
For example, I know a woman who immigrated to the US as an adult and is fully bilingual. As an objective measure, I think she had a perfect score on the verbal section of the LSAT. She speaks better English than most "natives". She is not unusual.
The fact that it was told to me by a very intelligent French linguist about a friend of his whose L2-French is flawless except for occasional errors in that domain
Tell your French linguist to go into the countryside and listen to the French of uneducated native speakers. Do they make mistakes?
Comment author:Creutzer
13 February 2014 01:31:05AM
*
0 points
[-]
How many native speakers of English never make mistakes or never "pick an unnatural choice"?
I'm not talking about performance errors in general. I'm talking about the fact that it is extremely hard to acquire native-like competence wrt the semantics and pragmatics of the ways in which English allows one to express something about the future.
She speaks better English than most "natives".
Your utterance of this sentence severely damages your credibility with respect to any linguistic issue. The proper way to say this is: she speaks higher-status English than most native speakers. Besides, the fact that she gets perfect scores on some test (whose content and format is unknown to me), which presumably native speakers don't, suggests that she is far from an average individual anyway.
Also, that you're not bringing up a single relevant study that compares long-time L2 speakers with native speakers on some interesting, intricate and subtle issue where a competence difference might be suspected leaves me with a very low expectation of the fruitfulness of this discussion, so maybe we should just leave it at that. I'm not even sure to what extent we aren't simply talking past each other because we have different ideas about what native-like performance means.
Tell your French linguist to go into countryside and listen to the French of the uneducated native speakers. Do they make syntax errors?
They don't, by definition; not the way you probably mean it. I wouldn't know why the rate of performance errors should correlate in any way with education (controlling for intelligence). I also trust the man's judgment enough to assume that he was talking about a sort of error that stuck out because a native speaker wouldn't make it.
Comment author:Lumifer
13 February 2014 01:45:26AM
*
1 point
[-]
I'm talking about the fact that it is extremely hard to acquire native-like competence wrt the semantics and pragmatics of the ways in which English allows one to express something about the future.
I don't think so. This looks like an empirical question -- what do you mean by "extremely hard"? Any evidence?
Your utterance of this sentence severely damages your credibility with respect to any linguistic issue. The proper way to say this is: she speaks higher-status English than most native speakers.
No, I still don't think so -- for either of your claims. Leaving aside my credibility, non-black English in the United States (as opposed to the UK) has few ways to show status and they tend to be regional, anyway. She speaks better English (with some accent, to be sure) in the usual sense -- she has a rich vocabulary and doesn't make many mistakes.
she is far from an average individual anyway.
While that is true, your claims weren't about averages. Your claims were about impossibility -- for anyone. An average person isn't successful at anything, including second languages.
Comment author:Creutzer
13 February 2014 09:05:19PM
*
0 points
[-]
I don't think so. This looks like an empirical question -- what do you mean by "extremely hard"? Any evidence?
I don't know if anybody has ever studied this - I would be surprised if they had -, so I have only anecdotal evidence from the uncertainty I myself experience sometimes when choosing between "will", "going to", plain present, "will + progressive", and present progressive, and from the testimony of other highly advanced L2 speakers I've talked to who feel the same way - while native speakers are usually not even aware that there is an issue here.
She speaks better English (with some accent, to be sure) in the usual sense -- she has a rich vocabulary and doesn't make many mistakes.
How exactly is "rich vocabulary" not high-status? (Also, are you sure it actually contains more non-technical lexemes and not just higher-status lexemes?) I'm not exactly sure what you mean by "mistakes". Things that are ungrammatical in your idiolect of English?
While that is true, your claims weren't about averages. Your claims were about impossibility -- for anyone. An average person isn't successful at anything, including second languages.
I actually made two claims. The one was that it's not entirely clear that there aren't any such in-principle impossibilities, though I admit that the case for them isn't very strong. I will be very happy if you give me a reference surveying some research on this and saying that the empirical side is really settled and the linguists who still go on telling their students that it isn't are just not up-to-date.
The second is that in any case, only the most exceptional L2 learners can in practice expect to ever achieve native-like fluency.
Comment author:Viliam_Bur
13 February 2014 11:34:31AM
*
1 point
[-]
There is a study about Russian dative subjects that suggests even highly advanced L2 speakers with lots of exposure don't get things quite right.
Bonus points for giving a specific example, which helped me understand your point; at this moment I fully agree with you. Because I understand the example: my own language has something similar, and I wouldn't expect a stranger to use this correctly. The reason is that it would be too much work to learn properly, for too little benefit. It's a different way to say things, and you only achieve a small difference in meaning. And even if you asked a non-linguist native, they would probably find it difficult to explain the difference properly. So you have little chance to learn it right, and also little motivation to do so.
Here is my attempt to explain the examples from the link, pages 3 and 4. (I am not a Russian language speaker, but my native language is also Slavic, and I learned Russian. If I got something wrong, please correct me.)
That's pretty much the same meaning; it's just that the first variant is "more agenty" and the second variant is "less agenty", to use the LW lingo. But that's kinda difficult to explain explicitly, because... you know, how exactly can "hearing" (not active listening, just hearing) be "agenty"; and how exactly can "wanting" be "non-agenty"? It doesn't seem to make much sense until you think about it, right? (The "non-agenty wanting" is something like: my emotions made me want. So I admit that I wanted, but at the same time I deny full responsibility for my wanting.)
As a stranger, what is the chance that (1) you will hear it explained in a way that makes sense to you, (2) you will remember it correctly, and (3) when the opportunity comes, you will remember to use it? Pretty much zero, I guess. Unless you decide to put extra effort into this aspect of the language specifically. But considering the costs and benefits, you are extremely unlikely to do that, unless being a professional translator into Russian is extremely important to you. (Or unless you speak a Slavic language that has a similar concept, so the costs are lower for you; but even then you need the motivation to be very good at Russian.)
Now when you think about contexts, these kinds of words are likely to be used in stories, but don't appear in technical literature or official documents, etc. So if you are a Russian child, you hear them a lot. If you are a Russian-speaking foreigner working in Russia, there is a chance you will literally never hear them at the workplace.
The paper doesn't even find a statistically significant difference. The point estimate is that advanced L2 speakers do worse than natives, but the natives make almost as many mistakes.
Comment author:Creutzer
13 February 2014 08:49:10PM
*
0 points
[-]
They did find differences with the advanced L2 speakers, but I guess we care about the highly advanced ones. They point out a difference at the bottom of page 18, though admittedly, it doesn't seem to be that big of a deal, and I don't know enough about statistics to tell whether it's very meaningful.
Comment author:IlyaShpitser
13 February 2014 03:33:29PM
*
0 points
[-]
Ah I see, yes you are right. That is the correct plural in this case. Sorry about that! 'Mne poslyshalos chtoto' ("something made itself heard by me") would be the singular, vs the plural above ("the steps on the roof made themselves heard by me."). Or at least I think it would be -- I might be losing my ear for Russian.
Comment author:Pfft
13 February 2014 06:46:39PM
0 points
[-]
If A learns to play the piano starting at age 5 and B starts at age 35, I wouldn't be surprised if A is not only on average, but almost always, better at age 25 than B is at 55. Unfortunately, that's basically impossible to study while controlling for all confounders like general intelligence, quality of instruction, and number of hours spent on practice.
If all you are saying is that people who start learning a language at age 2 are almost always better at it than people who start learning the same language at age 20, I don't think anyone would disagree. The whole discussion is about controlling for confounders...
Comment author:Creutzer
13 February 2014 08:39:36PM
1 point
[-]
Yes and no - the whole discussion is actually two discussions, I think.
One is about in-principle possibility, the presence of something like a critical period, etc. There it is crucial for confounders.
The second discussion is about in-practice possibility, whether people starting later can reasonably expect to get to the same level of proficiency. Here the "confounders" are actually part of what this is about.
Comment author:bogus
12 February 2014 12:14:07PM
*
2 points
[-]
This is a tangent, but since you mention the "good founders started [programming] at 13" meme, it's a little bit relevant ...
There is a rule of thumb that achieving exceptional mastery in any specific field requires 10,000 hours of practice. This seems to hold across fields: classical musicians, chess players, athletes, scholars/academics, etc. It's a lot easier to meet that standard if you start in childhood. Note that people who make this claim in the computing field are talking about hackers, not professional programmers in a general sense. It's very possible to become a productive programmer at any age.
Comment author:JQuinton
11 February 2014 09:26:12PM
-1 points
[-]
The same tortured analysis plays out in the business world, where Paul Graham, the head of YCombinator, a startup incubator, explained that one reason his company funds fewer women-led companies is because fewer of them fit this profile of a successful founder:
If someone was going to be really good at programming they would have found it on their own. Then if you go look at the bios of successful founders this is invariably the case, they were all hacking on computers at age 13.
The trouble is, successful founders don’t run through a pure meritocracy, either. They’re supported, mentored, and funded when they’re chosen by venture capitalists like Graham. And, if everyone is working on the same model of “good founders started at 13″ then a lot of clever ideas, created by people of either gender, might get left on the table.
But even if the government were keeping better tabs on affirmative action, the bigger problem is that its jurisdiction doesn’t reach the parts of the economy where affirmative action is most desperately needed: the places where real money is made and real power is allocated. The best example of this is the industry that dominates so much of our economy today: the technology sector. Silicon Valley’s racial diversity is pretty terrible, the kind of gross imbalance that inspires special reports on CNN.
It’s a dismal state of affairs, but how could it really be otherwise? Silicon Valley isn’t just an industry; it’s a social and cultural ecosystem that grew out of a very specific social and cultural setting: mostly West Coast, upper-middle-class white guys who liked to tinker with motherboards and microchips. If you were around that culture, you became a part of it. If you weren’t, you didn’t. And because of the social segregation that pervades our society, very few black people were around to be a part of it.
Some would purport to remedy this by fixing the tech industry job pipeline: more STEM graduates, more minority internships and boot camps, etc. And that will get you changes here and there, at the margins, but it doesn’t get at the real problem. The big success stories of the Internet age—Instagram, YouTube, Twitter—all came about in similar ways: A couple of people had an idea, they got together with some of their friends, built something, called some other friends who knew some other friends who had access to friends with serious money, and then the thing took off and now we’re all using it and they’re all millionaires. The process is organic, somewhat accidental, and it moves really, really fast. And by the time those companies are big enough to worry about their “diversity,” the ground-floor opportunities have already been spoken for
Comment author:shminux
15 February 2014 06:37:02PM
*
2 points
[-]
Paraphrased from #lesswrong: "Is it wrong to shoot everyone who believes Tegmark level 4?" "No, because, according to them, it happens anyway". (It's tongue-in-cheek, for you humorless types.)
Comment author:Bayeslisk
12 February 2014 04:02:33AM
2 points
[-]
Has anyone else had one of those odd moments when you've accidentally confirmed reductionism (of a sort) by unknowingly responding to a situation almost identically to the last time or times you encountered it? For my part, I once gave the same condolences to an acquaintance who was living with someone we both knew to be very unpleasant, and also just attempted to add the word for "tomato" in Lojban to my list of words after seeing the Pomodoro technique mentioned.
Comment author:mwengler
12 February 2014 04:05:49PM
2 points
[-]
A freaky thing I once saw... when my daughter was about 3, there were certain things she responded to verbally. I can't remember exactly what the thing was in this example, but something like me asking her "who is your rabbit?" and her replying "Kisses" (which was the name of her rabbit).
I had videoed some of this exchange and was playing it on a TV with her in the room. I was appalled to hear her responding "Kisses" upon hearing me on the TV saying "who is your favorite rabbit?" Her response was extremely similar to her response on the video, with tremendous overlap in timing, tone, and inflection. Maybe 20 to 50 ms off in timing (it almost sounded like unison).
I really had the sense that she was a machine and it did not feel good.
Comment author:Sherincall
14 February 2014 04:09:29AM
6 points
[-]
After a brain surgery, my father developed anterograde amnesia. Think Memento by Chris Nolan. His reactions to different comments/situations were always identical. If I mentioned a certain word, it would always invoke the same joke. Seeing his wife wearing a certain dress always produced the same witty comment. He was also equally amused by his own wittiness every time.
For several months after the surgery he had to be kept under tight watch, as he was prone to just go do something that had been routine pre-op. So we found a joke he found extremely funny and which he hadn't heard before the surgery, and we would tell it every time we wanted him to forget where he was going. He would laugh for a good while, get completely disoriented, and go back to his sofa.
For a long while, we were unable to convince him that he had a problem, or even that he had had the surgery (he would explain the scar away through some fantasy). And even when we managed, it lasted only a minute or two. Since then, I've developed several signals I would use if I found myself in an isomorphic situation. I had already read HPMoR by that time, but had discarded Harry's lip-biting as mostly pointless in real life.
Comment author:BloodyShrimp
18 February 2014 10:51:56PM
*
0 points
[-]
Knightian uncertainty is uncertainty where probabilities can't even be applied. I'm not convinced it exists. Some people seem to think free will is rescued by it; that the human mind could be unpredictable even in theory, and this somehow means it's "you" "making choices". This seems like deep confusion to me, and so I'm probably not expressing their position correctly.
Reductionism could be consistent with that, though, if you explained the mind's workings in terms of the simplest Knightian atomic thingies you could.
Comment author:Bayeslisk
20 February 2014 11:00:41AM
0 points
[-]
Can you give me some examples of what some people think constitutes Knightian uncertainty?
Also: what do they mean by "you"? They seem to be postulating something supernatural.
Comment author:BloodyShrimp
27 February 2014 05:16:08AM
*
0 points
[-]
I decided I should actually read the paper myself, and... as of page 7, it sure looks like I was misrepresenting Aaronson's position, at least. (I had only skimmed a couple Less Wrong threads on his paper.)
Comment author:RowanE
12 February 2014 03:55:40PM
8 points
[-]
One person being horribly tortured for eternity is equivalent to that one person being copied infinite times and having each copy tortured for the rest of their life. Death is better than a lifetime of horrible torture, and 3^^^3, despite being bigger than a whole lot of numbers, is still smaller than infinity.
Comment author:RowanE
15 February 2014 12:37:44PM
1 point
[-]
Well, then the answer is still obviously death, and that fact has become more immediately intuitive. Probably even those who disagreed with my assessment of the original question would agree with my choice given the scenario "an immortal person is tortured forever, or an otherwise-immortal person dies".
Since people were pretty encouraging about the quest to do one's part to help humanity, I have a follow-up question. (Hope it's okay to post twice on the same open thread...)
Perhaps this is a false dichotomy. If so, just let me know. I'm basically wondering if it's more worthwhile to work on transitioning to alternative/renewable energy sources (i.e. we need to develop solar power or whatever else before all the oil and coal run out, and to avoid any potential disastrous climate change effects) or to work on changing human nature itself to better address the aforementioned energy problem in terms of better judgment and decision-making. Basically, it seems like humanity may destroy itself (if not via climate change, then something else) if it doesn't first address its deficiencies.
However, since energy/climate issues seem pretty pressing and changing human judgment is almost purely speculative (I know CFAR is working on that sort of thing, but I'm talking about more genetic or neurological changes), civilization may become too unstable before it can take advantage of any gains from cognitive enhancement and such. On the other hand, climate change/energy issues may not end up being that big of a deal, so it may be better to just focus on improving humanity to address other horrible issues as well, like inequality, psychopathic behavior, etc.
Of course, society as a whole should (and does) work on both of these things. But one individual can really only pick one to make a sizable impact -- or at the very least, one at a time. Which do you guys think may be more effective to work on?
[NOTE: I'm perfectly willing to admit that I may be completely wrong about climate change and energy issues, and that collective human judgment is in fact as good as it needs to be, and so I'm worrying about nothing and can rest easy donating to malaria charities or whatever.]
Comment author:ChristianKl
14 February 2014 12:01:53AM
2 points
[-]
Of course, society as a whole should (and does) work on both of these things. But one individual can really only pick one to make a sizable impact -- or at the very least, one at a time. Which do you guys think may be more effective to work on?
The core question is:
"What kind of impact do you expect to make if you work on either issue?"
Do you think there is work to be done in the space of solar power development that people other than yourself aren't effectively doing? Do you think there is work to be done in terms of better judgment and decision-making that other people aren't already doing?
we need to develop solar power or whatever else before all the oil and coal run out,
The problem with coal isn't that it's going to run out but that it kills hundreds of thousands of people via pollution and that it creates climate change.
I know CFAR is working on that sort of thing, but I'm talking about more genetic or neurological changes)
Why? To me it seems much more effective to focus on more cognitive issues when you want to improve human judgment. Developing training to help people calibrate themselves against uncertainty seems to have a much higher return than trying to do fMRI studies or brain implants.
The core question is: "What kind of impact do you expect to make if you work on either issue?"
Do you think there is work to be done in the space of solar power development that people other than yourself aren't effectively doing? Do you think there is work to be done in terms of better judgment and decision-making that other people aren't already doing?
I'm familiar with questions like these (specifically, from 80000 hours), and I think it's fair to say that I probably wouldn't make a substantive contribution to any field, those included. Given that likelihood, I'm really just trying to determine what I feel is most important so I can feel like I'm working on something important, even if I only end up taking a job over someone else who could have done it equally well.
That said, I would hope to locate a "gap" where something was not being done that should be, and then try to fill that gap, such as volunteering my time for something. But there's no basis for me to surmise at this point which issue I would be able to contribute more to (for instance, I'm not a solar engineer).
To me it seems much more effective to focus on more cognitive issues when you want to improve human judgment. Developing training to help people calibrate themselves against uncertainty seems to have a much higher return than trying to do fMRI studies or brain implants.
At the moment, yes, but it seems like it has limited potential. I think of it a bit like bootstrapping: a judgment-impaired person (or an entire society) will likely make errors in determining how to improve their judgment, and the improvement seems slight and temporary compared to more fundamental, permanent changes in neurochemistry. I also think of it a bit like people's attempts to lose weight and stay fit. Yes, there are a lot of cognitive and behavioral changes people can make to facilitate that, but for many (most?) people, it remains a constant struggle -- one that many people are losing. But if we could hack things like that, "temptation" or "slipping" wouldn't be an issue.
The problem with coal isn't that it's going to run out but that it kills hundreds of thousands of people via pollution and that it creates climate change.
From what I've gathered from my reading, the jury is still out on how disastrous climate change is going to be. Estimates seem to range from catastrophic to even slightly beneficial. You seem to think it will definitely be catastrophic. What have you come across that makes you so certain about this?
Comment author:DanielLC
16 February 2014 07:34:22AM
0 points
[-]
The economy is quite capable of dealing with finite resources. If you have land with oil on it, you will only drill if the price of oil is increasing more slowly than interest. If this is the case, then drilling for oil and using the value generated by it for some kind of investment is more helpful than just saving the oil.
Climate change is still an issue of course. The economy will only work that out if we tax energy in proportion to its externalities.
We should still keep in mind that climate change is a problem that will happen in the future, and we need to look at the much lower present value of the cost. If we have to spend 10% of our economy on making it twice as good a hundred years from now, it's most likely not worth it.
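To make the present-value point concrete, here is a back-of-the-envelope sketch in Python (the discount rates are illustrative assumptions, not from the comment): a benefit worth the whole of today's economy, delivered 100 years from now, is worth far less than 10% of today's economy at ordinary discount rates.

```python
def present_value(benefit, annual_rate, years):
    """Discount a future benefit back to today at a constant annual rate."""
    return benefit / (1.0 + annual_rate) ** years

# A benefit of 1.0 (normalizing today's economy to 1) arriving in 100 years:
for rate in (0.01, 0.03, 0.05):
    print(f"rate {rate:.0%}: present value {present_value(1.0, rate, 100):.4f}")
```

At a 3% discount rate the present value is about 0.052, so spending 0.10 (10% of the economy) today to gain 1.0 in a century is a bad trade; only at discount rates near 1% does it start to look worthwhile.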
Comment author:Nornagest
18 February 2014 12:29:11AM
3 points
[-]
Criticism's well and good, but 140 characters or less of out-of-context quotation doesn't lend itself to intelligent criticism. From the looks of that feed, about half of it is inferential distance problems and the other half is sacred cows, and neither one's very interesting.
If we can get anything from it, it's a reminder that killing sacred cows has social consequences. But I'm frankly tired of beating that particular drum.
Comment author:[deleted]
17 February 2014 01:14:50PM
1 point
[-]
Self-driving cars had better use (some approximation of) some form of acausal decision theory, even more so than a singleton AI, because the former will interact in PD-like and Chicken-like ways with other instantiations of the same algorithm.
Comment author:Error
17 February 2014 02:21:34PM
3 points
[-]
Or different algorithms. How long after wide release will it be before someone modifies their car's code to drive aggressively, on the assumption that cars running the standard algorithm will move out of the way to avoid an accident?
(I call this "driving like a New Yorker." New Yorkers will know what I mean.)
That's like driving without a license. Obviously the driver (software) has to be licensed to drive the car, just as persons are. Software that operates deadly machinery has to be developed in specific ways, certified, and so on and so forth, for how many decades already? (Quite a few)
Self-driving cars have very complex goal metrics, along the lines of getting to the destination while disrupting traffic the least (still grossly oversimplifying).
The manufacturer is interested in every one of his cars getting to the destination in the least time, so the cars are programmed to optimize for the sake of all cars. They're also interested in getting human drivers to buy their cars, which also makes not driving like a jerk a goal. PD is problematic when agents are selfish, not when agents entirely share the goal. Think of 2 people in PD played for money, who both want to donate all proceeds to same charity. This changes the payoffs to the point where it's not PD any more.
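A toy illustration of that last point, with the standard hypothetical PD monetary payoffs: if both players donate all proceeds to the same charity, each player's utility becomes the combined payout, and cooperation becomes the dominant strategy.

```python
# Standard PD monetary payoffs as (row, column); C = cooperate, D = defect.
pd = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# If all money goes to one shared charity, each player's utility is the total.
shared = {moves: (a + b, a + b) for moves, (a, b) in pd.items()}

# Cooperating now dominates defecting for the row player (and by symmetry
# for the column player), so the transformed game is no longer a PD.
assert shared[("C", "C")][0] > shared[("D", "C")][0]  # 6 > 5
assert shared[("C", "D")][0] > shared[("D", "D")][0]  # 5 > 2
print(shared)
```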
I dunno, having a self-driving jerk car takes away whatever machismo one could have about driving... there's something about a car where you can go macho and drive manual to be a jerk.
I don't think it'd help sales at all if self driving cars were causing accidents while themselves evading the collision entirely.
Comment author:TraderJoe
13 February 2014 03:25:47AM
1 point
[-]
I have been reviewing FUE hair transplants, and I would like LWers' opinion. I'm actually surprised this isn't covered, as it seems relevant to many users.
As far as I can tell, the downsides are:
- Mild scarring on the back of the head
- Doesn’t prevent continued hair loss, so if you get e.g. a bald spot filled in, then you will in a few years have a spot of hair in an oasis
- Cost
- Mild pain/hassle in the initial weeks.
- Possibility of finding a dodgy surgeon
The scarring is basically covered if you have a couple of days' hair growth there, and I am fine with that as a long-term solution. The continued hair loss is potentially dealt with by a repeated transplant, and more certainly dealt with by getting the initial transplant "all over", i.e. thickening hair, rather than just moving the hairline forward. But it is the area I am most uncertain about. I should add that I am 29 with male pattern baldness on both sides of my family, Norwood level 4, and have seen my hair loss stabilise (I have been taking propecia for the last year).
Ignoring the cost, my questions are:
- Is anyone aware of any other problems besides these?
- Do you think this solution works?
- Any ideas on how to pick the right surgeon (using someone in Singapore most probably)?
Comment author:TraderJoe
13 February 2014 09:46:20AM
0 points
[-]
This is quite far down the page, even though I posted it a few hours ago. Is that an intended effect of the upvoting/downvoting system? (it may well be - I don't understand how the algorithm assigns comment rankings)
Just below and to the right of the post there's a choice of which algorithm to use for sorting comments. I don't remember what the default is, but I do know that at least some of them sort by votes (possibly with other factors). I normally use the "Old" sorting (i.e. oldest first), and then your comment is near the bottom of the page since so many were posted before it.
The algorithm is a complicated mix of recency and score, but on an open thread that only lasts a week, recency is fairly uniform, so it's pretty much just score.
Comment author:EGarrett
13 February 2014 03:08:25AM
1 point
[-]
I'm looking into Bayesian Reasoning and trying to get a basic handle on it and how it differs from traditional thinking. When I read about how it (apparently) takes into account various explanations for observed things once they are observed, I was immediately reminded of Richard Feynman's opinion of Flying Saucers. Is Feynman giving an example of proper Bayesian thinking here?
Comment author:mcoram
14 February 2014 04:17:10AM
1 point
[-]
It's certainly in the right spirit. He's reasoning backwards in the same way Bayesian reasoning does: here's what I see; here's what I know about possible mechanisms for how that could be observed and their prior probabilities; so here what I think is most likely to be really going on.
Comment author:JMiller
12 February 2014 08:49:45PM
1 point
[-]
I am not sure if this deserves its own post. I figured I would post here and then add it to Discussion if there is sufficient interest.
I recently started reading Learn You a Haskell for Great Good. This is the first time I have attempted to learn a functional language, and I am only a beginner in imperative languages (Java). I am looking for some exercises that could go along with the e-book. Ideally, the exercises would encourage learning new material in a similar order to how the book presents it. I am happy to substitute or complement it with a different resource as well, if it contains problems that allow one to practice structurally. If you know of any such exercises, I would appreciate a link to them. I am aware that Project Euler is often advised; does it effectively teach programming skills, or just problem solving? (Then again, I am not entirely sure there is a difference at this point in my education.)
Comment author:Pfft
11 February 2014 08:00:46PM
1 point
[-]
Modafinil is prescription-only in the US, so to get it you have to do illegal things. However, I note that (presumably due to some legislative oversight?) the related drug Adrafinil is unregulated, you can buy it right off Amazon. Does anyone know how Adrafinil and Modafinil compare in terms of effectiveness and safety?
No, you don't have to do illegal things. Another option is to convince your doctor to give you a prescription. I think people on LW greatly overestimate the difficulty of this.
I don't even mean to suggest lying. I mean something simple like "I think this drug might help me concentrate."
A formal diagnosis of ADD or narcolepsy is carte blanche for an amphetamine prescription. Because it is highly scheduled and, moreover, has a big black market, doctors guard this diagnosis carefully. Whereas modafinil is lightly scheduled and doesn't have a black market (not driven by prescriptions), so they are less nervous about giving it out in ADD-ish situations.
But doctors very much do not like it when a new patient comes in asking for a specific drug.
Comment author:RomeoStevens
12 February 2014 08:10:33AM
*
1 point
[-]
Adrafinil has additional downstream metabolites besides just modafinil, but I don't know exactly what they are. Some claim it is harder on the liver implying some of the metabolites are mildly toxic, but that's not really saying much. Lots of stuff we eat is mildly toxic. Adrafinil is generally well tolerated and if your goal is finding out the effects of modafinil on your system and you can't get modafinil itself I would say go for it. If you then decided to take moda long term I would say do more research.
IANAD. Research thoroughly and consult with a doctor if you have any medical conditions or are taking any medications.
Comment author:chaosmage
19 February 2014 10:36:45AM
*
0 points
[-]
Andy Weir's "The Martian" is absolutely fucking brilliant rationalist fiction, and it was published in paper book format a few days ago.
I pre-ordered it because I love his short story The Egg, not knowing I'd get a super-rationalist protagonist in a radical piece of science porn that downright worships space travel. Also, fart jokes. I love it, and if you're an LW type of guy, you probably will too.
Comments (325)
Yvain has started a nootropics survey: https://docs.google.com/forms/d/1aNmqagWZ0kkEMYOgByBd2t0b16dR029BoHmR_OClB7Q/viewform
Background: http://www.reddit.com/r/Nootropics/comments/1xglcg/a_survey_for_better_anecdata/ http://www.reddit.com/r/Nootropics/comments/1xt0zn/rnootropics_survey/
I hope a lot of people take it; I'd like to run some analyses on the results.
Why is nicotine not on that list?
I have no idea. The selection isn't the best selection ever (I haven't even heard of some of them), but it can be improved for next time based on this time.
Initial results: http://slatestarcodex.com/2014/02/16/nootropics-survey-results-and-analysis/
A long one-lane, no-passing highway has N cars. Each driver prefers to drive at a different speed. They will each drive at that preferred speed if they can, and will tailgate if they can't. The highway ends up with clumps of tailgaters led by slow drivers. What is the expected number of clumps?
My Answer
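Without spoiling the closed form, a quick Monte Carlo sketch in Python (assuming i.i.d. continuous preferred speeds; the specific distribution doesn't matter) lets you check a guess for N = 20:

```python
import random

def num_clumps(n):
    # Preferred speeds listed from the front of the line to the back.
    # A new clump forms whenever a car is slower than every car ahead of it,
    # so the number of clumps equals the number of running minima.
    speeds = [random.random() for _ in range(n)]
    clumps = 0
    slowest_so_far = float("inf")
    for s in speeds:
        if s < slowest_so_far:
            clumps += 1
            slowest_so_far = s
    return clumps

random.seed(0)
n, trials = 20, 100_000
estimate = sum(num_clumps(n) for _ in range(trials)) / trials
print(estimate)
```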
Imagine that you have a collection of very weird dice. For every prime between 1 and 1000, you have a fair die with that many sides. Your goal is to generate a uniform random integer from 1 to 1001 inclusive.
For example, using only the 2-sided die, you can roll it 10 times to get a number from 1 to 1024. If this result is less than or equal to 1001, take that as your result. Otherwise, start over.
This algorithm uses on average 10240/1001=10.228770... rolls. What is the fewest expected number of die rolls needed to complete this task?
When you know the right answer, you will probably be able to prove it.
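As a baseline, the rejection-sampling method described above is a few lines of Python (this is the example algorithm from the puzzle statement, not the optimal solution):

```python
import random

def uniform_1_to_1001():
    """Ten 2-sided rolls give a number in 1..1024; retry if it exceeds 1001."""
    rolls = 0
    while True:
        x = 0
        for _ in range(10):
            x = 2 * x + random.randint(0, 1)  # one roll of the 2-sided die
            rolls += 1
        if x < 1001:  # x is uniform on 0..1023; accept 0..1000
            return x + 1, rolls

random.seed(0)
results = [uniform_1_to_1001() for _ in range(200_000)]
avg_rolls = sum(r for _, r in results) / len(results)
print(avg_rolls)  # expected value is 10240/1001 ≈ 10.2288
```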
Solution
If you care about more than the first roll, so you want to make lots and lots of uniform random numbers in 1, 1001, then the best die is (rot13'd) gur ynetrfg cevzr va enatr orpnhfr vg tvirf lbh gur zbfg ragebcl cre ebyy. Lbh arire qvfpneq erfhygf, fvapr gung jbhyq or guebjvat njnl ragebcl, naq vafgrnq hfr jung vf rffragvnyyl nevguzrgvp pbqvat.
Onfvpnyyl, pbafvqre lbhe ebyyf gb or qvtvgf nsgre gur qrpvzny cbvag va onfr C. Abgvpr gung, tvira gung lbh pbhyq ebyy nyy 0f be nyy (C-1)f sebz urer, gur ahzore vf pbafgenvarq gb n cnegvphyne enatr. Abj ybbx ng onfr 1001: qbrf lbhe enatr snyy ragveryl jvguva n qvtvg va gung onfr? Gura lbh unir n enaqbz bhgchg. Zbir gb gur arkg qvtvg cbfvgvba naq ercrng.
Na vagrerfgvat fvqr rssrpg bs guvf genafsbezngvba vf gung vs lbh tb sebz onfr N gb onfr O gura genafsbez onpx, lbh trg gur fnzr frdhrapr rkprcg gurer'f n fznyy rkcrpgrq qrynl ba gur erfhygf.
I give working code in "Transmuting Dice, Conserving Entropy".
Ebyy n friragrra fvqrq qvr naq n svsgl guerr fvqrq qvr (fvqrf ner ynoryrq mreb gb A zvahf bar). Zhygvcyl gur svsgl-guerr fvqrq qvr erfhyg ol friragrra naq nqq gur inyhrf.
Gur erfhyg jvyy or va mreb gb bar gubhfnaq gjb. Va gur rirag bs rvgure bs gurfr rkgerzr erfhygf, ergel.
Rkcrpgrq ahzore bs qvpr ebyyf vf gjb gvzrf bar gubhfnaq guerr qvivqrq ol bar gubhfnaq bar, be gjb cbvag mreb mreb sbhe qvpr ebyyf.
You can do better :)
Yeah, I realized that a few minutes after I posted, but didn't get a chance to retract it... Gimme a couple minutes.
Vf vg gur fnzr vqrn ohg jvgu avar avargl frira gjvpr, naq hfvat zbq 1001? Gung frrzf njshyyl fznyy, ohg V qba'g frr n tbbq cebbs. Vqrnyyl, gur cebqhpg bs gjb cevzrf jbhyq or bar zber guna n zhygvcyr bs 1001, naq gung'f gur bayl jnl V pna frr gb unir n fubeg cebbs. Guvf qbrfa'g qb gung.
I am glad someone is thinking about it enough to fully appreciate the solution. You are suggesting taking advantage of 709*977=692693. You can do better.
You can do better than missing one part in 692693? You can't do it in one roll (not even a chance of one roll) since the dice aren't large enough to ever uniquely identify one result... is there SOME way to get it exactly? No... then it would be a multiple of 1001.
I am presently stumped. I'll think on it a bit more.
ETA: OK, instead of having ONE left over, you leave TWO over. Assuming the new pair is around the same size that nearly doubles your trouble rate, but in the event of trouble, it gives you one bit of information on the outcome. So, you can roll a single 503 sided die instead of retrying the outer procedure?
Depending on the pair of primes that produce the two-left-over, that might be better. 709 is pretty large, though.
The best you can do leaving 2 over is 709*953=675677, coincidentally using the same first die. You can do better.
Brought to mind by the recent post about dreaming on Slate Star Codex:
Has anyone read a convincing refutation of the deflationary hypothesis about dreams - that is, that there aren't any? In the sense of nothing like waking experience ever happening during sleep; just junk memories with backdated time-stamps?
My brain is attributing this position to Dennett in one of his older collections - maybe Brainstorms - but it probably predates him.
Stimuli can be incorporated into dreams - for example, if someone in a sleep lab sees you are in REM sleep and sprays water on you, you're more likely to report having had a dream it was raining when you wake up. Yes, this has been formally tested. This provides strong evidence that dreams are going on during sleep.
More directly, communication has been established between dreaming and waking states by lucid dreamers in sleep labs. Lucid dreamers can make eye movements during their dreams to send predetermined messages to laboratory technicians monitoring them with EEGs. Again, this has been formally tested.
Whoa, that's cool. Do you have a reference?
Here.
Thanks!
This question reminds me of http://lesswrong.com/lw/8wi/inverse_pzombies_the_other_direction_in_the_hard/
Indeed, there is an essay in Brainstorms articulating this position. IIRC Dennett does not explicitly commit to defending it, rather he develops it to make the point that we do not have a privileged, first-person knowledge about our experiences. There is conceivable third-person scientific evidence that might lead us to accept this theory (even if, going by Yvain's comment, this does not seem to actually be the case), and our first-person intuition does not trump it.
I've written a game (also on github) that tests your ability to assign probabilities to yes/no events accurately using a logarithmic scoring rule (called a Bayes score on LW, apparently).
For example, in the subgame "Coins from Urn Anise," you'll be told: "I have a mysterious urn labelled 'Anise' full of coins, each with possibly different probabilities. I'm picking a fresh coin from the urn. I'm about to flip the coin. Will I get heads? [Trial 1 of 10; Session 1]". You can then adjust a slider to select a number a in [0,1].

As you adjust a, you adjust the payoffs that you'll receive if the outcome of the coin flip is heads or tails. Specifically you'll receive 1+log2(a) points if the result is heads and 1+log2(1-a) points if the result is tails. This is a proper scoring rule in the sense that you maximize your expected return by choosing a equal to the posterior probability that, given what you know, this coin will come out heads. The payouts are harshly negative if you have false certainty. E.g. if you choose a=0.995, you'd only stand to gain 0.993 if heads happens but would lose 6.644 if tails happens.

At the moment, you don't know much about the coin, but as the game goes on you can refine your guess. After 10 flips the game chooses a new coin from the urn, so you won't know so much about the coin again, but try to take account of what you do know -- it's from the same urn Anise as the last coin (iid). If you try this, tell me what your average score is on play 100, say.
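As a concrete check on the payoff formula above, here is a minimal Python sketch (the function name `log_score` is mine, not from the game's code) that reproduces the numbers in the false-certainty example:

```python
import math

def log_score(a, heads):
    """Points for reporting probability-of-heads `a` once the flip is known.

    1 + log2(a) if heads, 1 + log2(1 - a) if tails: a proper scoring rule,
    so expected score is maximized by reporting your true probability.
    """
    return 1 + math.log2(a if heads else 1 - a)

# The false-certainty example from the game description:
print(round(log_score(0.995, True), 3))   # small gain if heads: 0.993
print(round(log_score(0.995, False), 3))  # harsh loss if tails: -6.644
```

Note that reporting a=0.5 scores exactly 0 whichever way the coin lands, so the score can be read as "bits better than chance."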
There's a couple other random processes to guess in the game and also a quiz. The questions are intended to force you to guess at least some of the time. If you have suggestions for other quiz questions, send them to me by PM in the format:
{q:"1+1=2. True?", a:1} // source: my calculator
where a:1 is for true and a:0 is for false.
Other discussion: probability calibration quizzes.
Papers: "Some Comparisons among Quadratic, Spherical, and Logarithmic Scoring Rules", Bickel.
This game has taught me something. I get more enjoyment than I should out of watching a random variable go up and down, and probably should avoid gambling. :)
Nice work, congrats! Looks fun and useful, better than the calibration apps I've seen so far (including one I made, that used confidence intervals - I had a proper scoring rule too!)
My score:
Thanks Emile,
Is there anything you'd like to see added?
For example, I was thinking of running it on nodejs and logging the scores of players, so you could see how you compare. (I don't have a way to host this, right now, though.)
Or another possibility is to add diagnostics. E.g. were you setting your guess too high systematically or was it fluctuating more than the data would really say it should (under some models for the prior/posterior, say).
Also, I'd be happy to have pointers to your calibration apps or others you've found useful.
An article on samurai mental tricks. Most of them will not be that surprising to LWers, but it is nice to see modern results have a long history of working.
Does anyone have advice for getting an entry level software-development job? I'm finding a lot seem to want several years of experience, or a degree, while I'm self taught.
Alternatively, be willing to move.
Ignore what they say on the job posting, apply anyway with a resume that links to your Github, websites you've built, etc. Many will still reject you for lack of experience, but in many cases it will turn out the job posting was a very optimistic description of the candidate they were hoping to find, and they'll interview you anyway in spite of not meeting the qualifications on the job listing.
This is just a guess, but I think it might be helpful to include some screenshots (in color) of the programs, websites, etc. That would make them "more real" to the person who reads this. At least, save them some inconvenience. Of course, I assume that the programs and websites have a nice user interface.
It's also an opportunity for an interesting experiment: randomly send 10 resumes without the screenshots, and 10 resumes with screenshots. Measure how many interview invitations you get from each group.
If you have a certificate from Udacity or another online university, mention that, too. Don't list it as formal education, but somewhere in the "other courses and certificates" category.
I think ideally, you want your code running on a website where they can interact with it, but maybe a screenshot would help entice them to go to the website. Or help if you can't get the code on a website for some reason.
You want to signal a hacker mindset. Instead of focusing on including screenshots, it might be more effective to write your resume in LaTeX.
It depends on your model of who will be reading your resume.
I realized that my implicit model is some half-IT-literate HR person or manager. Someone who doesn't know what LaTeX is, and who couldn't download and compile your project from Github. But they may look at a nice printed paper and say: "oh, shiny!" and choose you instead of some other candidate.
Practicing whiteboard-style interview coding problems is very helpful. The best places to work will all make you code in the interview [1] so you want to feel at-ease in that environment. If you want to do a practice interview I'd be up for doing that and giving you an honest evaluation of whether I'd hire you if I were hiring.
[1] Be very cautious about somewhere that doesn't make you code in the interview: you might end up working with a lot of people who can't really code.
If you have the skills to do software interviews well, the hardest part will be getting past resume screening. If you can, try to use personal connections to bypass that step and get interviews. Then your skills will speak for themselves.
Speed reading doesn't register many hits here, but in a recent thread on subvocalization there are claims of speeds well above 500 WPM.
My standard reading speed is about 200 WPM (based on my eReader statistics; it varies by content). I can push myself to maybe 240, but it is not enjoyable (I wouldn't read fiction at this speed), and 450-500 WPM with RSVP.
My aim this year is to get myself to a 500+ WPM base rate (i.e. usable also for leisure reading, and without RSVP). Is this even possible? Claims seem to be contradictory.
Does anybody have recommendations on systems that actually work? Most I've seen seem like overblown claims designed to pump money from desperate managers... I'm willing to put money into it if it can actually deliver.
Thank you very much.
I read around 600 wpm without ever taking speedreading lessons so with training it should be very possible.
A TEDx video about teaching mathematics; it is in Slovak, so you have to select English subtitles: "Mathematics as a source of joy". I had to share it, but I am afraid the video does not explain too much, and there is not much material in English to link to -- I only found two articles. So here is a bit more info:
The video is about an educational method of a Czech math teacher, Vít Hejný; it is told by his son. Prof. Hejný created an educational methodology based mostly on Piaget, but specifically applied to the domain of teaching mathematics (elementary- and high-school levels). He taught the method to some volunteers, who used it to teach children in the Czech Republic and Slovakia. These days the inventor of the method is dead; he started writing a book but didn't finish it, and most of the volunteers are no longer working in education. So I was afraid the art would be lost, which would be a pity. Luckily, his son finished the book, other people added their notes and experiences, and recently the method became very popular among teachers; in the Czech Republic the government officially supports this method (in 10% of schools). My experience with this method from my childhood (outside of the school system, in summer camps) is that it's absolutely great.
I am afraid that if I try to describe it, most of it will just sound like common sense. Examples from real life are used. Kids are encouraged to solve the problems for themselves. The teacher is just a coach or moderator; s/he helps kids discuss each other's solutions. Start with specific examples; only later move to abstract generalizations of them. Let the children discover the solution; they will remember it better. In some situations specific tools are used (e.g. basic addition and subtraction is taught by walking on a numeric axis on the floor; also see the pictures here). For motivation, the specific examples are described using stories or animals or something interesting (e.g. the derivative of a function is introduced using a caterpillar climbing on hills). There is a big emphasis on keeping a good mood in the classroom.
EDIT: Classroom videos (no subtitles, but some of them should be self-explanatory): 1st grade, 2nd grade, 3rd grade, 4th grade.
This was fun. I like how he emphasizes that every kid can figure out all of math by herself, and that thinking citizens are what you need for a democracy rather than a totalitarian state - because the Czech republic was a communist dictatorship only a generation ago, and many teachers were already teachers then.
A cultural detail which may help to explain this attitude:
In communist countries a career in science, or in teaching math or physics, was a very popular choice for smart people. It was maybe the only place where you could use your mind freely, without fear of contradicting something the Party said (which could ruin your career and personal life).
So there are many people here who have both "mathematics" and "democracy" as applause lights. But I'd say that after the end of the communist regime the quality of math education actually decreased, because the best teachers suddenly had many new career paths available. (I was in a math-oriented high school when the regime ended; most of the best teachers left the school within two years and started their own private companies or non-governmental organizations, usually somehow related to education.) Even the mathematical curriculum of prof. Hejný was invented during communism... but only under democracy does his son have the freedom to actually publish it.
That's very true. Small addition: Many smart people went into medicine, too.
I'm interested in learning pure math, starting from precalculus. Can anyone give advice on what textbooks I should use? Here's my current list (a lot of these textbooks were taken from MIRI's and LW's best-textbook lists):
I'm well versed in simple calculus, but I'm going back to precalc to fill any gaps I may have in my knowledge. I feel like I have some major gaps in knowledge jumping from the undergrad to the graduate level. Do any math PhDs have any advice?
Thanks!
I advise that you read the first 3 books on your list, and then reevaluate. If you do not know any more math than what is generally taught before calculus, then you have no idea how difficult math will be for you or how much you will enjoy it.
It is important to ask what you want to learn math for. The last four books on your list are categorically different from the first four (or at least three of the first four). They are not a random sample of pure math, they are specifically the subset of pure math you should learn to program AI. If that is your goal, the entire calculus sequence will not be that useful.
If your goal is to learn physics or economics, you should learn calculus, statistics, analysis.
If you want to have a true understanding of the math that is built into rationality, you want probability, statistics, logic.
If you want to learn what most math PhDs learn, then you need things like algebra, analysis, topology.
For what it's worth, I'm doing roughly the same thing, though starting with linear algebra. At first I started with multivariable calc, but when I found it too confusing, people advised me to skip to linear algebra first and then return to MVC, and so far I've found that that's absolutely the right way to go. I'm not sure why they're usually taught the other way around; LA definitely seems more like a prereq of MVC.
I tried to read Spivak's Calc once and didn't really like it much; I'm not sure why everyone loves it. Maybe it gets better as you go along, idk.
I've been doing LA via Gilbert Strang's lectures on the MIT Open CourseWare, and so far I'm finding them thoroughly fascinating and charming. I've also been reading his book and just started Hoffman & Kunze's Linear Algebra, which supposedly has a bit more theory (which I really can't go without).
Just some notes from a fellow traveler. ;-)
I think people generally agree that analysis, topology, and abstract algebra together provide a pretty solid foundation for graduate study. (Lots of interesting stuff that's accessible to undergraduates doesn't easily fall under any of these headings, e.g. combinatorics, but having a foundation in these headings will equip you to learn those things quickly.)
For analysis the standard recommendation is baby Rudin, which I find dry, but it has good exercises and it's a good filter: it'll be hard to do well in, say, math grad school if you can't get through Rudin.
For point-set topology the standard recommendation is Munkres, which I generally like. The problem I have with Munkres is that it doesn't really explain why the axioms of a topological space are what they are and not something else; if you want to know the answer to this question you should read Vickers. Go through Munkres after going through Rudin.
I don't have a ready recommendation for abstract algebra because I mostly didn't learn it from textbooks. I'm not all that satisfied with any particular abstract algebra textbooks I've found. An option which might be a little too hard but which is at least fairly comprehensive is Ash, which is also freely legally available online.
For the sake of exposure to a wide variety of topics and culture I also strongly, strongly recommend that you read the Princeton Companion. This is an amazing book; the only bad thing I have to say about it is that it didn't exist when I was a high school senior. I have other reading recommendations along these lines (less for being hardcore, more for pleasure and being exposed to interesting things) at my blog.
I feel that it's only good as a test or for review, and otherwise a bad recommendation, made worse by its popularity (which makes its flaws harder to take seriously), and the widespread "I'm smart enough to understand it, so it works for me" satisficing attitude. Pugh's "Real Mathematical Analysis" is a better alternative for actually learning the material.
Keep a file with notes about books. Start with Spivak's "Calculus" (do most of the exercises at least in outline) and Polya's "How to Solve It", to get a feeling of how to understand a topic using proofs, a skill necessary to properly study texts that don't have exceptionally well-designed problem sets. (Courant&Robbins's "What Is Mathematics?" can warm you up if Spivak feels too dry.)
Given a good text such as Munkres's "Topology", search for anything that could be considered a prerequisite or an easier alternative first. For example, starting from Spivak's "Calculus", Munkres's "Topology" could be preceded by Strang's "Linear Algebra and Its Applications", Hubbard&Hubbard's "Vector Calculus", Pugh's "Real Mathematical Analysis", Needham's "Visual Complex Analysis", Mendelson's "Introduction to Topology" and Axler's "Linear Algebra Done Right". But then there are other great books that would help to appreciate Munkres's "Topology", such as Flegg's "From Geometry to Topology", Stillwell's "Geometry of Surfaces", Reid&Szendrői's "Geometry and Topology", Vickers's "Topology via Logic" and Armstrong's "Basic Topology", whose reading would benefit from other prerequisites (in algebra, geometry and category theory) not strictly needed for "Topology". This is a downside of a narrow focus on a few harder books: it leaves the subject dry. (See also this comment.)
Maybe the most important thing to learn is how to prove things. Spivak's Calculus might be a good place to start learning proofs; I like that book a lot.
I'm doing precalculus now, and I've found ALEKS to be interesting and useful. For you in particular it might be useful because it tries to assess where you're up to and fill in the gaps.
I also like the Art of Problem Solving books. They're really thorough, and if you want to be very sure you have no gaps then they're definitely worth a look. Their Intermediate Algebra book, by the way, covers a lot of material normally reserved for Precalculus. The website has some assessments you can take to see what you're ready for or what's too low-level for you.
Are there any reasons for becoming utilitarian, other than to satisfy one's empathy?
By utilitiarian you mean:
Caring about all people equally
Hedonism, i.e. caring about pleasure/pain
Both of the above (=Bentham's classical utilitarianism)?
In any case, what answer do you expect? What would constitute a valid reason? What are the assumptions from which you want to derive this?
I mean this.
I do not expect any specific answer.
For me personally, probably nothing, since, apparently, I neither really care about people (I guess I overintellectualized my empathy) nor about pleasure and suffering. The question, however, was asked mostly to better understand other people.
I don't know any.
2.5 years ago I made an attempt to calculate an upper bound for the complexity of the currently known laws of physics. Since the issue of physical laws and complexity keeps coming up, and my old post is hard to find with google searches, I'm reposting it here verbatim.
Interesting recent paper: "Is ZF a hack? Comparing the complexity of some (formalist interpretations of) foundational systems for mathematics", Wiedijk; he formalizes a number of systems in Automath.
It shouldn't be that hard to find code that solves a non-linear PDE. A Google search reveals http://einsteintoolkit.org/ an open-source toolkit that does numerical General Relativity.
However, QFT is not a PDE, it is a completely different object. The keyword here is lattice QFT. Google reveals this gem: http://xxx.tau.ac.il/abs/1310.7087
Nonperturbative string theory is not completely understood, however all known formulations reduce it to some sort of QFT.
I got to design my first infographic for work and I'd really appreciate feedback (it's here: "Did We Mess Up on Mammograms?").
I'm also curious about recommendations for tools. I used Easl.ly, which is a WYSIWYG editor, but it was annoying in that I couldn't just tell it I wanted an m-by-n block of people icons, evenly spaced; I had to place them by hand instead.
BBC Radio : Should we be frightened of intelligent computers? http://www.bbc.co.uk/programmes/p01rqkp4 Includes Nick Bostrom from about halfway through.
I don't think it has already been posted here on LW, but SMBC has a wonderful little strip about UFAI: http://www.smbc-comics.com/?id=3261#comic
It's a repost from last week.
Though rereading it, does anyone know whether Zach knows about MIRI and/or LessWrong? I expect "unfriendly human-created Intelligence" to parse as "AI with bad manners" to people unfamiliar with MIRI's work, which is probably not what the scientist is worried about.
I expect "unfriendly human-created Intelligence" to parse as HAL and Skynet to regular people.
The use of "friendly" to mean "non-dangerous" in the context of AI is, I believe, rather idiosyncratic.
All this talk of P-zombies. Is there even a hint of a mechanism that anybody can think of to detect if something else is conscious, or to measure their degree of consciousness assuming it admits of degree?
I have spent my life figuring other humans are probably conscious purely on an Occam's razor kind of argument: I am conscious, and the most straightforward explanation for my similarities and grouping with all these other people is that they are, in relevant respects, just like me. But I have always thought that increasingly complex simulations of humans could be both "obviously" not conscious and yet be mistaken by others as conscious. Is every human on the planet who reaches "voice mail jail" or an interactive voice-response system aware that they have not reached a consciousness? Do even those of us who are aware forget sometimes when we are not being careful? Is this distinction going to become even harder to make as tech continues to get better?
I have been enjoying the television show "Almost Human." In this show there are androids, most of which have been designed NOT to be too much like humans, although what they are really like is boring rule-following humans. It is clear in this show that the value of an android "life" is a tiny fraction of the value of a "human" life; in the first episode a human cop kills his android partner in order to get another one. The partner he does get is much more like a human, but still considered the property of the police department for which he works, and nobody really has much of a problem with this. Ironically, this "almost human" android partner is African American.
Wei once described an interesting scenario in that vein. Imagine you have a bunch of human uploads, computer programs that can truthfully say "I'm conscious". Now you start optimizing them for space, compressing them into smaller and smaller programs that have the same outputs. Then at some point they might start saying "I'm conscious" for reasons other than being conscious. After all, you can have a very small program that outputs the string "I'm conscious" without being conscious.
So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them. It's not clear where the cutoff happens, or even if it's meaningful to talk about the cutoff happening at some point. And this is something that could happen in reality, if we ask a future AI to optimize the universe for more humans or something.
Also this scenario reopens the question of whether uploads are conscious in the first place! After all, the process of uploading a human mind to a computer can also be viewed as a compression step, which can fold constant computations into literal constants, etc. The usual justification says that "it preserves behavior at every step, therefore it preserves consciousness", but as the above argument shows, that justification is incomplete and could easily be wrong.
Suppose you mean lossless compression. The compressed program has ALL the same outputs to the same inputs as the original program.
Then if the uncompressed program running had consciousness and the compressed program running did not, you have either proved or defined consciousness as something which is not an output. If it is possible to do what you are suggesting then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.
From an evolutionary point of view, can a feature with no output, absolutely zero effect on the interaction of the creature with its environment ever evolve? There would be no mechanism for it to evolve, there is no basis on which to select for it. It seems to me that to believe in the possibility of p-zombies is to believe in the supernatural, a world of phenomena such as consciousness that for some reason is not allowed to be listed as a phenomenon of the natural world.
At the moment, I can't really distinguish how a belief that p-zombies are possible is any different from a belief in the supernatural.
Years ago I thought an interesting experiment to do in terms of artificial consciousness would be to build an increasingly complex verbal simulation of a human, to the point where you could have conversations involving reflection with the simulation. At that point you could ask it if it was conscious and see what it had to say. Would it say "not so far as I can tell?"
The p-zombie assumption is that it would say "yeah I'm conscious duhh what kind of question is that?" But the way a simulation actually gets built is you have the list of requirements and you keep accreting code until all the requirements are met. If your requirements included a vast array of features but NOT the feature that it answer this question one way or another, conceivably you could elicit an "honest" answer from your sim. If all such sims answers "yes," you might conclude that somehow in the collection of features you HAD required, consciousness emerged, and you could do other experiments where you removed features from the sim and kept statistics on how those sims answered that question. You might see the sim saying "no, don't think so." and conclude that whatever it is in us that makes us function as conscious we hadn't found that thing yet and put it in our list of requirements.
I haven't thought about this stuff for a while and my memory is a bit hazy in relation to it, so I could be getting things wrong here, but this comment doesn't seem right to me.
First, my p-zombie is not just a duplicate of me in terms of my input-output profile; rather, it's a perfect physical duplicate of me. So one can deny the possibility of zombies while still holding that a computer with the same input-output profile as me is not conscious. For example, one could hold that only carbon-based life can be conscious, thereby denying the possibility of zombies (denying that a physical duplicate of a conscious carbon-based lifeform could lack consciousness) while also denying that an identical input-output profile implies consciousness.
Second, if it could be shown that the same input-output profile could exist even with consciousness removed, this doesn't show that consciousness can't play a causal role in guiding behaviour. Rather, it shows that the same input-output profile can exist without consciousness. That doesn't mean that consciousness can't cause that input-output profile in one system while something else causes it in the other system.
Third, it seems that one can deny the possibility of zombies while accepting that consciousness has no causal impact on behaviour (contra the last sentence of the quoted fragment): one could hold that the behaviour causes the conscious experience (or that the thing which causes the behaviour also causes the conscious experience). One could then deny that something could be physically identical to me but lack consciousness (that is, deny the possibility of zombies) while still accepting that consciousness lacks causal influence on behaviour.
Am I confused here or do the three points above seem to hold?
I think formally you are right.
But I think that if consciousness is essential to how we get important aspects of our input-output map, then the chances of there being another mechanism that produces the same input-output map are about equal to the chances that you could program a car to drive from here to Los Angeles without using any feedback mechanisms, just by dialing in ahead of time all the stops and starts and turns it would need. Formally possible, but bearing absolutely no relationship to how anything that works has ever been built.
I am not a mathematician about these things, I am an engineer or a physicist in the sense of Feynman.
A few points:
1) Initial mind uploading will probably be lossy, because it needs to convert analog to digital.
2) I don't know if even lossless compression of the whole input-output map is going to preserve everything. Let's say you have ten seconds left to live. Your input-output map over these ten seconds probably doesn't contain many interesting statements about consciousness, but that doesn't mean you're allowed to compress away consciousness. And even on longer timescales, people don't seem to be very good at introspecting about consciousness, so all your beliefs about consciousness might be compressible into a small input-output map. Or at least we can't say that the input-output map is large, unless we figure out more about consciousness in the first place!
3) Even if consciousness plays a large causal role, I agree with crazy88's point that consciousness might not be the smallest possible program that can fill that role.
4) I'm not sure that consciousness is just about the input-output map. Doesn't it feel more like internal processing? I seem to have consciousness even when I'm not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I was mute.
It depends on whether you subscribe to materialism. If you do, then there is nothing to measure. Consciousness might even be a tricky illusion, as Dennett suggests.
If, on the other hand, you do believe that there is something beyond materialism, there are plenty of frameworks to choose from that provide ideas about what one could measure.
OMG then someone should get busy! Tell me what I can measure and if it makes any kind of sense I will start working on it!
I do have a quale for perceiving whether someone else is present in a meditation or is absent-minded. It could be that some mental reaction picks up microgestures or some other thing that I don't consciously perceive and summarizes that information into a quale of mental presence.
Investigating how such a quale works is what I would personally do if I wanted to investigate consciousness.
But you probably have no such quale, so you either need someone who has one or need to develop it yourself. In both cases that probably means seeking out a good meditation teacher.
It's a difficult subject to talk about in a medium like this, where people who are into a spiritual framework that has some model of what consciousness is have phenomenological primitives that the audience I'm addressing doesn't have. In my experience, most of the people who I consider capable in that regard are very unwilling to talk about details with people who lack the phenomenological primitives to make sense of them. Instead of answering a question directly, a Zen teacher might give you a koan and tell you to come back in a month when you have built the phenomenological primitives to understand it, except that he doesn't tell you about phenomenological primitives.
An interesting quote, I wonder what people here will make of it...
source
I can't tell if the author means "rationalists" in the technical sense (i.e. as opposed to empiricists) but if he doesn't then I think it's unfair of him to require that rationalists "eliminate intuition and navigate life by reasoning about it", since this is so clearly irrational (because intuition is so indispensably powerful).
I loved this quote. I think it's a characterization of UU-style humanism that is fair but that they would probably agree with.
I am going to organize a coaching course to learn JavaScript + Node.js.
My particular technology of choice is node.js because:
I wanted to learn modern web technologies for a while, but haven't gotten myself to actually do it. When I tried to start learning, I was overwhelmed by the number of things I still have to learn to get anything done. Here's the bare minimum:
I believe the optimum course of action is to hire a guru to do coaching for me and several other students and split the cost. The benefits compared to learning by yourself are:
The capabilities that I want to achieve are:
i. To be able to add functionality to my Tumblr blog (where I run a writing prompt), either by using a custom theme plus the Tumblr API, or by extracting posts via the API and using them to render my blog on a separate website. node.js is definitely not needed here; rather, this is the simplest case of doing something useful that I need to do with web technologies, and node.js is my web technology of choice.
ii. To hack on Undum, a client-side hypertext interactive fiction framework. My thoughts on why I think Undum and IF are cool are here.
iii. To create new experiments that utilize modern web technologies to interesting and novel effect. I know that this sounds really vague, but the point is that sometimes you never know what can be done until you learn the relevant skills. One example of the kind of thing that I think about is what this paper is talking about:
Friend's advice: Skype Premium + Dropbox + Piratepad + Slideshare + Doodle should be enough. What do you think?
Want to join? Questions? Suggestions for better videoconferencing software than Skype?
I would suggest using AngularJS instead, since it can run purely client-side; you don't need to deal with anything server-side.
There are also some nice online development environments like Codenvy that provide a pretty rich environment and, I believe, have some collaborative features too (perhaps instead of using Dropbox, Doodle and Slideshare).
If all those technologies seem intimidating, some strategies:
EDIT: This particular site does margin trading differently from how I thought margin trading normally works. So... disregard everything I just said?
Bitcoin economy and a possible violation of the efficient market hypothesis. With the growing maturity of the Bitcoin ecosystem, there has appeared a website which allows leveraged trading, meaning that people who think they know which way the price is going can borrow money to increase their profits. At the time of writing, the bid-ask spread for the rates offered is 0.27% - 0.17% per day, which is 166% - 86% per annum.

Depositors are not actually trading themselves, so the only failure modes I can see are if the exchange takes the money and runs, if there is a catastrophic failure of the trading engine, or if they get hacked. Gwern estimates that a Bitcoin exchange has a 1% chance of failure per month based upon past performance, but that was written some time ago, and the increased legal recognition of Bitcoin plus people learning from mistakes should decrease this probability. On the other hand, the biggest exchange, MtGox, froze withdrawals a few days ago, though they claim that this is a temporary technical fault. As additional information, Bitfinex's website states "The company is incorporated in Hong Kong as a Limited Liability Corporation.", which would seem to decrease the likelihood of the company stealing the money.

In conclusion, even assuming a pessimistic 1% chance of failure per month, I reach a conservative estimate of 65% APR expected returns (assuming that the interest is constant at the lower 0.17% figure). So why aren't people flocking to the website, starting a bidding war to drive the interest rate down to a tenth of its current value? Unless there is something wrong with my previous calculations, the best explanation I can think of is that it simply has not generated enough publicity. Perhaps also everyone in the Bitcoin community is assuming the price is going to increase by 10000%, or they are looking for the next big altcoin, or they are daytrading; either way, a boring but safe option doesn't seem so interesting.
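A quick back-of-the-envelope check of these figures (a sketch only; the 0.17% daily rate and the 1%-per-month failure estimate are the assumptions stated above):

```python
# Compound the lower 0.17% daily lending rate over a year.
daily_rate = 0.0017
gross = (1 + daily_rate) ** 365   # gross growth factor, ~1.86

# Discount by a pessimistic 1%-per-month chance the exchange
# fails and all funds are lost.
survival = 0.99 ** 12             # ~0.886
expected = survival * gross       # expected growth factor

print(f"gross APR: {gross - 1:.0%}")             # ~86%
print(f"risk-adjusted APR: {expected - 1:.0%}")  # ~65%
```

This matches the quoted 86% gross and roughly 65% risk-adjusted APR, which is the whole point: even after a heavy failure discount, the expected return is far above market rates.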
In conclusion, this seems to be an example where the efficient market hypothesis does not hold, due to insufficient propagation of information.
Disclaimers: I don't have shares in Bitfinex, and I hope this doesn't look like spam. This is a theoretical discussion of the EMH, not financial advice, and if you lose your money I am not responsible. I'm not sure whether this deserves its own post outside of discussion – please let me know.
The exchange can just fail in a large variety of ways and close (go bankrupt). If you're not "insured" you are exposed to the trading risk and insurance costs what, about 30%? and, of course, it doesn't help you with the exchange counterparty risk.
There is risk baked in from the fact that depositors are on the hook if trades cannot be unwound quickly enough, and because this is Bitcoin, where volatility is crazy, there is even more of this risk.
For example, assume you lend money for some trader to go long, and suddenly prices drop so quickly that the trader is pushed past a margin call, in fact all the way to liquidation. Uh oh... the trader's margin wallet is now depleted. Who makes up the balance? The lenders. They actually do mention this on their website, but they don't tell you what the margin call policy is, and that is a really important part of the risk. If they allow a trader to put up only $50 of a $100 position and call him in when his portion hits 25%, that would be normal for something like index equities but pretty insane for something like Bitcoin.
How does solipsism change one's pattern of behavior, compared to other things being alive? I noticed that when you take enlightened self-interest into account, it seems that many behaviors don't change regardless of whether the people around you are sentient or not.
For example, if you steal from your neighbor, you can observe that you run the risk of him catching you, and thus you having to deal with consequences that will be painful or unpleasant. Similarly, assuming you're a healthy person, you have a conscience that makes you feel bad about certain things, even when you get away with them.
Do you think your conscience would cease to bother you if you could know for a fact that there were no other living creatures feeling pain around you? In what other cases does a true solipsistic world make your behavior distinct from a non-solipsistic one?
I'm certainly comfortable with violent fantasy when the roles are acted out. This suggests to me that if I were convinced that certain person-seeming things were not alive, conscious, were not what they seemed that this might tip me in to some violent behaviors. I think at minimum I would experiment with it, try a slap here, a punch there. And where I went from there would depend on how it felt I suppose.
Also I would almost certainly steal more stuff if I was convinced that everything was landscape.
In fantasies you're in total control. The same applies to video games, for example. The risk of severe retaliation isn't real.
Well, the obvious difference would be that non-solipsists might care about what happens after they die, and act accordingly.
When I was younger and studying analytical philosophy, I noticed the same thing. Unless solipsism morphs into apathy, there are still 'representations' you can't control and that you can care about. Unless it alters your values, there should be no difference in behaviour either.
If I didn't care about other people, I wouldn't worry about donating to charities that actually help people. I'd donate a little to charities that make me look good, and if I'm feeling guilty and distracting myself doesn't seem to be cost-effective, I'd donate to charities that make me feel good. I would still keep quite a bit of my money for myself, or at least work less.
As it is, I've figured that other people matter, and some of them are a lot cheaper to make happy than me, so I decided that I'm going to donate pretty much everything I can to the best charity I can find.
I participated in an economics experiment a few days ago, and one of the tasks was as follows. Choose one of the following gambles, where each outcome has 50% probability:
Option 1: $4 definitely
Option 2: $6 or $3
Option 3: $8 or $2
Option 4: $10 or $1
Option 5: $12 or $0
I chose option 5, as it has the highest expected value. Asymptotically this is the best option, but for a single trial, is it still the best option?
Technically, it depends on your utility function. However, even without knowing your utility function, I can say that for such a low amount of money, your utility function is very close to linear, and option 5 is the best.
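For stakes this small the comparison is just arithmetic; a minimal sketch:

```python
# Each option is a pair of equally likely payoffs (the sure $4 is
# written as two identical outcomes).
options = {1: (4, 4), 2: (6, 3), 3: (8, 2), 4: (10, 1), 5: (12, 0)}

expected_value = {k: sum(v) / 2 for k, v in options.items()}
# {1: 4.0, 2: 4.5, 3: 5.0, 4: 5.5, 5: 6.0}

best = max(expected_value, key=expected_value.get)
print(best, expected_value[best])  # 5 6.0
```

With a near-linear utility function over a few dollars, option 5's $6 expected value dominates.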
More info: marginal utility
Here's one interesting way of viewing it that I once read:
Suppose that the option you chose, rather than being a single trial, were actually 1,000 trials. Then, risk averse or not, Option 5 is clearly the best approach. The only difficulty, then, is that we're considering a single trial in isolation. However, when you consider all such risks you might encounter in a long period of time (e.g. your life), then the situation becomes much closer to the 1,000 trial case, and so you should always take the highest expected value option (unless the amounts involved are absolutely huge, as others have pointed out).
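The 1,000-trial intuition is easy to simulate (a sketch, using the payoffs from the puzzle above):

```python
import random

# Equally likely payoffs for each option.
PAYOFFS = {1: (4, 4), 2: (6, 3), 3: (8, 2), 4: (10, 1), 5: (12, 0)}

def total_winnings(option, trials, rng):
    """Sum of payoffs over repeated independent 50/50 gambles."""
    return sum(rng.choice(PAYOFFS[option]) for _ in range(trials))

rng = random.Random(0)
# Over many repetitions, the high-variance option's total clusters
# tightly around 1000 * EV, so it reliably beats the safe option.
print(total_winnings(5, 1000, rng))  # close to 6000
print(total_winnings(1, 1000, rng))  # exactly 4000
```

The standard deviation of option 5's total grows only with the square root of the number of trials, which is why the risk washes out over a lifetime of similar gambles.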
As a poker player, the idea we always batted back and forth was that Expected Value doesn't change over shorter sample sizes, including a single trial. However you may have a risk of ruin or some external factor (like if you're poor and given the option of being handed $1,000,000 or flipping a coin to win $2,000,001).
Barring that, if you're only interested in maximizing your result, you should follow EV. Even in a single trial.
That depends on your utility function, specifically your risk tolerance. If you're risk-neutral, option 5 has the highest value, otherwise it depends.
Clearly option 5 has the highest mean outcome. If you value money linearly (that is, $12 is exactly 3 times as good as $4) and there's no special utility threshold along the way (or disutility at $0), it's the best option.
For larger values, your value for money may be nonlinear (meaning: the difference between $0 and $50k may be much much larger than the difference between $500k and $550k to your happiness), and then you'll need to convert the payouts to subjective value before doing the calculation. Likewise if you're in a special circumstance where there's a threshold value that has special value to you - if you need $3 for bus fare home, then option 1 or 2 become much more attractive.
That depends on the amount of background money and randomness you have.
Although I can't really see any case where I wouldn't pick option five. Even if that's all the money I will ever have, my lifespan, and by extension my happiness, will be approximately linear with time.
If you specify that I get that much money each day for the rest of my life, and that's all I get, then I'd go for something lower risk.
Something I recently noticed: steelmanning is popular on LessWrong. But the sequences contain a post called Against Devil's Advocacy, which argues strongly against devil's advocacy, and steelmanning often looks a lot like devil's advocacy. What, if anything, is the difference between the two?
Steelmanning is about fixing errors in an argument (or otherwise improving it), while retaining (some of) the argument's assumptions. As a result, the argument becomes better, even if you disagree with some of the assumptions. The conclusion of the argument may change as a result, what's fixed about the conclusion is only the question that it needs to clarify. Devil's advocacy is about finding arguments for a given conclusion, including fallacious but convincing ones.
So the difference is in the direction of reasoning and intent regarding epistemic hygiene. Steelmanning starts from (somewhat) fixed assumptions and looks for more robust arguments following from them that would address a given question (careful hypothetical reasoning), while devil's advocacy starts from a fixed conclusion (not just a fixed question that the conclusion would judge) and looks for convincing arguments leading to it (rationalization with allowed use of dark arts).
A bad aspect of a steelmanned argument is that it can be useless: if you don't accept the assumptions, there is often little point in investigating their implications. A bad aspect of a devil's advocate's argument is that it may be misleading, acting as filtered evidence for the chosen conclusion. In this sense, devil's advocates exercise the skill of coming up with misleading arguments, which might be bad for their ability to reason carefully in other situations.
As far as I can tell...nothing. Most likely, there are simply many LessWrongers (like me) that disagree with E.Y. on this point.
What leads you to believe that you disagree with Eliezer on this point? I suspect that you are just going by the title. I just read the essay and he endorses lots of practices that others call Devil's Advocacy. I'm really not sure what practice he is condemning. If you can identify a specific practice that you disagree with him about, could you describe it in your own words?
I am still seeking players for a multiplayer game of Victoria 2: Hearts of Darkness. We have converted from an earlier EU3 game, itself converted from CK2; the resulting history is very unlike our own. We are currently in 1844:
Several nations are available to play:
Next session is this Sunday; PM me for details.
Additionally, playing in an MP campaign offers all sorts of opportunities for sharpening your writing skills through stories set in the alternate history!
If you play in this game, you get to play with not one, but two LWers! I am Spain, beacon of learning, culture, and industry.
Other than the alternate start, are there any mods?
Yes, we have redistributed the RGOs for great balance, and stripped out the nation-specific decisions.
Sometimes I feel like looking into how I can help humanity (e.g. 80000 hours stuff), but other times I feel like humanity is just irredeemable and may as well wipe itself off the planet (via climate change, nuclear war, whatever).
For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this (and other deficiencies, of course). Yet discourse about such things is overwhelmingly negative, mired in what I think are irrational kneejerk reactions to defend "what it means to be human." So I'm just like, you know what? Fuck it. You can't even help yourselves help yourselves. Forget it.
Thoughts?
You know how when you see a kid about to fall off a cliff, you shrug and don't do anything because the standards of discourse aren't as high as they could be?
Me neither.
lol yeah, I know what you're talking about.
Okay okay, fine. ;-)
A task with a better expected outcome is still better (in expected outcome), even if it's hopeless, silly, not as funny as some of the failure modes, not your responsibility or in some way emotionally less comfortable.
https://en.wikipedia.org/wiki/Identifiable_victim_effect
Also, would you still want to save a drowning dog even if it might bite you out of fear and misunderstanding? (let's say it is a small dog and a bite would not be drastically injurious)
If you think helping humanity is (in long term) a futile effort, because humans are so stupid they will destroy themselves anyway... I'd say the organization you are looking for is CFAR.
So, how would you feel about making a lot of money and donating to CFAR? (Or other organization with a similar mission.)
How cool, I've never heard of CFAR before. It looks awesome. I don't think I'm capable of making a lot of money, but I'll certainly look into CFAR.
Edit: I just realized that CFAR's logo is at the top of the site. Just never looked into it. I am not a smart man.
I can't speak for you, but I would hugely prefer for humanity to not wipe itself out, and even if it seems relatively likely at times, I still think it's worth the effort to prevent it.
If you think existential risks are a higher priority than parasite removal, maybe you should focus your efforts on those instead.
I think it is amazingly myopic to look at the only species that has ever started a fire or crafted a wheel and conclude that
The idea that climate change is an existential risk seems wacky to me. It is not difficult to walk away from an ocean which is rising at even 1 m a year, and no one hypothesizes anything close to that rate. We are adapted to a broad range of climates and able to move north, south, east, and west as the winds might blow us.
As for running out of fossil fuels: thinking we are doing something wildly stupid with our use of fossil fuels seems to me about as sensible as thinking a centrally planned economy will work better. It is not intuitive that a centrally planned economy will be a piece of crap compared to what we have, but it turns out to be true. Thinking you, or even a bunch of people like you with no track record doing ANYTHING, can second-guess the markets in fossil fuels seems intuitively right, but if you ever get around to testing your intuitions, I don't think you'll find it holds up. And if you think even doubling the price of fossil fuels really changes the calculus by much, consider that Europe and Japan have lived that life for decades compared to the US, and yet the US is home to the wackiest and most ill-thought-out alternatives to fossil fuels in the world.
Can anybody explain to me why creating a wildly popular luxury car which effectively runs on burning coal is such a boon to the environment that it should be subsidized at $7,500 by the US federal government and an additional $2,500 by states such as California, which has been so close to bankruptcy recently? Well, that is what a Tesla is if you drive one in a country with coal on the grid, and most of Europe, China, and the US are in that category. The Tesla S Performance puts out the same amount of carbon as a car getting 25 mpg of gasoline.
It's not difficult to walk away from an ocean? Please explain New Orleans.
Tesla (and other stuff getting power from the grid) currently run mostly on coal but ideally they can be run off (unrealistically) solar or wind or (realistically) nuclear.
"It's not difficult to walk away from an ocean? Please explain New Orleans."
Are you under the impression that the climate-change rise in ocean level will look like a dike breaking? All references to sea levels rising report less than 1 cm a year, but let's say that rises 100-fold to 1 m/yr. New Orleans flooded a few meters in at most a few days, about 1 m/day.
A factor of 365 in rate could well be the subtle difference between finding yourself on the roof of a house and finding yourself living in a house a few miles inland.
The Tesla S takes about 38 kW-hr to go 100 miles, which works out to around 80 lb of CO2 generated. 14 mpg would be 7.1 gallons of gasoline to go 100 miles, which works out to around 140 lb of CO2 generated. I couldn't find any independent numbers for the S Performance, but Tesla's site claims the same range as the regular S with the same battery pack.
The rest of your point seems to hold, though; if the subsidy is predicated on reducing CO2 emissions then the equivalent of 25mpg still isn't anything to brag about.
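The arithmetic above can be reproduced with round-number emission factors (a sketch; ~2.1 lb CO2 per coal-generated kWh and ~19.6 lb CO2 per gallon of gasoline are assumed figures, not from the thread):

```python
COAL_LB_CO2_PER_KWH = 2.1   # assumed emission factor for coal electricity
GAS_LB_CO2_PER_GAL = 19.6   # assumed emission factor for gasoline

ev_lb = 38 * COAL_LB_CO2_PER_KWH         # ~80 lb CO2 per 100 miles
gal_14mpg = 100 / 14                     # ~7.1 gallons per 100 miles
gas_lb = gal_14mpg * GAS_LB_CO2_PER_GAL  # ~140 lb CO2 per 100 miles

# Gasoline mileage producing the same CO2 as the EV on coal power:
mpg_equiv = 100 / (ev_lb / GAS_LB_CO2_PER_GAL)  # ~25 mpg
```

So on an all-coal grid the car lands at roughly the 25 mpg equivalence claimed, not 14.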
This is likely an overestimation, since it assumes that you're exclusively burning coal. Electricity production in the US is about 68% fossil, the rest deriving from a mixture of nuclear and renewables; the fossil-fuel category also includes natural gas, which per your link generates about 55-60% the CO2 of coal per unit electricity. This varies quite a bit state to state, though, from almost exclusively fossil (West Virginia; Delaware; Utah) to almost exclusively nuclear (Vermont) or renewable (Washington; Idaho).
Based on the same figures and breaking it down by the national average of coal, natural gas, and nuclear and renewables, I'm getting a figure of 43 lb CO2 / 100 mi, or about 50 mpg equivalent. Since its subsidies came up, California burns almost no coal but gets a bit more than 60% of its energy from natural gas; its equivalent would be about 28 lb CO2.
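The grid-mix adjustment works the same way (a sketch; the generation shares and the assumption that natural gas emits ~57.5% of coal's CO2 per kWh are rough figures based on the comment above, not exact data):

```python
COAL_LB_CO2_PER_KWH = 2.1     # assumed coal emission factor
GAS_FRACTION_OF_COAL = 0.575  # natural gas CO2 relative to coal

# Rough national generation shares (assumed): coal, natural gas,
# and effectively zero-carbon nuclear/renewables.
shares = {"coal": 0.39, "natural_gas": 0.27, "zero_carbon": 0.34}

lb_per_kwh = (shares["coal"] * COAL_LB_CO2_PER_KWH
              + shares["natural_gas"] * COAL_LB_CO2_PER_KWH
                * GAS_FRACTION_OF_COAL)

ev_lb = 38 * lb_per_kwh           # ~43 lb CO2 per 100 miles on the mix
mpg_equiv = 100 / (ev_lb / 19.6)  # roughly 45-50 mpg equivalent
```

Swapping in a single state's generation shares gives the state-level figures quoted above.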
Taboo humanity.
I find it fascinating to observe.
If you're looking for ways to eliminate existential risk, then it matters whether humanity is about to kill itself no matter what you do, so that you're only putting the end off a few years instead of a few billion. If you're just looking for ways to help individuals, it's pretty irrelevant. I guess it means that what matters is what happens now, instead of the flow-through effects after a billion years, but it's still a big effect.
If you're suggesting that the life of the average human isn't worth living, then saving lives might not be a good idea, but there are still ways to help keep the population low.
Besides, if humanity was great at helping itself, then why would we need you? It is precisely the fact that we allow extreme inequality to exist that means that you can make a big difference.
I think you underrate the existential risks that come along with substantial genetic or neurological enhancements. I'm not saying we shouldn't go there but it's no easy subject matter. It requires a lot of thought to address it in a way that doesn't produce more problems than it solves.
For example the toolkit that you need for genetic engineering can also be used to create artificial pandemics which happen to be the existential risk most feared by people in the last LW surveys.
When it comes to running out of fossil fuels, we seem to be doing quite well. The cost of solar energy halves every 7 years. The sun doesn't shine the whole day, so there's still further work to be done, but it doesn't seem like an insurmountable challenge.
It's true, I absolutely do. It irritates me. I guess this is because the ethics seem obvious to me: of course we should prevent people from developing a "supervirus" or whatever, just as we try to prevent people from developing nuclear arms or chemical weapons. But steering towards a possibly better humanity (or other sentient species) just seems worth the risk to me when the alternative is remaining the violent apes we are. (I know we're hominids, not apes; it's just a figure of speech.)
That's certainly a reassuring statistic, but a less reassuring one is that solar power currently supplies less than one percent of global energy usage!! Changing that (and especially changing it quickly) will be an ENORMOUS undertaking, and there are many disheartening roadblocks in the way (utility companies, lack of government will, etc.). The fact that solar itself is getting less expensive is great, but unfortunately changing over from fossil fuels to solar (e.g. phasing out old power plants and building brand new ones) is still incredibly expensive.
Of course the ethics are obvious. The road to hell is paved with good intentions. 200 years ago burning all those fossil fuels to power steam engines sounded like a really great idea.
If you simply try to solve problems created by people adopting technology by throwing more technology at them, that's dangerous.
The wise way is to understand the problem you are facing and make specific interventions that you believe will help. CFAR-style rationality training might sound less impressive than changing around people's neurology, but it might be an approach with far fewer ugly side effects.
CFAR-style rationality training might seem less technological to you. That's actually a good thing, because it makes it easier to understand the effects.
It depends on what issue you want to address. Given how things are going, technology evolves in a way where I don't think we have to fear having no energy when the coal runs out. There is plenty of coal around, and green energy evolves fast enough for that task.
On the other hand, we don't want to burn that coal. I want to eat tuna that's not full of mercury, and there is already a recommendation from the European Food Safety Authority against eating tuna every day because there is so much mercury in it. I want fewer people getting killed by fossil fuel emissions. I also want less greenhouse gas in the atmosphere.
If you want to do policy that pays off in 50 years looking at how things are at the moment narrows your field of vision too much.
If solar continues its price trend and costs 1/8 as much in 21 years, you won't need government subsidies to get people to prefer solar over coal. With another 30 years of deployment, we might not burn any coal in 50 years.
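The halving arithmetic behind that 1/8 figure, as a one-line sketch:

```python
halving_period_years = 7
years = 21
# Three halvings: cost falls by a factor of 2**3 = 8.
cost_factor = 0.5 ** (years / halving_period_years)
print(cost_factor)  # 0.125
```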
If you think lack of government will or utility companies are the core problem, why focus on changing human neurology? Addressing politics directly is more straightforward.
When it comes to solar power, it might also be that nobody will use any solar panels in 50 years because Craig Venter's algae are just a better energy source. Betting too much on a single card is never good.
It's a start, and potentially fewer side effects is always good, but think of it this way: who's going to gravitate towards rationality training? I would bet people who are already more rational than not (because it's irrational not to want to be more rational). Since participants are self-selected, a massive part of the population isn't going to bother with that stuff. There are similar issues with genetic and neurological modifications (e.g. they'll be expensive, at least initially, and therefore restricted to a small pool of wealthy people), but given the advantages over things like CFAR I've already mentioned, it seems like it'd be worth it...
I have another issue with CFAR in particular that I'm reluctant to mention here for fear of causing a shit-storm, but since it's buried in this thread, hopefully it'll be okay. Admittedly, I only looked at their website rather than actually attending a workshop, but it seems kind of creepy and culty, rather reminiscent of Landmark, not least because it's ludicrously, prohibitively expensive (yes, I know they have "fellowships," but surely not that many. And you have to use and pay for their lodgings? wtf?). It's suggestive of mind control in the brainwashing sense rather than rationality. (Frankly, I find that this forum can get that way too, complete with shaming thought-stopping techniques, e.g. "That's irrational!") Do you (or anyone else) have any evidence to the contrary? (I know this is a little off-topic from my question; I could potentially create a workshop that I don't find culty, but since CFAR is currently what's out there, I figure it's relevant enough.)
You could be right, but I think that's rather optimistic. This blog post speaks to the problems behind this argument pretty well, I think. Its basic gist is that the amount of energy it will take to build sufficient renewable energy systems demands sacrificing a portion of the economy as it is, to a point that no politician (let alone the free market) is going to support.
This brings me to your next point about addressing politics instead of neurology. Have you ever tried to get anything changed politically...? I've been involved in a couple of movements, and my god is it discouraging. You may as well try to knock a brick wall down with a feather. It basically seems that humanity is just going to be the way it is until it is changed on a fundamental level. Yes, I know society has changed in many ways already, but there are many undesirable traits that seem pretty constant, particularly war and inequality.
As for solar as opposed to other technologies, I am a bit torn as to whether it might be better to work on developing technologies rather than whatever seems most practical now. Fusion, for instance, if it's actually possible, would be incredible. I guess I feel that working on whatever's practical now is better for me, personally, to expend energy on since everything else is so speculative. Sort of like triage.
I wrote a piece for work on quota systems and affirmative action in employment ("Fixing Our Model of Meritocracy"). It's politics-related, but I did get to cite a really fun natural experiment and talk about quotas for the use of countering the availability heuristic.
This is a tangent, but since you mention the "good founders started [programming] at 13" meme, it's a little bit relevant ...
I find it deeply bizarre that there's this idea today among some programmers that if you didn't start programming in your early teens, you will never be good at programming. Why is this so bizarre? Because until very recently, there was no such thing as a programmer who started at a young age; and yet there were people who became good at programming.
Prior to the 1980s, most people who ended up as programmers didn't have access to a computer until university, often not until graduate school. Even for university students, relatively unfettered access to a computer was an unusual exception, found only in extremely hacker-friendly cultures such as MIT.
Put another way: Donald Knuth probably didn't use a computer until he was around 20. John McCarthy was born in 1927 and probably couldn't have come near a computer until he was a professor, in his mid-20s. (And of course Alan Turing, Jack Good, or John von Neumann couldn't have grown up with computers!)
(But all of them were mathematicians, and several of them physicists. Knuth, for one, was also a puzzle aficionado and a musician from his early years — two intellectual pursuits often believed to correlate with programming ability.)
In any event, it should be evident from the historical record that people who didn't see a computer until adulthood could still become extremely proficient programmers and computer scientists.
I've heard some people defend the "you can't be good unless you started early" meme by comparison with language acquisition. Humans generally can't gain native-level fluency in a language unless they are exposed to it as young children. But language acquisition is a very specific developmental process that has evolved over thousands of generations, and occurs in a developmentally-critical period of very early childhood. Programming hasn't been around that long, and there's no reason to believe that a critical developmental period in early adolescence could have come into existence in the last few human generations.
So as far as I can tell, we should really treat the idea that you have to start early to become a good programmer as a defensive and prejudicial myth, a bit of tribal lore arising in a recent (and powerful) subculture — which has the effect of excluding and driving off people who would be perfectly capable of learning to code, but who are not members of that subculture.
Seems to me that using computers since your childhood is not necessary, but there is something which is necessary, and which is likely to be expressed in childhood as an interest in computer programming. And, as you mentioned, in the absence of computers, this something is likely to be expressed as an interest in mathematics or physics.
So the correct model is not "early programming causes great programmers", but rather "X causes great programmers, and X causes early programming; therefore early programming correlates with great programmers".
Starting early with programming is not strictly necessary... but these days, when computers are almost everywhere and relatively cheap, not expressing any interest in programming during one's childhood is evidence that this person is probably not meant to be a good programmer. (The only question is how strong this evidence is.)
Comparing with language acquisition is wrong... unless the comparison is also true for mathematics. (Is there any research on this?) Again, the model "you need programming acquisition as a child" would be wrong, but the model "you need math acquisition as a child, and without it you will not grok programming later" might be correct.
Yeah, I think this is explicitly the claim Paul Graham made, with X = "deep interest in technology".
The only aspect of language with a critical period is accent. Adults commonly achieve fluency. In fact, adults learn a second language faster than children.
As far as I know, the degree to which second-language speakers can acquire native-like competence in domains other than phonetics is somewhat debated. Anecdotally, it's a rare person who manages to never make a syntactic error that a native speaker wouldn't make, and there are some aspects of language (I'm told that subjunctive in French and aspect in Slavic languages may be examples) that may be impossible to fully acquire for non-native speakers.
So I wouldn't accept this theoretical assertion without further evidence; and for all practical purposes, the claim that you have to learn a language as a child in order to become perfect (in the sense of native-like) with it is true.
Not my downvotes, but you're probably getting flak for just asserting stuff and then demanding evidence for the opposing side. A more mellow approach like "huh that's funny I've always heard the opposite" would be better received.
Indeed, I probably expressed myself quite badly, because I don't think what I meant to say is that outrageous: I heard the opposite, and anecdotally, it seems right - so I would have liked to see the (non-anecdotal) evidence against it. Perhaps I phrased it a bit harshly because what I was responding to was also just an unsubstantiated assertion (or, alternatively, a non-sequitur in that it dropped the "native-like" before fluency).
Links? As far as I know it's not debated.
That's, ahem, bullshit. Why in the world would some features of syntax be "impossible to fully acquire"?
For all practical purposes it is NOT true.
You may easily know more about this issue than me, because I haven't actually researched this.
That said, let's be more precise. If we're talking about mere fluency, there is, of course, no question.
But if we're talking about actually native-equivalent competence and performance, I have severe doubts that this is even regularly achieved. How many L2 speakers of English do you know who never, ever pick an unnatural choice from among the myriad of different ways in which the future can be expressed in English? This is something that is completely effortless for native speakers, but very hard for L2 speakers.
The people I know who are candidates for that level of proficiency in an L2 are at the upper end of the intelligence spectrum, and I also know a non-dumb person who has lived in a German-speaking country for decades and still uses wrong plural formations. Hell, there's people who are employed and teach at MIT and so are presumably non-dumb who say things like "how it sounds like".
The two things I mentioned are semantic/pragmatic, not syntactic. I know there is a study that shows L2 learners don't have much of a problem with the morphosyntax of Russian aspect, and that doesn't surprise me very much. I don't know and didn't find any work that tried to test native-like performance on the semantic and pragmatic level.
I'm not sure how to answer the "why" question. Why should there be a critical period for anything? ... Intuitively, I find that semantics/pragmatics, having to do with categorisation, is a better candidate for something critical-period-like than pure (morpho)syntax. I'm not even sure you need critical periods for everything, anyway. If A learns to play the piano starting at age 5 and B starts at age 35, I wouldn't be surprised if A is not only on average, but almost always, better at age 25 than B is at 55. Unfortunately, that's basically impossible to study while controlling for all confounders like general intelligence, quality of instruction, and number of hours spent on practice. (The piano example would be analogous more to the performance than the competence aspect of language, I suppose.)
There is a study about Russian dative subjects that suggests even highly advanced L2 speakers with lots of exposure don't get things quite right. Admittedly, you can still complain that they don't separate the people who have lived in a Russian-speaking country for only a couple of months from those who have lived there for a decade.
The thing about the subjunctive is, at best, wrong, but certainly not bullshit. The fact that it was told to me by a very intelligent French linguist about a friend of his whose L2-French is flawless except for occasional errors in that domain is better evidence for that being a very hard thing to acquire than your "bullshit" is against that.
You are committing the nirvana fallacy. How many native speakers of English never make mistakes or never "pick an unnatural choice"?
For example, I know a woman who immigrated to the US as an adult and is fully bilingual. As an objective measure, I think she had the perfect score on the verbal section of the LSAT. She speaks better English than most "natives". She is not unusual.
Tell your French linguist to go into countryside and listen to the French of the uneducated native speakers. Do they make mistakes?
I'm not talking about performance errors in general. I'm talking about the fact that it is extremely hard to acquire native-like competence wrt the semantics and pragmatics of the ways in which English allows one to express something about the future.
Your utterance of this sentence severely damages your credibility with respect to any linguistic issue. The proper way to say this is: she speaks higher-status English than most native speakers. Besides, the fact that she gets perfect scores on some test (whose content and format is unknown to me), which presumably native speakers don't, suggests that she is far from an average individual anyway.
Also, that you're not bringing up a single relevant study that compares long-time L2 speakers with native speakers on some interesting, intricate and subtle issue where a competence difference might be suspected leaves me with a very low expectation of the fruitfulness of this discussion, so maybe we should just leave it at that. I'm not even sure to what extent we aren't simply talking past each other because we have different ideas about what native-like performance means.
They don't, by definition; not the way you probably mean it. I wouldn't know why the rate of performance errors should correlate in any way with education (controlling for intelligence). I also trust the man's judgment enough to assume that he was talking about a sort of error that stuck out because a native speaker wouldn't make it.
I don't think so. This looks like an empirical question -- what do you mean by "extremely hard"? Any evidence?
No, I still don't think so -- for either of your claims. Leaving aside my credibility, non-black English in the United States (as opposed to the UK) has few ways to show status and they tend to be regional, anyway. She speaks better English (with some accent, to be sure) in the usual sense -- she has a rich vocabulary and doesn't make many mistakes.
While that is true, your claims weren't about averages. Your claims were about impossibility -- for anyone. An average person isn't successful at anything, including second languages.
I don't know if anybody has ever studied this - I would be surprised if they had -, so I have only anecdotal evidence from the uncertainty I myself experience sometimes when choosing between "will", "going to", plain present, "will + progressive", and present progressive, and from the testimony of other highly advanced L2 speakers I've talked to who feel the same way - while native speakers are usually not even aware that there is an issue here.
How exactly is "rich vocabulary" not high-status? (Also, are you sure it actually contains more non-technical lexemes and not just higher-status lexemes?) I'm not exactly sure what you mean by "mistakes". Things that are ungrammatical in your idiolect of English?
I actually made two claims. The one was that it's not entirely clear that there aren't any such in-principle impossibilities, though I admit that the case for them isn't very strong. I will be very happy if you give me a reference surveying some research on this and saying that the empirical side is really settled and the linguists who still go on telling their students that it isn't are just not up-to-date.
The second is that in any case, only the most exceptional L2 learners can in practice expect to ever achieve native-like fluency.
Bonus points for giving a specific example, which helped me to understand your point, and at this moment I fully agree with you. Because I understand the example: my own language has something similar, and I wouldn't expect a stranger to use it correctly. The reason is that it would be too much work to learn properly, for too little benefit. It's a different way to say things, and you only achieve a small difference in meaning. And even if you asked a non-linguist native, they would probably find it difficult to explain the difference properly. So you have little chance to learn it right, and also little motivation to do so.
Here is my attempt to explain the examples from the link, pages 3 and 4. (I am not a Russian language speaker, but my native language is also Slavic, and I learned Russian. If I got something wrong, please correct me.)
"ya uslyshala ..." = "I heard ..."
"mne poslyshalis ..." = "to-me happened-to-be-heard ..."
"ya xotel ..." = "I wanted ..."
"mne xotelos ..." = "to-me happened-to-want ..."
That's pretty much the same meaning, it's just that the first variant is "more agenty", and the second variant is "less agenty", to use the LW lingo. But that's kinda difficult to explain explicitly, because... you know, how exactly can "hearing" (not active listening, just hearing) be "agenty"; and how exactly can "wanting" be "non-agenty"? It doesn't seem to make much sense, until you think about it, right? (The "non-agenty wanting" is something like: my emotions made me want. So I admit that I wanted, but at the same time I deny full responsibility for my wanting.)
As a stranger, what is the chance that (1) you will hear it explained in a way that will make sense to you, (2) you will remember it correctly, and (3) when the opportunity comes, you will remember to use it? Pretty much zero, I guess. Unless you decide to put an extra effort into this aspect of the language specifically. But considering the costs and benefits, you are extremely unlikely to do it, unless being a professional translator to Russian is extremely important for you. (Or unless you speak a Slavic language that has a similar concept, so the costs are lower for you, but even then you need a motivation to be very good at Russian.)
Now when you think about contexts, these kinds of words are likely to be used in stories, but don't appear in technical literature or official documents, etc. So if you are a Russian child, you heard them a lot. If you are a Russian-speaking foreigner working in Russia, there is a chance you will literally never hear it at the workplace.
The paper doesn't even find a statistically significant difference. The point estimate is that highly advanced L2 speakers do worse than natives, but natives make almost as many mistakes.
They did find differences with the advanced L2 speakers, but I guess we care about the highly advanced ones. They point out a difference at the bottom of page 18, though admittedly, it doesn't seem to be that much of a big deal and I don't know enough about statistics to tell whether it's very meaningful.
'mne poslyshalos' I think. This one has connotations of 'hearing things,' though.
Note: "Mne poslyshalis’ shagi na krishe." was the original example; I just removed the unchanging parts of the sentences.
Ah I see, yes you are right. That is the correct plural in this case. Sorry about that! 'Mne poslyshalos chtoto' ("something made itself heard by me") would be the singular, vs the plural above ("the steps on the roof made themselves heard by me."). Or at least I think it would be -- I might be losing my ear for Russian.
If all you are saying is that people who start learning a language at age 2 are almost always better at it than people who start learning the same language at age 20, I don't think anyone would disagree. The whole discussion is about controlling for confounders...
Yes and no - the whole discussion is actually two discussions, I think.
One is about in-principle possibility, the presence of something like a critical period, etc. There it is crucial for confounders.
The second discussion is about in-practice possibility, whether people starting later can reasonably expect to get to the same level of proficiency. Here the "confounders" are actually part of what this is about.
What do you mean by "theoretical"? Is this just an insult you fling at people you disagree with?
There is a rule of thumb that achieving exceptional mastery in any specific field requires 10,000 hours of practice. This seems to be true across fields, in classical musicians, chess players, sports players, scholars/academics etc... It's a lot easier to meet that standard if you start from childhood. Note that people who make this claim in the computing field are talking about hackers, not professional programmers in a general sense. It's very possible to become a productive programmer at any age.
A similar argument was presented in an article at Slate: Affirmative action doesn’t work. It never did. It’s time for a new solution.:
Paraphrased from #lesswrong: "Is it wrong to shoot everyone who believes Tegmark level 4?" "No, because, according to them, it happens anyway". (It's tongue-in-cheek, for you humorless types.)
Has anyone else had one of those odd moments when you've accidentally confirmed reductionism (of a sort) by unknowingly responding to a situation almost identically to the last time or times you encountered it? For my part, I once gave the same condolences to an acquaintance who was living with someone we both knew to be very unpleasant, and also just attempted to add the word for "tomato" in Lojban to my list of words after seeing the Pomodoro technique mentioned.
A freaky thing I once saw... when my daughter was about 3, there were certain things she responded to verbally. I can't remember what the thing was in this example, but it was something like me asking her "who is your rabbit?" and her replying "Kisses" (which was the name of her rabbit).
I had videoed some of this exchange and was playing it on a TV with her in the room. I was appalled to hear her responding "Kisses" upon hearing me on the TV saying "who is your favorite rabbit." Her response was extremely similar to her response on the video, with tremendous overlap in timing, tone, and inflection. Maybe 20 to 50 ms off in timing (it almost sounded like unison).
I really had the sense that she was a machine and it did not feel good.
After a brain surgery, my father developed anterograde amnesia. Think Memento by Chris Nolan. His reactions to different comments/situations were always identical. If I were to mention a certain word, it would always invoke the same joke. Seeing his wife wearing a certain dress always produced the same witty comment. He was also equally amused by his own wittiness every time.
For several months after the surgery he had to be kept on tight watch, and was prone to just do something that had been routine pre-op, so we found a joke he found extremely funny and which he hadn't heard before the surgery, and we would tell it every time we wanted him to forget where he was going. So, he would laugh for a good while, get completely disoriented, and go back to his sofa.
For a long while, we were unable to convince him that he had a problem, or even that he had had the surgery (he would explain the scar away through some fantasy). And even when we managed, it lasted only a minute or two. Since then, I've developed several signals I would use if I found myself in an isomorphic situation. I had already read HPMoR by that time, but had discarded Harry's lip-biting as mostly pointless in real life.
These are both pretty much exactly what I'm thinking of! The feeling that someone (or you!) is/are a terrifyingly predictable black box.
My goal in life is to become someone so predictable that you can figure out what I'll do just by calculating what choice would maximize utility.
That seems eminently exploitable and consequently extremely dangerous. Safety and unexpected delight lie in unpredictability.
This doesn't seem related to reductionism to me, except in that most reductionists don't believe in Knightian free will.
Sort of in the sense of human minds being more like fixed black boxes that one might like to think. What's Knightian free will, though?
Knightian uncertainty is uncertainty where probabilities can't even be applied. I'm not convinced it exists. Some people seem to think free will is rescued by it; that the human mind could be unpredictable even in theory, and this somehow means it's "you" "making choices". This seems like deep confusion to me, and so I'm probably not expressing their position correctly.
Reductionism could be consistent with that, though, if you explained the mind's workings in terms of the simplest Knightian atomic thingies you could.
Can you give me some examples of what some people think constitutes Knightian uncertainty? Also: what do they mean by "you"? They seem to be postulating something supernatural.
Again, I'm not a good choice for an explainer of this stuff, but you could try http://www.scottaaronson.com/blog/?p=1438
Thanks! I'll have a read through this.
I decided I should actually read the paper myself, and... as of page 7, it sure looks like I was misrepresenting Aaronson's position, at least. (I had only skimmed a couple Less Wrong threads on his paper.)
Would you prefer that one person be horribly tortured for eternity without hope or rest, or that 3^^^3 people die?
One person being horribly tortured for eternity is equivalent to that one person being copied infinite times and having each copy tortured for the rest of their life. Death is better than a lifetime of horrible torture, and 3^^^3, despite being bigger than a whole lot of numbers, is still smaller than infinity.
What if the 3^^^3 people were one immortal person?
Well then the answer is still obviously death, and that fact has become more immediately intuitive - probably even those who disagreed with my assessment of the original question would agree with my choice given the scenario "an immortal person is tortured forever or an otherwise-immortal person dies"
Being horribly tortured is worse than death, so I'd pick death.
Since people were pretty encouraging about the quest to do one's part to help humanity, I have a follow-up question. (Hope it's okay to post twice on the same open thread...)
Perhaps this is a false dichotomy. If so, just let me know. I'm basically wondering if it's more worthwhile to work on transitioning to alternative/renewable energy sources (i.e. we need to develop solar power or whatever else before all the oil and coal run out, and to avoid any potential disastrous climate change effects) or to work on changing human nature itself to better address the aforementioned energy problem in terms of better judgment and decision-making. Basically, it seems like humanity may destroy itself (if not via climate change, then something else) if it doesn't first address its deficiencies.
However, since energy/climate issues seem pretty pressing and changing human judgment is almost purely speculative (I know CFAR is working on that sort of thing, but I'm talking about more genetic or neurological changes), civilization may become too unstable before it can take advantage of any gains from cognitive enhancement and such. On the other hand, climate change/energy issues may not end up being that big of a deal, so it might be better to just focus on improving humanity to address other horrible issues as well, like inequality, psychopathic behavior, etc.
Of course, society as a whole should (and does) work on both of these things. But one individual can really only pick one to make a sizable impact -- or at the very least, one at a time. Which do you guys think may be more effective to work on?
[NOTE: I'm perfectly willing to admit that I may be completely wrong about climate change and energy issues, and that collective human judgment is in fact as good as it needs to be, and so I'm worrying about nothing and can rest easy donating to malaria charities or whatever.]
The core question is: "What kind of impact do you expect to make if you work on either issue?"
Do you think there is work to be done in the space of solar power development that people other than yourself aren't effectively doing? Do you think there is work to be done in terms of better judgment and decision-making that other people aren't already doing?
The problem with coal isn't that it's going to run out but that it kills hundreds of thousands of people via pollution and that it creates climate change.
Why? To me it seems much more effective to focus on more cognitive issues when you want to improve human judgment. Developing training to help people calibrate themselves against uncertainty seems to have a much higher return than trying to do fMRI studies or brain implants.
I'm familiar with questions like these (specifically, from 80000 hours), and I think it's fair to say that I probably wouldn't make a substantive contribution to any field, those included. Given that likelihood, I'm really just trying to determine what I feel is most important so I can feel like I'm working on something important, even if I only end up taking a job over someone else who could have done it equally well.
That said, I would hope to locate a "gap" where something was not being done that should be, and then try to fill that gap, such as volunteering my time for something. But there's no basis for me to surmise at this point which issue I would be able to contribute more to (for instance, I'm not a solar engineer).
At the moment, yes, but it seems like it has limited potential. I think of it a bit like bootstrapping: a judgment-impaired person (or an entire society) will likely make errors in determining how to improve their judgment, and the improvement seems slight and temporary compared to more fundamental, permanent changes in neurochemistry. I also think of it a bit like people's attempts to lose weight and stay fit. Yes, there are a lot of cognitive and behavioral changes people can make to facilitate that, but for many (most?) people, it remains a constant struggle -- one that many people are losing. But if we could hack things like that, "temptation" or "slipping" wouldn't be an issue.
From what I've gathered from my reading, the jury is kind of out on how disastrous climate change is going to be. Estimates seem to range from catastrophic to even slightly beneficial. You seem to think it will definitely be catastrophic. What have you come across that is certain about this?
The economy is quite capable of dealing with finite resources. If you have land with oil on it, you will only drill if the price of oil is increasing more slowly than the interest rate. If this is the case, then drilling for oil and using the value generated by it for some kind of investment is more helpful than just saving the oil.
Climate change is still an issue of course. The economy will only work that out if we tax energy in proportion to its externalities.
We should still keep in mind that climate change is a problem that will happen in the future, and we need to look at the much lower present value of the cost. If we have to spend 10% of our economy on making it twice as good a hundred years from now, it's most likely not worth it.
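The discounting point above can be made concrete with a toy present-value calculation. The 3% discount rate and the 100-year horizon here are illustrative assumptions, not real climate-economics estimates:

```python
def present_value(future_value, rate, years):
    """Value today of a payoff received `years` from now,
    discounted at a constant annual `rate`."""
    return future_value / (1 + rate) ** years

# A benefit worth 100% of today's GDP, delivered 100 years from now,
# discounted at an assumed 3% annual rate:
pv = present_value(1.00, 0.03, 100)
print(round(pv, 3))  # 0.052: about 5% of GDP in present-value terms,
# i.e. less than a 10%-of-GDP cost paid today
```

Under these (made-up) numbers the future benefit discounts to roughly half the present cost, which is the shape of the "most likely not worth it" argument; a lower discount rate would flip the conclusion.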
Would just like to make sure everyone here is aware of LessWrong.txt
Why?
Criticism's well and good, but 140 characters or less of out-of-context quotation doesn't lend itself to intelligent criticism. From the looks of that feed, about half of it is inferential distance problems and the other half is sacred cows, and neither one's very interesting.
If we can get anything from it, it's a reminder that killing sacred cows has social consequences. But I'm frankly tired of beating that particular drum.
Self-driving cars had better use (some approximation of) some form of acausal decision theory, even more so than a singleton AI, because the former will interact in PD-like and Chicken-like ways with other instantiations of the same algorithm.
Already deployed is a better example: computer network protocols.
Or different algorithms. How long after wide release will it be before someone modifies their car's code to drive aggressively, on the assumption that cars running the standard algorithm will move out of the way to avoid an accident?
(I call this "driving like a New Yorker." New Yorkers will know what I mean.)
That's like driving without a license. Obviously the driver (software) has to be licensed to drive the car, just as persons are. Software that operates deadly machinery has to be developed in specific ways, certified, and so on and so forth, for how many decades already? (Quite a few)
Self-driving cars have very complex goal metrics, along the lines of getting to the destination while disrupting traffic the least (still grossly oversimplifying).
The manufacturer is interested in every one of its cars getting to the destination in the least time, so the cars are programmed to optimize for the sake of all cars. It's also interested in getting human drivers to buy its cars, which also makes not driving like a jerk a goal. PD is problematic when agents are selfish, not when agents entirely share a goal. Think of two people in a PD played for money, who both want to donate all proceeds to the same charity. This changes the payoffs to the point where it's not a PD any more.
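The payoff change can be sketched with textbook prisoner's-dilemma numbers (T=5, R=3, P=1, S=0 — generic values, nothing specific to self-driving cars): once both players' winnings go to the same charity, each player effectively maximizes the sum of both payoffs, and cooperation becomes dominant.

```python
# payoff[(my_move, their_move)] = money I win (C = cooperate, D = defect)
payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def charity_utility(me, them):
    """Utility when both players donate all winnings to the same
    charity: each player cares about the sum of both payoffs."""
    return payoff[(me, them)] + payoff[(them, me)]

# In the original game, defection strictly dominates:
assert payoff[('D', 'C')] > payoff[('C', 'C')]  # 5 > 3
assert payoff[('D', 'D')] > payoff[('C', 'D')]  # 1 > 0

# With the shared-charity utility, cooperation strictly dominates
# instead, so the game is no longer a prisoner's dilemma:
assert charity_utility('C', 'C') > charity_utility('D', 'C')  # 6 > 5
assert charity_utility('C', 'D') > charity_utility('D', 'D')  # 5 > 2
```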
<cynicism>Depends on who those humans are. For a large fraction of low-IQ young males...</cynicism>
I dunno, having a self-driving jerk car takes away whatever machismo one could have about driving... there's something about a car where you can go macho and drive manual to be a jerk.
I don't think it'd help sales at all if self driving cars were causing accidents while themselves evading the collision entirely.
I have been reviewing FUE hair transplants, and I would like LWers' opinion. I'm actually surprised this isn't covered, as it seems relevant to many users.
As far as I can tell, the downsides are:
- Mild scarring on the back of the head
- Doesn't prevent continued hair loss, so if you get e.g. a bald spot filled in, then you will in a few years have a spot of hair in an oasis
- Cost
- Mild pain/hassle in the initial weeks
- Possibility of finding a dodgy surgeon
The scarring is basically covered if you have a couple of days' hair growth there, and I am fine with that as a long-term solution. The continued hair loss is potentially dealt with by a repeated transplant and more certainly dealt with by getting the initial transplant "all over", i.e. thickening hair, rather than just moving the hairline forward. But it is the area I am most uncertain about. I should add that I am 29 with male pattern baldness on both sides of my family, Norwood level 4, and have seen my hair loss stabilise (I have been taking propecia for the last year).
Ignoring the cost, my questions are:
- Is anyone aware of any other problems besides these?
- Do you think this solution works?
- Any ideas on how to pick the right surgeon (using someone in Singapore most probably)?
This is quite far down the page, even though I posted it a few hours ago. Is that an intended effect of the upvoting/downvoting system? (it may well be - I don't understand how the algorithm assigns comment rankings)
Just below and to the right of the post there's a choice of which algorithm to use for sorting comments. I don't remember what the default is, but I do know that at least some of them sort by votes (possibly with other factors). I normally use the sorting "Old" (i.e. oldest first), and then your comment is near the bottom of the page since so many were posted before it.
The algorithm is a complicated mix of recency and score, but on an open thread that only lasts a week, recency is fairly uniform, so it's pretty much just score.
I'm looking into Bayesian Reasoning and trying to get a basic handle on it and how it differs from traditional thinking. When I read about how it (apparently) takes into account various explanations for observed things once they are observed, I was immediately reminded of Richard Feynman's opinion of Flying Saucers. Is Feynman giving an example of proper Bayesian thinking here?
http://www.youtube.com/watch?v=wLaRXYai19A
It's certainly in the right spirit. He's reasoning backwards in the same way Bayesian reasoning does: here's what I see; here's what I know about possible mechanisms for how that could be observed and their prior probabilities; so here's what I think is most likely to be really going on.
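That backwards reasoning is just Bayes' rule. A minimal sketch with made-up numbers for a saucer sighting (the priors and likelihoods here are assumptions for illustration, not anything Feynman stated):

```python
# Made-up priors over two candidate mechanisms for a saucer sighting,
# and made-up likelihoods P(sighting | mechanism).
priors = {'alien craft': 0.001, 'mundane cause': 0.999}
likelihood = {'alien craft': 0.9, 'mundane cause': 0.1}

# Bayes' rule: P(h | sighting) = P(h) * P(sighting | h) / P(sighting)
evidence = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}

print(round(posterior['mundane cause'], 3))  # 0.991: the mundane
# explanation still wins, even though the sighting is 9x more likely
# under the alien hypothesis, because its prior was so much higher
```

This is the structure of "it's much more likely the result of the known irrationality of terrestrial intelligence than of the unknown efforts of extraterrestrial intelligence": the prior does most of the work.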
I am not sure if this deserves its own post. I figured I would post here and then add it to Discussion if there is sufficient interest.
I recently started reading Learn You A Haskell For Great Good. This is the first time I have attempted to learn a functional language, and I am only a beginner in imperative languages (Java). I am looking for some exercises that could go along with the e-book. Ideally, the exercises would encourage learning new material in a similar order to how the book is presented. I am happy to substitute/complement with a different resource as well, if it contains problems that allow one to practice structurally. If you know of any such exercises, I would appreciate a link to them. I am aware that Project Euler is often advised; does it effectively teach programming skills, or just problem solving? (Then again, I am not entirely sure if there is a difference at this point in my education.)
Thanks for the help!
Awesome, thanks so much! If you were to recommend one of these resources to begin with, which would it be?
Happy to help!
I like both Project Euler and 99 Haskell problems a lot. They're great for building success spirals.
Modafinil is prescription-only in the US, so to get it you have to do illegal things. However, I note that (presumably due to some legislative oversight?) the related drug Adrafinil is unregulated, you can buy it right off Amazon. Does anyone know how Adrafinil and Modafinil compare in terms of effectiveness and safety?
No, you don't have to do illegal things. Another option is to convince your doctor to give you a prescription. I think people on LW greatly overestimate the difficulty of this.
Some info on getting a prescription here: http://www.bulletproofexec.com/q-a-why-i-use-modafinil-provigil/
I think ADD/ADHD will likely be a harder sell; my impression is that people are already falsely claiming that in order to get Adderall etc.
I don't even mean to suggest lying. I mean something simple like "I think this drug might help me concentrate."
A formal diagnosis of ADD or narcolepsy is carte blanche for amphetamine prescription. Because it is highly scheduled and, moreover, has a big black market, doctors guard this diagnosis carefully. Whereas, modafinil is lightly scheduled and doesn't have a black market (not driven by prescriptions), so they are less nervous about giving it out in ADD-ish situations.
But doctors very much do not like it when a new patient comes in asking for a specific drug.
See Gwern's page.
Adrafinil has additional downstream metabolites besides just modafinil, but I don't know exactly what they are. Some claim it is harder on the liver, implying some of the metabolites are mildly toxic, but that's not really saying much; lots of stuff we eat is mildly toxic. Adrafinil is generally well tolerated, and if your goal is finding out the effects of modafinil on your system and you can't get modafinil itself, I would say go for it. If you then decided to take moda long term, I would say do more research.
IANAD. Research thoroughly and consult with a doctor if you have any medical conditions or are taking any medications.
Andy Weir's "The Martian" is absolutely fucking brilliant rationalist fiction, and it was published in paper book format a few days ago.
I pre-ordered it because I love his short story The Egg, not knowing I'd get a super-rationalist protagonist in a radical piece of science porn that downright worships space travel. Also, fart jokes. I love it, and if you're an LW type of guy, you probably will too.