Open Thread, May 19 - 25, 2014
You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should start on Monday, and end on Sunday.
4. Open Threads should be posted in Discussion, and not Main.
Yann LeCun, head of Facebook's AI-lab, did an AMA on /r/MachineLearning/ a few days ago. You can find the thread here.
In response to someone asking "What are your biggest hopes and fears as they pertain to the future of artificial intelligence?", LeCun responds that:
EDIT: I didn't see this one the first time. In response to someone asking "What do you think of the Friendly AI effort led by Yudkowsky? (e.g. is it premature? or fully worth the time to reduce the related aI existential risk?)", LeCun says that:
I'd love to see a discussion between people like LeCun, Norvig, Yudkowsky and e.g. Russell. A discussion where they talk about what exactly they mean when they think about "AI risks", and why they disagree, if they disagree.
Right now I often have the feeling that many people mean completely different things when they talk about AI risks. One person might mean that a lot of jobs will be gone, or that AI will destroy privacy, while the other person means something along the lines of "5 people in a basement launch a seed AI, which then turns the world into computronium". These are vastly different perceptions, and I personally find myself somewhere between those positions.
LeCun and Norvig seem to disagree that there will be an uncontrollable intelligence explosion. And I am still not sure what exactly Russell believes.
Anyway, it is possible to figure this out. You just have to ask the right questions. And this never seems to happen when MIRI or FHI talk to experts. They never specifically ask about their controversial beliefs. If you e.g. ask someone if they agree that general AI could be a risk, a yes/no answer provides very little information about how much they agree with MIRI. You'll have to ask specific questions.
I have the privilege of working with a small group of young (12-14) highly gifted math students for 45 minutes a week for the next 5 weeks. I have extraordinary freedom with what we cover. Mathematically, we've covered some game theory and Bayes' theorem. I've also had a chance to discuss some non-mathy things, like Anki.
I only found out about Anki after I'd taken a bunch of courses, and I've had to spend a bunch of time restudying everything I'd previously learned and forgotten. It would have been really nice if someone had told me about Anki when I was 12.
So, what I want to ask Lesswrong, since I suspect most of you are like the kids I'm working with except older, is what blind spots did 12-14-year-old you have I could point out to the kids I'm working with?
Heh, if I were 12-14 these days, the main message I would send myself would be: start making and publishing mobile games while you have a lot of free time, so that when you finish university, you have enough passive income that you don't have to take a job, because having a job destroys your most precious resources: time and energy.
(And a hyperlink or two to some PUA blogs. Yeah, I know some people object against this, but this is what I would definitely send to myself. Sending it to other kids would be more problematic.)
I would recommend Anki only for learning languages. For other things I would recommend writing notes (text documents); although this advice may be too me-optimized. One computer directory called "knowledge", subdirectories per subject, files per topic -- that's a good starting structure; you can change it later, if you need. But making notes becomes really important at the university level.
I would stress the importance of things other than math. Gifted kids sometimes focus on their strong skills and ignore their weak ones -- they put all their attention into where they receive praise. This is a big mistake. However, saying this without providing actionable advice does not help. For example, my weak spots were exercise and social skills. For social skills, a list of recommended books could help, with emphasis that I should not only read the books but also practice what I learned. For exercise, a simple routine plus HabitRPG could do the job. Maybe also emphasise that I should not focus on how I compare with others, but on how I compare with yesterday's me.
Something about the importance of keeping in contact with smart people, and about the insanity of the world in general. As a smart person, talking with other smart people increases your powers: both because you can develop the ideas you understand together with them, and because you can ask them about the things you don't. (A stupid person will not understand what you are saying, and will give you harmful advice about the things you asked.) In school you are supposed to work alone, but in real life a lot of success is achieved by teams; and the best teams are composed of good people, not of random people.
Another piece of advice that is risky to give to other kids: religion is bullshit and a waste of time. People will try to manipulate you, using lies and emotional pressure. Whatever other positive traits they have, try to find other people who have the same positive traits without the mental poison; even if it takes more time, it's worth it.
I don't know how much of this falls under your remit, but I had quite a few educational blind-spots I inherited from my parents, who didn't come from a higher-educated background. If any of your students are in a similar position, it's worth checking they don't have any ludicrous expectations out of the next several years of education which no-one close to them is in a position to correct.
Blind spots such as?
I'm not sure any specific examples from my own experience would generalise very well.
If I were to translate my comment into a specific piece of generally-applicable advice, it would be to give students a realistic overview of what their forthcoming formal education involves, what it expects from them, and what options they have available.
As mentioned, this may be outside of the OP's remit.
The specific examples may not be used, but would clarify what sort of thing you're talking about.
One example: certain scholastic activities are simply less important than others. If your model is "everything given to me by an authority figure is equally important", you don't manage your workload so well.
I think most of my blindspots before roughly the age of 18 involved not understanding that I'm personally responsible for my success and the extent of my knowledge and that "good enough" doesn't cut it. If I were to send a message back to 14-year-old!Me, I'd tell him that he has a lot of potential, but that he can't rely on others to fulfill that potential.
Speaking as a somewhat gifted seventeen year old, I'd really like to have known about AoPS, HPMOR and the Sequences.
Also, I'd like to have had in my mind the notion that my formal education is not optimised for me, and that I really need to optimise it myself. Speaking more concretely, I think that most teenagers in Britain pick their A Levels (if they do them at all) based on what classes the other people around them are doing, which isn't very useful. Speaking to a friend, though, I realised that when he was picking his third A Level, there was no other A Level he needed in order to get into his main area of specialisation (jazz musician), and his time would have been better spent not doing a third A Level at all; he needed to think more meta. He was just doing one because that's what everyone seems to think you should do. I'm about to give up a class because it's not going to help me get anywhere, and I can use the time better and learn what I want alone anyway. So, really optimise.
Don't know if that helps. And AoPS is ridiculously useful.
I was in such a program when I was 12-14 (run by the William Stern Foundation in Hamburg, Germany), and the curriculum consisted mostly of very diverse 'math' problems, prepared in a way that made them accessible to us in a fun way without introducing too much up-front terminology or notation. Examples I remember off the top of my head:
Turing machines (dressed up as short-sighted busy beavers)
generalized Nim (played with lots of real matches)
tilings of the plane
conveys game of life (easy on paper)
More that I just looked up in an older folder:
distance metrics on a graph
multi-way balances
continued fractions (cool for approximations; I still use this)
logical derivations about the beliefs of people whose dreams are indistinguishable from reality
generalized magic squares
Fibonacci sequences and http://en.wikipedia.org/wiki/Missing_square_puzzle
Drawing fractals (the iterated function ones; with printouts of some)
In general, only an exposition was given, with no set task to solve, or just some introductory questions. The patterns to be detected were the primary reward.
We were not introduced to really practical applications, but I'm unsure whether that would have been helpful, or even whether it would have been interesting. My interest at that time stemmed from the material being systematic patterns that I could approach abstractly and symbolically and 'solve'. I'm not sure whether the Sequences would have been interesting in that way; their patterns are clear only in hindsight.
What should work is Bayes' rule - at least in the form that can be visualized (as a tiling of the unit square) or easily derived symbolically.
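The unit-square picture maps directly onto the arithmetic: split the square horizontally by the prior, vertically by the likelihoods, and the posterior is a ratio of areas. A minimal sketch (the base rate and test accuracies below are made-up numbers for illustration):

```python
# Bayes' rule as areas in the unit square.
# Horizontal split: prior P(H) vs. P(not H).
# Vertical split within each column: likelihood of the evidence E.
def bayes_posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Area of the "H and E" rectangle.
    area_h_and_e = prior_h * p_e_given_h
    # Area of the "not-H and E" rectangle.
    area_not_h_and_e = (1 - prior_h) * p_e_given_not_h
    # Posterior = share of the total "E" area belonging to H.
    return area_h_and_e / (area_h_and_e + area_not_h_and_e)

# Example: 1% base rate, 90% true-positive rate, 5% false-positive rate.
posterior = bayes_posterior(0.01, 0.90, 0.05)
print(round(posterior, 3))  # 0.154
```

The same rectangles can be drawn on graph paper, which is the visual form mentioned above.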
Guessing and calibration games should also work. You can also take standard games and add a layer of complexity to them (please not arbitrary complications, but helpful ones; a minimal example: play Uno, but cards don't have to match in color+number; instead they must satisfy some number-theoretic relation, e.g. +(2,5) modulo (4,10)).
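Under one reading of that rule (my assumption: the color index must advance by 2 mod 4 and the number by 5 mod 10), the modified matching rule is a one-line predicate:

```python
# One reading of the "+(2,5) modulo (4,10)" Uno variant:
# a card (color, number) may be played on the previous card
# if its color index is shifted by 2 (mod 4) and its number by 5 (mod 10).
def playable(prev_card, next_card):
    prev_color, prev_number = prev_card
    next_color, next_number = next_card
    return ((next_color - prev_color) % 4 == 2
            and (next_number - prev_number) % 10 == 5)

print(playable((0, 3), (2, 8)))  # True: color +2, number +5
print(playable((3, 9), (1, 4)))  # True: wraps around both moduli
print(playable((0, 3), (1, 8)))  # False: color only shifted by 1
```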
I assume you mean Conway's game of life.
Yes, of course. That, and we tried variations of the rule-set. We also discovered the glider.
It is interesting what can come out of this seed. When I later had an Atari, I wrote an optimized simulator in assembly which aggregated over multiple cells, and I even tried to use the blitter, reducing the number of clock cycles per cell as far as I could. This seed became part of the mosaic of concepts that now sits behind my understanding of complex processes.
That sounds interesting. Would you care to elaborate?
The story goes as follows (translated from German):
"Once I dreamed that there was an island called "the island of dreams". The inhabitants of the island dream very vividly and lucidly. Indeed, the images that occur during their sleep are as clear and present as their waking perceptions. Moreover, their dream life follows the same continuity from night to night as their waking perception does from day to day. Consequently, some inhabitants have difficulty distinguishing whether they are awake or asleep.
Now every inhabitant belongs to one of two groups: day-type and night-type. The inhabitants of the day-type are characterized by their thoughts being true during the day and false during the night. For the night-type it is the opposite: their thoughts during sleep are true, and those while awake are false."
Questions:
...
Just curious -- are you teaching at a math camp? Which one? (I have a lot of friends from Canada/USA Mathcamp, although I didn't go myself.)
No. I know one of my former teachers outside of school, and we decided it would be a good thing if I ran an afterschool program for the mathcounts kids after it had ended.
Downvoted for dismissing the humanities.
There's a difference between choosing a subject as your college major (which amounts to future employment signalling) and engaging in the study of a subject.
One can read in one's spare time or learn languages or act. If one does not come from wealth not majoring in something remunerative in college is a mistake if you will actually want money later.
He didn't dismiss the humanities he said studying them at university was a poor decision.
My parents made me study business management instead of literature. My life has been much more boring and unfulfilling as a result, because the jobs I can apply for don't interest me, and the jobs I want demand qualifications I lack. In my personal experience, working in your passion beats working for the money.
How sure are you what your life would have been like if you had studied literature instead?
Why haven't you gone back to college for a Masters in English Literature or something along those lines? Robin Hanson was 35 before he got his Ph.D. in Economics and he's doing ok. The market for humanities scholars is not as forgiving as that for Economics but that's what you want, right?
After some years of self-analysis and odd jobs, I'm close to finishing a second degree in journalism.
Moreover, it wasn't really presented as general advice, but advice for their own younger version. It's not generally applicable advice (not everyone will be happy or successful in STEM fields), but I think it's safe to assume it is sound advice for Young!nydwracu.
Or even if it was intended as generally applicable advice, it's still directed at kids gifted at mathematics, who will have a high likelihood of enjoying STEM fields.
It was a blind spot that I had until my senior year of college, when I realized that I wanted to make a lot of money, and that it was very unlikely that majoring in philosophy would let me do so. Had I realized this at 12-14, I would've saved myself a lot of time; but I didn't, so I'm probably going to have to go back for another degree.
If you don't care about money or you have the connections to succeed with a non-STEM degree, that's another thing. But that's not the question that was asked.
Some actionable advice: Keep written notes about people (don't let them know about that). For every person, create a file that will contain their name, e-mail, web page, facebook link, etc., and the information about their hobbies, what you did together, whom they know, etc. Plus a photo.
This will come very useful if you haven't been in contact with the person for years, and want to reconnect. (Read the whole file before you call them, and read it again before you meet them.) Bonus points if you can make the information searchable, so you can ask queries like "Who can speak Japanese?" or "Who can program in Ruby?".
This may feel a bit creepy, but many companies and entrepreneurs do something similar, and it brings them profit. And the people on the other side like it (at least if they don't suspect you to use a system for this). Simply think about your hard disk as your extended memory. There would be nothing wrong or creepy if you simply remembered all this stuff; and there are people with better memory who would.
Maybe make some schedule to reconnect with each person once in a few years, so they don't forget you completely. This also gives you an opportunity to update the info.
If you start doing it while young, your high-school and university classmates will already make a decent database. Then add your colleagues. You will appreciate it ten years later, when you would naturally forget most of them.
When you have a decent database, you can provide a useful social service by connecting people. -- Your friend X asks you: "Do you know someone who can program in Ruby?" "Uhm, not sure, but let me make a note and I'll think about it." Go home, look at the database. Find Y. Ask Y whether it is okay to give their contact to someone interested in Ruby. Give X contact information for Y. At this moment your friend X owes you a favor, and if X and Y do some successful business, Y owes you a favor too. The cost to you is virtually zero, apart from the cost of maintaining the database, which you would do anyway.
An important note: there is of course a huge difference between close friends and random acquaintances, but both can be useful in some situations, so you want to keep both in the database. Don't be selective. If your database has too many people, think about better navigation, but don't remove items.
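As a minimal sketch of how little machinery such a database actually needs (the names, fields, and tags below are invented), a list of tagged records already answers queries like "Who can program in Ruby?":

```python
# Minimal searchable contact database: one dict per person,
# with free-form tags for skills, groups, and shared history.
contacts = [
    {"name": "X", "email": "x@example.com",
     "tags": {"high-school", "speaks-japanese"}},
    {"name": "Y", "email": "y@example.com",
     "tags": {"former-colleague", "ruby", "rationalist"}},
]

def who(tag):
    """Answer queries like 'Who can program in Ruby?'"""
    return [c["name"] for c in contacts if tag in c["tags"]]

print(who("ruby"))             # ['Y']
print(who("speaks-japanese"))  # ['X']
```

Any note-taking format that supports tags (or a directory of text files plus grep) gives you the same capability.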
I'm inclined to ask: are there ready-made software solutions for this, or should I roll my own in Python or some office program? If it weren't for the secrecy factor, I'd write a simple program to put on my GitHub and show off programming skills.
I don't know. But if I really did it (instead of just talking that this is the wise thing to do), I would probably use some offline wiki software. Preferably open source. Or at least something I can easily extract data from if I change my mind later.
I would use something like wiki -- nodes connected by hyperlinks -- because I tried this in the past with hierarchical structure, and it didn't work well. Sometimes a person is a member of multiple groups, which makes classification difficult. Or if you have a few dozen people in the database, it becomes difficult to navigate (which in turn becomes a trivial inconvenience for adding more people, which defeats the whole purpose).
But if every person (important or unimportant) has their own node, and you also create nodes for groups (e.g. former high school classmates, former colleagues from company X, rationalists, ...), you can find anyone with two clicks: click on the category, click on the name. The hyperlinks would also be useful for describing how people are connected with each other. It would also be nice to have automatic collections of nodes that share some attribute (e.g. can program in Ruby); but you can manually add the links in both directions.
A few years ago I looked at some existing software; a lot of it was nice, but each missed a feature or two I considered important. (For example, it didn't support Unicode, or required a web server, or just contained too many bugs.) In hindsight, had I just used one of them, for example the one that didn't support Unicode, it would still have been better than having none at all.
Writing your own program... uhm, consider the planning fallacy. Is this the best use of your time? And by the way, if you do something like that, make it a general-purpose offline Unicode wiki-like editor, so that people can also use it for many other things.
The OpenWorm Kickstarter ends in a few hours, and they're almost to their goal! Pitch in if you want to help fund the world's first uploads.
Update: They made it.
This is a test posting to determine the time zone of the timestamps, posted at 09:13 BST / 08:13 UTC.
ETA: it's UTC.
I have a random mathematical idea, not sure what it means, whether it is somehow useful, or whether anyone has explored this before. So I guess I'll just write it here.
Imagine the most unexpected sequence of bits. What would it look like? Well, probably not what you'd expect, by definition, right? But let's be more specific.
By "expecting" I mean this: You have a prediction machine, similar to AIXI. You show the first N bits of the sequence to the machine, and the machine tries to predict the following bit. And the most unexpected sequence is one where the machine makes the most guesses wrong; preferably all of them.
More precisely: The prediction machine starts with imagining all possible algorithms that could generate sequences of bits, and it assigns them probability according to the Solomonoff prior. (Which is impossible to do in real life, because of the infinities involved, etc.) Then it receives the first N bits of the sequence, and removes all algorithms which would not generate a sequence starting with these N bits. Now it normalizes the probabilities of the remaining algorithms, and lets them vote on whether the next bit would be 0 or 1.
However, our sequence is generated in defiance of the prediction machine. We actually don't have any sequence in advance. We just ask the prediction machine what the next bit is (starting with the empty initial sequence), and then do the exact opposite. (There is some analogy with Cantor's diagonal proof.) Then we send the sequence with this new bit to the machine, ask it to predict the next bit, and again do the opposite. Etc.
There is this technical detail, that the prediction machine may answer "I don't know" if exactly half of the remaining algorithms predict that the next bit will be 0, and other half predicts that it will be 1. Let's say that if we receive this specific answer, we will always add 0 to the end of the sequence. (But if the machine thinks it's 0 with probability 50.000001%, and 1 with probability 49.999999%, it will output "0", and we will add 1 to the end of the sequence.)
So... at the beginning, there is no way to predict the first bit, so the machine says "I don't know" and the first bit is 0. At that moment, the prediction of the following bit is 0 (because the "only 0's" hypothesis is very simple), so the first two bits are 01. I am not sure here, but my next prediction (though I am predicting this with naive human reasoning, no math) would be 0 (as in "010101..."), so the first three bits are 011. -- And I don't dare to speculate about the following bits.
The exact sequence depends on how exactly the prediction machine defines the "algorithms that generate the sequence of bits" (the technical details of the language these algorithms are written in), but can something still be said about these "most unexpected" sequences in general? My guess is that to a human observer they would look like random noise. -- Which contradicts my initial words that the sequence would not be what you'd expect... but I guess the answer is that the generation process is trying to surprise the prediction machine, not me as a human.
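Since AIXI itself is uncomputable, the construction can only be sketched against a computable stand-in. The predictor below is an invented toy (a majority vote over small-period hypotheses), not Solomonoff induction, but the "ask, then do the opposite" loop is the same:

```python
# Diagonalizing against a predictor: ask it for the next bit, then
# append the opposite.  AIXI itself is uncomputable, so the predictor
# here is a toy stand-in: a majority vote among the hypotheses
# "the sequence is periodic with period p" for small p.
def toy_predictor(bits, max_period=4):
    if not bits:
        return None  # "I don't know"
    votes = {0: 0, 1: 0}
    for p in range(1, min(max_period, len(bits)) + 1):
        # A period-p hypothesis survives only if it matches all data so far.
        if all(bits[i] == bits[i % p] for i in range(len(bits))):
            votes[bits[len(bits) % p]] += 1
    if votes[0] == votes[1]:
        return None  # tie, or no surviving hypotheses
    return 0 if votes[0] > votes[1] else 1

def adversarial_sequence(n):
    bits = []
    for _ in range(n):
        guess = toy_predictor(bits)
        # On "I don't know", append 0; otherwise do the exact opposite.
        bits.append(0 if guess is None else 1 - guess)
    return bits

print(adversarial_sequence(8))
```

By construction, every definite guess this predictor makes along the way is wrong; only the tie-breaking bits are "free".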
"What is the specific pattern of bits?" and "Give a vague description that applies to both this pattern and asymptotically 100% of possible patterns of bits" are very different questions. You're asking the machine the first question and the human the second question, so I'm not surprised the answers are different.
In order to capture your intuition that a random sequence is "unsurprising", you want the predictor to output a distribution over {0,1} — or equivalently, a subjective probability p of the next bit being 1. The predictor tries to maximize the expectation of a proper scoring rule. In that case, the maximally unexpected sequence will be random, and the probability of the sequence will approach 2^{-n}.
Allowing the predictor to output {0, 1, ?} is kind of like restricting its outputs to {0%, 50%, 100%}.
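The proper-scoring-rule claim can be checked numerically: under the log score, the expected score of reporting probability q when the true probability is p is maximized at q = p, so against a fair-coin source the predictor can do no better than reporting 50%, and the adversary's "opposite" trick gains nothing. A minimal sketch:

```python
import math

# Expected log score for reporting probability q of "next bit is 1"
# when the bit is actually 1 with probability p.
def expected_log_score(p, q):
    return p * math.log(q) + (1 - p) * math.log(1 - q)

# With p = 0.5, the honest report q = 0.5 beats any other report,
# so no adversarial bit source can push the calibrated predictor's
# expected score below that of a fair coin.
p = 0.5
reports = [0.1, 0.3, 0.5, 0.7, 0.9]
best = max(reports, key=lambda q: expected_log_score(p, q))
print(best)  # 0.5
```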
In a random sequence, AIXI would guess on average half of the bits correctly. My goal was to create a specific sequence where it couldn't guess any. Not just a random sequence, but specifically... uhm... an "anti-inductive" one? The exact opposite of lawful, where random is merely halfway opposed. I don't care about other possible predictors, only about AIXI.
Imagine playing rock-paper-scissors against someone who beats you all the time, whatever you do. That's worse than random. This sequence would bring the mighty AIXI to tears... but I suspect to a human observer it would merely seem pseudo-random. And is probably not very useful for other goals than making fun of AIXI.
Shouldn't AIXI include itself (for all inputs) recursively? If so I don't think your sequence is well defined.
No, AIXI isn't computable and so does not include itself as a hypothesis.
Oh, I see.
Ok. I still think the sequence is random in the algorithmic information theory sense; i.e., it's incompressible. But I understand you're interested in the adversarial aspect of the scenario.
You only need a halting oracle to compute your adversarial sequence (because that's what it takes to run AIXI). A super-Solomonoff inductor that inducts over all Turing machines with access to halting oracles would be able to learn the sequence, I think. The adversarial sequence for that inductor would require a higher oracle to compute, and so on up the ordinal hierarchy.
I think adding a little meta-probability will help.
Since there's some probability of the sequence being "the most surprising", this would basically mean that several of the most surprising end up with basically the same probability. For example, if it takes n bits of data to define "the most surprising m-bit sequence", then there must be a 2^-n chance of that happening. Since there are 2^m sequences, and the most surprising sequence must have a probability of at most 2^-m, there must be at least 2^(m-n) most surprising sequences.
I have briefly thought about this idea in the context of password selection and password crackers: the "most unexpected" string (of some maximum length) is a good password. No deep reasoning here though.
Just in case anyone wants pointers to existing mathematical work on "unpredictable" sequences: Algorithmically random sequences (wikipedia)
My prediction machine can maximize its expected minimum score by outputting random guesses. Then your bitstring is precisely the complement of my random string, and is therefore drawn from the random distribution.
Does "most unexpected" differ from "least predictable" in any way? Seems like a random number generator would match any algorithm around 50% of the time so making an algorithm less predictable than that is impossible no?
If you could magically stop all human-on-human violence, or stop senescence (aging) for all humans, which would it be?
The latter. The former is already decreasing at an incredible speed but I see no trend for the latter.
The former is a major existential risk, while the latter is probably going to be solved soon(er), so the former.
Good point! Then again, a lot of the existential risks we talk about have to do with accidental extinction, not caused by aggression per se.
The former. Stopping ageing without giving us time to prepare for it would cause all sorts of problems in terms of increasing population. Whereas stopping violence would accelerate progress no end (if only for the resources it freed up).
Stopping aging (preferably, reversing aging) would also free up a lot of resources.
On that note, a 2006 article in The Scientist argues that simply slowing aging by seven years would produce large enough of an economic benefit to justify the US investing three billion dollars annually to this research. One excerpt:
I'm much more likely to die of aging than of violence; so I'd rather stop aging.
This seems to generalize well to the rest of humanity. I am surprised that most others who replied disagree. ISTM that most existential risks are due not to deliberate violence, but rather to unintended consequences.
Ending aging would almost certainly greatly diminish human-on-human violence, since increasing expected lifespans would lower time preference. Right?
Then again, if you had more to lose, maybe that would increase your incentive to protect yourself by getting the other guy before he gets you.
I would assume there's a sorting effect-- people would tend to figure out eventually that it's better to live among low-violence people.
One big question is... ok, we want anti-aging, but what age do you aim for? 17 has some advantages, but how about 25? 35? 50?
I've read that cell death overtakes cell division at around 35, so perhaps a body in some longer-term equilibrium condition would look 35?
(I suspect that putting a single age on is too crude though. The optimal age for a set of lungs may not be the same as that for a liver)
Optimal age is also relative to what you want to do-- different mental abilities peak at wildly different ages. If you stabilize your body at age 25 and then live to be 67 (edited-- was 53), will your verbal ability increase as much as if you let yourself age to 67?
Athletic abilities don't all peak at the same time, either. Strength doesn't peak at the same time as strength-to-weight ratio. Would you rather be a weightlifter or a gymnast? I believe coordination peaks late-- how do you feel about dressage?
Staying physically 25 doesn't mean you have to stop learning or physically developing. Surely the development of abilities in adult life is the result of exercising body and mind over the years, not part and parcel of senescence?
I don't think we know. I have no idea why verbal ability would peak so late, so I don't know whether brain changes associated with aging are part of the process.
I don't think it works that way. Currently most human-on-human violence is committed by young people (specifically young men), who by this logic should have the lowest time preference, since they can expect to have the most years left to live.
So, depending on how much of this decrease in violence with age is biological and how much is memetic, stopping aging (assuming it would lead to a large drop in the birth rate) may increase or decrease the total violence in the long run (as the chronological age of the population increases but its biological age decreases).
It would also depend on how anti-aging works. Suppose that every stage of life is made longer. If young male violence is mostly biological, then some young men would be violent for a few more years.
My problem with these questions is that it sorta gets difficult quickly. If you stopped aging today, I imagine there would very quickly be overpopulation issues, and many old patients in hospitals wouldn't die, etc.; and yet I find it difficult to think of major issues with the ending of violence (boxing champions would be out of a job). And even now, I'm sure someone has thought of a counterexample, and then the discussion would get harder. So even though I think that aging is more important than violence as a focus, the question asks about a hypothetical that is never going to occur (being able to just make that decision, I mean) and takes us away from reality into the nitty-gritty of a literal non-problem.
Why did you ask?
Edit: I didn't mean to make a case for either side, I was trying to suggest that the question itself seems unhelpful. We'll end up with a complicated technical discussion which is unlikely to have any practical value.
Sure does!
I don't count that as violence -- it is consensual (and there's a modicum of not-always-successful effort to prevent permanent harm).
This has been discussed at great depth and refuted, e.g. by Max More and de Grey.
No particular reason: every now and then a thought comes to mind.
If you take into account the risk of permanent brain damage, boxing (as well as rugby/football) is sacrificeable.
To give a sense of proportion: suppose that tomorrow, we developed literal immortality - not just an end to aging, but also prevented anyone from dying from any cause whatsoever. Further suppose that we could make it instantly available to everyone, and nobody would be so old as to be beyond help. So the death rate would drop to zero in a day.
Even if this completely unrealistic scenario were to take place, the overall US population growth would still only be about half of what it was during the height of the 1950s baby boom! Even in such a completely, utterly unrealistic scenario, it would still take around 53 years for the US population to double - assuming no compensating drop in birth rates in that whole time.
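The doubling-time arithmetic can be checked directly; the crude birth rate of roughly 13 per 1000 per year is an assumed round figure for the US of that era:

```python
import math

# Assume a US crude birth rate of roughly 13 births per 1000 people per year.
birth_rate = 13 / 1000

# With a death rate of zero, the population grows by this fraction
# annually; the doubling time follows from (1 + r)^t = 2.
doubling_time = math.log(2) / math.log(1 + birth_rate)
print(round(doubling_time))  # 54, consistent with the ~53-year figure above
```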
DragonBox has been mentioned on this site a few times, so I figured that people might be interested in knowing that its makers have come up with a new geometry game, Elements. It's currently available for Android and iOS platforms.
Geometry used to be my least favorite part of math and as a result, I hardly remember any of it. Playing this game with that background is weird: I don't really have a clue of what I'm doing or what the different powers represent, but they do have a clear logic to them, and now that I'm not playing, I find myself automatically looking for triangles and quadrilaterals (had to look up that word!) in everything that I see. Plus figuring out what the powers do represent makes for an interesting exercise.
I'd be curious to hear comments from anyone who was already familiar with Euclid before this.
Not an expert, but Euclid made some mistakes, like using superposition to prove some theorems. I'm curious how they handle those. (e.g. I think Euclid attempted to prove side-angle-side congruence, but Hilbert had to include it as an axiom.)
I'm reading "You're Calling Who A Cult Leader?" again, and now the answer seems obvious.
"I publicly express strong admiration towards the work of Person X." -- What could possibly be wrong about this? Why are our instincts screaming at us not to do this?
Well, assigning a very high status to someone else is dangerous for pretty much the same reason as assigning a very high status to yourself. (With possible exception if the person you admire happens to be the leader of the whole tribe. Even so, who are you to speak about such topics? As if your opinion had any meaning.) You are challenging the power balance in the tribe. Only instead of saying "Down with the current tribe leader; I should be the new leader!" you say "Down with the current tribe leader; my friend here should be the new leader!"
Either way, the current tribe leader is not going to like it. Neither will his allies, nor will the neutral people who merely want to prevent another internal fight where they have nothing to gain. All of them will tell you to shut up.
There is nothing bad per se about suggesting that e.g. Douglas R. Hofstadter should be the king of the nonconformist tribe. Maybe we can't unite behind this king, but neither can we unite behind any competitor, so... why not. At worst, some of us will ignore him.
The problem is, we live in the context of a larger society that merely tolerates us, and we know it. Praise Hofstadter too highly and someone outside of our circle may notice it. And suddenly the rest of the tribe might decide that it is going to get rid of our ill-mannered faction once and for all. (Not really, but this is what would have happened in the ancient jungle.) So we had better police ourselves... unless we are ready to pick a fight with the current leadership.
Being a strong fan of Douglas R. Hofstadter means challenging those who are strong fans of e.g. Brad Pitt. There is only so much room at the top of the status ladder, and our group is not strong enough to nominate even the highest-status one among us. So we would rather not act as if we were ready for open confrontation.
The irony is that if Douglas Hofstadter or Paul Graham or Eliezer Yudkowsky actually had their small cults, if they acted like dictators within the cult and ignored the rest of the world, the rest of the world would not care about them. Maybe people would even invent rationalizations about why everything is okay, and why anyone is free to follow anyone or anything. -- The problem starts with suggesting that they could somehow be important in the outside world; that the outside world has a reason to listen to them. That upsets people: it is the prospect of a power shift that concerns them. Cultish behavior well-contained within the cult doesn't. Saying that all nerds should read Hofstadter, that's okay. -- Saying that even non-nerds lose something valuable when they don't read something written by a member of our faction... now that's a battle call. (Are you suggesting that Hofstadter deserves a similar status to e.g. Dostoyevsky? Are you insane or what? Look at the size of your faction, our faction, and think again.)
I'm not sure about this - the "Yay Hofstadter" team looks about as big as the "Yay Dostoyevsky" team, at least on the anglophone internet.
Bad example, perhaps. Try some big names from anglophone literature.
Shakespeare? Okay, maybe too old. Gone with the Wind? Something that is officially blessed and taught at schools as the literature. Something that perhaps not many people enjoy, but that almost everyone perceives as having an officially high status. The thing you suggest should be replaced by Hofstadter.
I think you're overcomplicating it. People like Eliezer Yudkowsky and Paul Graham are certainly not cult leaders, but they have many strong opinions that are well outside the mainstream; they don't believe in, and in fact actively scorn, hedging/softening their expression of these opinions; and they have many readers, a visible subset of whom uncritically pattern all their opinions, mainstream or not, after them.
And pushback against excitement over Hofstadter can stem from legitimate disagreement about the importance/interestingness of his work. The pushback is proportional to the excitement that incites it.
I was talking to the loved one about this last night. She is going for ministry in the Church of England. (Yes, I remain a skeptical atheist.)
She is very charismatic (despite her introversion) and has the superpower of convincing people. I can just picture her standing up in front of a crowd and explaining to them how black is white, and the crowd each nodding their heads and saying "you know, when you think about it, black really is white ..." She often leads her Bible study group (the sort with several translations to hand and at least one person who can quote the original Greek) and all sorts of people - of all sorts of intelligence levels and all sorts of actual depths of thinking - get really convinced of her viewpoint on whatever the matter is.
The thing is, you can form a cult by accident. Something that looks very like one from the outside, anyway. If you have a string of odd ideas, and you're charismatic and convincing, you can explain your odd ideas to people and they'll take on your chain of logic, approximately cut'n'pasting them into their minds and then thinking of them as their own thoughts. This can result in a pile of people who have a shared set of odd beliefs, which looks pretty damn cultish from the outside. Note this requires no intention.
As I said to her, "The only thing stopping you from being L. Ron Hubbard is that you don't want to. You better hope that's enough."
(Phygs look like regular pigs, but with yellow wings.)
Disagreed. IMO, there should only be kings if there's a good reason... among other things, I suspect that status differences are epistemologically harmful. See Stanley Milgram's research and the Asch conformity experiment.
I also disagree with the rest of your analysis. I anticipate a different sense of internal revulsion when someone starts talking to me about why Sun Myung Moon is super great vs why Mike Huckabee is so great or why LeBron James is so great. In the case of LW, I think people whose intuitions say "cult" are correct to a small degree... LW does seem a tad insular, groupthink-ish, and cultish to me, though it's still one of my favorite websites. And FWIW, I would prefer that people who think LW seems cultish help us improve (by contributing intelligent dissent and exposing us to novel outside thinking) instead of writing us off.
(The most charitable interpretation of the flaws I see in LW is that they are characteristics that trade off against some other things we value. E.g. if we downvoted sloppy criticism of LW canon less, that would mean we'd get more criticism of LW canon, both sloppy and high-quality... not clear whether this would be good or not, though I'm leaning towards it being good. A less charitable interpretation is that the length of the sequences produces some kind of hazing effect. Personally, I haven't finished the sequences, don't intend to, think they're needlessly verbose, and would like to see them compressed.)
I've recently been subject to sloppy criticism of "weird ideas" (e.g. transhumanism) and the sloppy criticism is always the same. At this point I'd look forward to high-quality criticism, but I'm not willing to suffer again and again through the sloppy parts for it.
If people want to provide high-quality criticism, they should be rewarded for it (in this case, with upvotes and polite conversation). Sloppy criticism remains low-quality content and should not be rewarded.
Makes sense. I still think the bar should be a bit lower for criticism, for a couple reasons.
Motivated reasoning means that we'll look harder for flaws in a critical piece, all else equal. So our estimation of post quality is biased.
Good disagreement is more valuable than good agreement, because it's more likely to cause valuable updates. But the person writing a post can only give a rough estimate of its quality before posting it (Dunning-Kruger effect, unknown unknowns, etc.). Intuitively, their subconscious will make some kind of "expected social reward" calculation that looks like

expected_social_reward = P(post is good) * social_reward_for_good_criticism - P(post is sloppy) * social_punishment_for_sloppy_criticism

Because of human tendencies, `social_punishment_for_sloppy_criticism` is going to be higher than the `social_punishment_for_sloppy_agreement` parameter in the corresponding equation for agreement. If `social_punishment_for_sloppy_criticism` is decreased, then, on the margin, that will increase the expected value of this calculation, which means that more quality criticism will get through and be posted. LW users will infer these penalties by observing voting behavior on the posts they see, so it makes sense to go a bit easy on sloppy critical posts from a counterfactual perspective. Different users will interpret social reward/punishment differently, with some much more risk-averse than others. My guess is that the most common mechanism by which low expected social reward will manifest itself is procrastination on writing the post... I wouldn't be surprised if there are a number of high-quality critical pieces about LW that haven't been written yet because their writer is procrastinating due to an ugh field around possible rejection.

(I know intelligent people will disagree with me on this, so I thought I'd make my reasoning a bit more formal/explicit to give them something to attack.)
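As an illustration, the expected-reward calculation can be sketched numerically (all probabilities and payoff magnitudes below are made-up numbers, chosen only to show the direction of the effect):

```python
def expected_social_reward(p_good, reward_good, punishment_sloppy):
    """Expected payoff of posting: chance the post is good times the reward,
    minus chance it's sloppy times the social punishment."""
    return p_good * reward_good - (1 - p_good) * punishment_sloppy

# Made-up numbers: a would-be critic thinks their post is 60% likely to be good.
harsh = expected_social_reward(0.6, 10, punishment_sloppy=20)    # -2.0
lenient = expected_social_reward(0.6, 10, punishment_sloppy=5)   # 4.0
# Lowering the punishment for sloppy criticism flips the expected value
# from negative to positive, so the marginal critic posts instead of staying silent.
```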
The sanitised LW feedback survey results are here: https://docs.google.com/spreadsheet/ccc?key=0Aq1YuBYXaqWNdDhQQmQ3emNEOEc0MUFtRmd0bV9ZYUE&usp=sharing
I'll be writing up an analysis of results, but that takes time.
Locations that received feedback:
(3) Washington, DC
(1) No local meetup
(*) means the feedback is from someone who hasn't attended because it's too far away, so seeing the specific response is probably not very helpful. (**) means the group name is written in the public results, so you can just search for it to find your feedback.
There were 78 responses, and four of them listed two or more cities, so these sum to 82.
If you organize one of these groups, and haven't already done so, please get in touch so I can send your feedback to you! (Or if you'd rather not receive it, it would be helpful if you could let me know that as well, so that I don't spend time trying to track you down.) I haven't yet sent anyone their feedback, and don't promise that I'll do it super quickly, but it will happen.
This just struck me: people always credit WWII as being the thing that got the US out of the great depression. We've all seen the graph (like the one at the top of this paper) where standard of living drops precipitously during the great depression then more than recovers during WWII.
How in the world did that work? Why is it that suddenly pouring huge resources out of the country into a massive utility-sink that didn't exist until the start of the war rapidly brought up the standard of living? This makes no sense to me.
The only plausible explanation I can think up is that they somehow borrowed from the future using the necessities of war as justification. I feel like that would involve a dip in the growth rate after WWII - and there is one, but it just dips back down to the trend-line not below like I would expect if they genuinely borrowed enough from the future to offset such a large downturn as the great depression. The only other thing seems to be externalities.
However this goes, this seems to be a huge argument in favor of big-government spending (if we get this much utility from the government building things that literally explode themselves without providing non-military utility, then in a time of peace, we should be able to get even more by having the government build things like high-tech infrastructure, places of beauty, peaceful scientific research, large-scale engineering projects, etc.). So should we be spending 20-40% of our GDP on peace-time government mega-projects? It's either that or this piece of common knowledge is wrong (and we all know how reliable common knowledge is!).
Or I'm wrong, of course. So what is it?
(Bonus question: why didn't WWI see a similar boost in living standards?)
I assumed it was because it motivated people into becoming much more productive.
It looks like this has been an unpopular suggestion, but I wouldn't discount motivation completely. A lot of early 20th century economists thought centrally planned economies were a great idea, based on the evidence of how productive various centrally planned war economies had been. Presumably there's some explanation for why central planning works better (or doesn't fail as badly) with war economies compared with peacetime economies, and I've always suspected that people's motivation to help the country in wartime was probably one of the factors.
It didn't. This is the argument in image form, and you can find similar ones for employment (basically, when you conscript people, unemployment goes down. Shocking!). There are lots of libertarian articles on the subject--this might be an alright introduction--but the basic argument is that standards of living dropped (that's what happens when food is rationed and metal is used for tanks instead of cars or household appliances) but the government spending on bombs and soldiers made the GDP numbers go up, and then the post-war boost in standards of living was mostly due to deferred spending.
Note: as the article implies, the above viewpoint is not representative of mainstream economic consensus.
What tgb stated above was factually incorrect--WWII did not increase living standards. While most economists credit WWII with kickstarting GDP growth and cutting unemployment, I don't know anyone who would actually argue that living standards rose during WWII.
Krugman doesn't quiiiite come out and say it, but he sure seems to want the reader to infer that living standards rose: http://krugman.blogs.nytimes.com/2011/08/15/oh-what-a-lovely-war/ And in that article, he quotes a passage from Rick Perry's book saying that the recovery happened because of WWII (due to forcing FDR to "unleash private enterprise", oddly).
So maybe no one actually makes that argument, but boy it's common for people (economists and politicians!) to imply it. (Look at the contortions Perry goes through to not have to refute it!) It's always nice to notice the confusion a cached thought should have made all along.
I think you're reading way too much into Krugman's argument. I don't read Krugman as trying to imply that living standards rose during WWII. He doesn't even mention living standards. When economists talk about ending a recession or ending a depression, they mean something technical. Krugman was just talking about increased production and lowered unemployment, etc.
Frankly it seems bizarre to me that anyone would believe that crashing consumer spending + mass shortages = better living standards. It is fair to say that people had a better attitude about their economic deprivation, since it had a patriotic purpose in serving the war effort.
One simple model which seems to fit the "WWII ending the depression" piece of data (and which might have some overlap with the truth) is that it's relatively difficult to put idle resources into use, and significantly easier to repurpose resources that have been in use for other uses.
During the depression, a bunch of people were unemployed, factories were not running, storefronts were empty, etc. According to this model, under those economic conditions there were significant barriers to taking those idle resources and putting them to productive use.
Then WWII came and forced the country to mobilize and put those resources to use (even if that use was just to make stuff which would be shipped off to Europe and the Pacific to be destroyed). Once the war was over, those resources which had been devoted to war could be repurposed (with relatively little friction) to uses with a much more positive effect on people's standard of living. So things became good according to meaningful metrics like living standards, not merely according to metrics like unemployment rate or total output which ignore the fact that building a tank to send to war isn't valuable in the same way as building a car for local consumers.
The glaring open question here is why there might be this asymmetry between putting idle resources to use and repurposing in-use resources. Which is closely related to the question of why recessions/depressions exist at all (as more than momentary blips): once a recession hits and bunch of people become unemployed (and other resources go idle), why doesn't the market immediately jump in to snap up those idle resources? This article gets into some of the attempts to answer those questions.
(Bonus answer: World War One did not happen during a depression, so mobilizing for war mostly involved repurposing resources which had served other uses in peacetime rather than bringing idle resources into use.)
I like that this explanation gives a good reason for why this kind of spending could only work to fix a depression or similar situation versus always inflating standards of living. Thanks.
Part of it is that deflation in the early 1930s meant that workers were overpaid relative to the value of goods they produced (wages being harder to cut than prices). That caused wasteful amounts of unemployment. WWII triggered inflation, and combined with wage controls caused wages to become low relative to goods, shifting the labor supply and demand to the opposite extreme.
The people who were employed pre-war presumably had their standard of living lowered in the war (after having it increased a good deal during the deflation).
I won't try to explain here why deflation and inflation happened when they did, or why wages are hard to cut (look for "sticky wages" for info about the latter).
I'm not sure how much it influenced the overall picture, but there was quite a brain drain to the US before and during WWII (mostly Jewish refugees) as well as after (Wernher von Braun and the like). Migrating away from the Nazi and Stalinist spheres of influence demonstrates intelligence, and the ability to enter the US despite the complex “national origins quota system” that went into effect in 1929 demonstrates persistence, affluence and/or marketable skills, so I estimate these immigrants gave a significant boost to the US economy.
Also: salt iodization in 1924. Possibly also widespread flour enrichment in the early 1940s due to both Army incentivization and the need for alternate nutrient sources during rationing.
The labor force of the 1930s was sapped by over-allocation in unproductive industries. Specifically, much of the labor share was occupied in the sitting around feeling depressed and wishing you had a job industry. Economic conditions improved as workers shifted out of that industry and into more productive ones, such as all of them.
ADB I'm not sure what your intended connotations are, but I'd guess I'd OC.
I'm surprised no one has explained this yet, but this is wrong according to standard economic theory as I understand it.
The point is WWII helped the economy because we were well under our production possibilities frontier during the depression. Peace-time mega projects would only be helpful under recessed/depressed conditions, and fortunately, we now can use monetary policy to produce similar effects.
Anyway, the argument you were making seems pretty common among people who don't follow economics debates, and in fact is one of the major policy recommendations of the oddball Lyndon LaRouche cult.
Do you know of a typical measure (or component) of living standard that would have been measured for the US across both the great depression and WW2? The standard story I have heard informally is that WWII efforts did actually increase standards of living. I'm not surprised to learn that that's false, but given the level of consensus in the group-think I've encountered, I'd be interested in seeing some hard numbers. Plus, I'm interested in seeing whether there was a drop in living standards.
A while ago I mentioned how I'd set up some regexes in my browser to alert me to certain suspicious words that might be indicative of weak points in arguments.
I still have this running. It didn't have the intended effect, but it is still slightly more useful than it is annoying. I keep on meaning to write a more sophisticated regex that can somehow distinguish the intended context of "rather" from unintended contexts. Natural language is annoying and irregular, etc., etc.
Just lately, I've been wondering if I could do this with more elaborate patterns of language. It's recently come to my attention that expressions of the form "in saying [X] (s)he is [Y]" are often indicative of sketchy value-judgement attribution. They're also very easy to capture with a regex. So it's gone into the list.
So, my question: what patterns of language are (a) indicative of sloppy thinking, weak arguments, etc., and (b) reliably captured by a regex?
(In the back of my mind, I am imagining some sort of sanity-equivalent of a spelling and grammar check that you can apply to something you've just written, or something you're about to read. This is probably one of those projects I will start and then abandon, but for the time being it's fun to think about.)
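A minimal sketch of such a checker (the pattern list is hypothetical, my own invention for illustration; the attribution pattern is the "in saying [X] (s)he is [Y]" one discussed above):

```python
import re

# Hypothetical starter patterns; each flags a construction that *may*
# signal sloppy argumentation, not a definite error.
SUSPECT_PATTERNS = [
    re.compile(r"\bin saying\b.*?\b(?:he|she|they)\s+(?:is|are)\b", re.IGNORECASE),
    re.compile(r"\bobviously\b", re.IGNORECASE),
    re.compile(r"\bit may be the case\b", re.IGNORECASE),
]

def flag_suspect_language(text):
    """Return the list of suspicious substrings found in `text`."""
    hits = []
    for pattern in SUSPECT_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

flag_suspect_language("In saying this, she is obviously conceding the point.")
# flags both the attribution pattern and "obviously"
```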
I had the notion a while ago to try to write a linter to aid in tasks beyond code correctness by automatically detecting the desired features in a plethora of objects. Kudos on actually doing it and in a not hare-brained fashion.
As a former Natural Language Processing researcher, the technology definitely exists. Using general vocabulary combined with many (semi-manually generated) regexes to figure out argumentative or weaselly sentences with decent accuracy should be doable. It could improve over time if you input exemplar sentences you came across.
Do you have a recommendation for a good language-agnostic text / reference resource on NLP?
ETA: my own background is a professional programmer with a reasonable (undergrad) background in statistics. I've dabbled with machine learning (I'm in the process of developing this as a skill set) and messed around with python's nltk. I'd like a broader conceptual overview of NLP.
I'd recommend this book for a general overview : http://nlp.stanford.edu/fsnlp/
However, tasks like parsing are unnecessary for many tasks. A simple classifier on a sparse vector of word counts can be quite effective as a starting point in classifying sentence/document content.
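A toy version of that starting point, in pure Python (the word weights here are invented for illustration; a real classifier would learn them from labeled example sentences):

```python
from collections import Counter
import re

# Hypothetical weasel-word weights (an assumption, not a trained model).
WEIGHTS = {"may": 1.0, "might": 1.0, "arguably": 1.5, "clearly": 2.0, "obviously": 2.0}

def word_counts(text):
    """Sparse bag-of-words vector, represented as a Counter."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def weasel_score(text):
    """Dot product of the sparse count vector with the weight vector."""
    counts = word_counts(text)
    return sum(WEIGHTS.get(word, 0.0) * n for word, n in counts.items())

weasel_score("Clearly this may obviously be the case.")  # scores 5.0
```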
"[...]may be the case[...]"
Sometimes this phrase is harmless, but sometimes it is part of an important enumeration of possible outcomes/counterarguments/whatever. If "the case" does not come with either a solid plan/argument or an explanation why it is unlikely or not important, then it is often there to make the author and/or the audience feel like all the bases have been covered. E.g.,
Apparently I don't forget ideas, they just move places in my consciousness.
In the first week of last September I mused about writing a handbook of rationality for myself, akin to how the ancient Stoics wrote handbooks for themselves. Nothing came of it; I simply forgot about it. The next week I mused about writing a book using LaTeX and git, since the git model allows many parallel versions of the book and there needs to be no canon for it to work (as opposed to a wiki), while still allowing collaboration. Now, there already is a book written with git, and writing a document with git is not a new idea at all.
Thinking about parallel legal systems or organisation forms with the explicit goal of copying the viable parts reminded me of using git to write source code. Indeed, there is no difference between writing down social rules and personal maxims with this principle, so I came to the obvious conclusion only a couple of hours ago: use git to write a handbook of rationality, and encourage other people to fork it and make their own edits, keeping the viable parts and rejecting the questionable stuff.
Actions speak louder than words though lack of knowledge and other commitments can be an impediment, so I made a repository with only just the hint of a structure. Please provide your content and your thoughts about this.
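The proposed workflow could look something like this (a sketch only; the repository location and file names are placeholders, and here a local demo repo stands in for a clone of the shared one):

```shell
# Stand-in for cloning (or forking) the shared handbook repository.
mkdir -p /tmp/handbook-demo && cd /tmp/handbook-demo
git init -q
echo "# A Handbook of Rationality" > handbook.md
git add handbook.md
git -c user.name=demo -c user.email=demo@example.com commit -qm "shared skeleton"

# Everyone keeps their own edits on a personal branch; viable parts can be
# merged back upstream, while questionable parts simply stay on the branch.
git checkout -q -b my-personal-version
echo "- Hold off on proposing solutions." >> handbook.md
git -c user.name=demo -c user.email=demo@example.com commit -qam "my first maxim"
git log --oneline
```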
I think this is a good idea, and I'm curious to see how it goes. I'll be watching, and as I complete some of my other writing duties I think this has a good chance of becoming one.
Something else that might be interesting: this comment and the idea it's a response to in the OP.
Thank you for your comment. I would be very happy to see you work on this too.
At the moment I am sadly swamped but this will pass in a week or two.
Edit: Now that I actually took the time to read the comment, I dump these first thoughts. Yes, most of the advice won't apply to any single person, but the idea is to have anyone edit their own version. What I expect to see is some kind of tome with the most useful (widely applicable or extremely effective) stuff in it, with explanations of it too, and a shorter version that everyone or their group creates for themselves.
Are there any math/stats/CS theory types out there who are interested in suggestions for new problems?
I am finding that my large scale lossless data compression work is generating some mathematical problems that I don't have time to solve in their full generality. I could write up the problem definition and post to LW if people are interested.
Try posting some problems in the open threads here. MathOverflow has also worked really well for me.
Sure, lay it on us. If nothing else, writing it up clearly should help you.
I buy a lot of berries, and I've heard conflicting opinions on the health risks of organic vs regular berries (and produce in general). My brief Google research seems to indicate that there's little additional risk, if any, from non-organic produce, but if anyone knows more about the subject, I'd appreciate some evidence.
Without citation: minimal "organic" labeling standards often aren't a very high or impressive barrier to clear.
A video of Daniel Dennett giving an excellent talk on free will at the Santa Fe Institute: https://www.youtube.com/watch?v=wGPIzSe5cAU It largely follows the general Less Wrong consensus, but dives into how this construction is useful in the punishment and moral agent contexts more than I've seen developed here.
I am thinking of doing an article digesting a handful of research papers by some researcher or on some theme that would be of interest to less-wrongers. Any suggestions for what papers/theme, and any suggestions on how to write this mini-survey?
Regarding networks: is there a colloquially accepted term for when one has a ton of descriptive words (furry, bread-sized, purrs when you pet them, claws, domesticated, hunts mice, etc.) but does not have the colloquially accepted term (cat) for the network? I have searched high and low, and the most I have found is "reverse definition search", but no actual term.
"Not having a word for it"? Or in the technical vocabulary of linguistics, the concept is not "lexicalised".
Not quite what you're looking for I think, but if someone is having that problem they might have anomic aphasia.
I've heard "anomia" and "being able to talk all around the idea of an [X] but not the word [X] itself".
Sounds kind of like the Tip of the Tongue Effect
That's a particular subcase of it, when you know that there's a word for that concept and you've heard it but you can't remember it. But other times it's more like “there should be a word for this”.
However, that's distinct from what gmzamz asked about: occasions when "you do not have the colloquially accepted term" for something.
I've posted this before but I want to make it more clear that I want feedback.
I want to build a better formalization of naturalized induction than Solomonoff's, one designed to be usable by space-, time-, and rate-limited agents, and interactive computation was a necessary first step. AIXI is by no means an ideal inductive agent.
Had a look at your link, but couldn't make sense of it. Consider writing a proper summary upfront.
This seems an ambitious task. Can you start with something simpler?
Sorry, my writing can get kind of dense.
It doesn't quite strike me as ambitious; I see a lot of room for improvement. As for starting with something simpler, that's what this essay was.
If you want people to read what you write, learn to write in a readable way.
Looked at your write-up again... Still no summary of what it is about. Something along the lines of (total BS follows, sorry, I have no clue what you are writing about, since the essay is unreadable as is): "This essay outlines the issues with AIXI built on Solomonoff induction and suggests a number of improvements, such as extending algorithmic calculus with interactive calculus. This extension removes hidden infinities inherent in the existing AIXI models and allows <some benefit>."
I'm in the process of writing summaries. I replied as soon as I read your response.
You are pretty much the first person to give me feedback on this. I do not have an accurate representation as to how opaque this is at all.
How's that? Every few lines, I give a summary of each subsection. I even double-spaced it, in case that was bothering you.
Lots of people argue that governments should provide all citizens with an unconditional basic income. One problem with this is that it would be very expensive. If the government gave each person, say, 30% of GDP per capita (not a very high standard of living), that would force it to raise 30% of GDP in taxes to cover the cost.
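The cost identity here is simple arithmetic (the dollar figures below are assumed, purely for illustration):

```python
# Assumed figures, for illustration only.
gdp_per_capita = 50_000        # dollars per person per year
population = 320_000_000

basic_income = 0.30 * gdp_per_capita          # $15,000 per person per year
total_cost = basic_income * population        # $4.8 trillion
share_of_gdp = total_cost / (gdp_per_capita * population)
# The payout equals 30% of GDP by construction, whatever the assumed figures.
print(share_of_gdp)
```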
On the other hand, means-tested benefits have disadvantages too. They are administratively costly. Receiving them is seen as shameful in many countries. Most importantly, it is hard to create a means-tested system that doesn't create perverse incentives for those on benefits, since when you start working, you both lose your benefits and start paying taxes under such a system. That may mean that net income is a very small proportion of gross income for certain groups, incentivizing them to stay unemployed.
One middle route I've been toying with is that the government could provide people with cheap goods and services. People who were satisfied with them could settle for them, whereas those who wanted something more fancy would have to pay out of their own pockets. The government would thus provide people with no-frills food - Soylent, perhaps - no-frills housing, etc, for free or for highly subsidized prices (it is important that they produce enough and/or set the prices so that demand doesn't outstrip supply, since otherwise you get queues - a perennial problem of subsidized goods and services).
Of course some well-off people might choose to consume these subsidized goods and services, and some poor people might not choose to do that. Still, it should in general be very redistributionary. The advantage over the basic income system is that it would be considerably cheaper, since these goods and services would only be used by a part of the population. The advantage over the means-tested system is that people will still be allowed to use these goods and services if their income goes up, so it doesn't create perverse incentives.
Another advantage of this system is that it could perhaps rein in rampant consumerism somewhat. Parts of the population will be habituated to smaller apartments and less fancy food. Those who want to distinguish themselves from the masses - who want to consume conspicuously - will also be affected, since they will have to spend less to stand out from the crowd.
I guess this system already exists to some extent - e.g. in many countries, the government does provide you with education and health care, but rich people opt for private health care and private education. So the idea isn't novel - my suggestion is just to take it a bit further.
If a government produces goods, the results tend to be low quality (education may be an exception in some places).
The cost of a guaranteed minimum income may not be quite as high as you think-- it would replace a lot of more complicated government support. Also, it might be possible to build in some social rewards for not taking it if you don't need it.
The government wouldn't have to produce the low-standard/cheap goods and services. They could be produced by private companies. My point is just that the government would subsidize them (possibly to the point where they become free).
A sharp divide between basic, subsidized, no-frills goods and services and other ones didn't work in the socialist German Democratic Republic (long story, reply if you need it). What does seem to work in various countries is different rates of value-added tax depending on the good or service - the greater the difference in taxation, the closer you get to the system you've described, but it is more gradual and can be fine-tuned. Maybe that could work for sales tax, too?
Nor did it in other Soviet block countries, e.g. People's Republic of Poland.
I'd be interested in hearing about this.
Start by googling "hard currency shop".
I'm no economist, but as a former citizen of that former country, this is what I could see.
There was a divide of basic goods and services and luxury ones. Basic ones would get subsidies and be sold pretty much at cost, luxury ones would get taxed extra to finance those subsidies.
The (practically entirely state-owned) industries that provided the basic type of goods and services were making very little profit and had no real incentive to improve their products, except to produce them more cheaply and in greater numbers. Nobody was doing comparison shopping on those, after all. (Products from imperialist countries were expected to be better in every way, but that would often be explained away by capitalist exploitation, not seen as evidence that homemade ones could be better.) So for example, the country's standard (and almost only) car did not see significant improvements for decades, although the manufacturer had many ideas for new models. The old model had been defined as sufficient, so improving it was considered wasteful and all such plans were rejected by the economic planners.
The basic goods were of course popular, and due to their low price, demand was frequently not met. People would chance upon a shop that happened to have gotten a shipment of something rare and stand in line for hours to buy as much of that thing as they would be permitted to buy, to trade later. In the case of the (Trabant) car, you could register to buy one at a seriously discounted price if you went via an ever-growing waiting list that, near the end, might have you wait for more than 15 years. Of course many who got a car this way sold it afterwards, and pocketed a premium the buyer paid for not waiting.
Arguably more importantly, money was a lot better at getting you basic goods than luxury ones. So people tended to use money mostly for basic goods and services, and would naturally compare a luxury buy's value with those. When you can buy a (luxury) color TV at ten times the price of a (basic) black-and-white TV, it feels like you'd pay nine basic TVs for adding color to the one you use. Empirically, people often simply saved their money and thus kept it out of circulation.
Housing was a mess, too. Any rent was decreed to be very small. So there was no profit in renting out apartments, which again created a shortage of supply. (Private land ownership was considered bourgeois and thus not subsidized.) It got so bad that many young couples decided to have a child as early as possible, because that would help their application to receive a flat of their own and move out from their parents'. And of course most buildings fell into disrepair - after all, there was no incentive to invest in providing higher quality for renters. This demonstrates again that making a basic good or service meant you'd always have demand, but that demand wouldn't benefit you much.
The production of luxury goods went better, partly because these were often exported for hard currency. The GDR had some industries that were fairly skilled at stealing capitalist innovations and producing products that incorporated them, for sale at fairly competitive prices. Artificially low prices and subsidies for certain goods and products ensured that most domestic consumption never benefited from that skill.
In 2002, total U.S. social welfare expenditure constituted over 35% of GDP.
I think that would be too high anyway. Since anyone who bothers to work can make more than that, and the reduction in labor supply would increase pay, and any money you save will last you longer, there's little reason to make it enough for people to be well off, as opposed to getting just enough to scrape by.
It's also worth noting that most people will get a significant portion of that money back. If you make below the mean income (which most people do, since it's positively skewed) you will end up getting all of it back.
It seems unfair to charge people the entire price to get slightly better goods. Thus, if you want to get slightly better goods, the government should still reimburse you for the price of the cheap goods. At this point, it's just unconditional basic income with the government selling cheap goods.
As a minor point, Soylent as it is now can't be considered no-frills food. If you buy it ready-made, it costs around $10 a day.
What you do then is in effect (if I understand you correctly) to give them a "food voucher" (and similarly a "housing voucher", etc.) worth a certain amount, which they would be able to spend as they saw fit (but only on food/housing, what-not). Such a system doesn't seem very clever (as you imply): in that case, it would be better to just give people money in the form of an unconditional basic income.
I'm not sure why it would be so unfair not to reimburse people who want more expensive goods, though. Of course, the government does to a certain extent discriminate in favour of those with more frugal preferences in this set-up. But one of my points is precisely that we want people to develop more frugal tastes - to spend less on, e.g., housing and food. There is a "conspicuous consumption" arms race going on concerning these and many other goods, which this system is intended to mitigate to some extent.
Different people have different needs. Some people would be happy in cheap housing and others wouldn't - maybe they're more sensitive to sounds, environmental conditions or whatever else is the difference is between cheap housing and more expensive housing.
The point is, there's no basic standard that would satisfy everyone (unless that's a reasonably high standard, which isn't what is proposed here). Some people would consider more expensive goods and services NEEDS rather than luxuries, and for good reason - consuming cheaper alternatives might not kill them, but it would make them depressed, less healthy and less productive (for example).
So it is unfair to subsidize certain goods and services and not others - one might wonder "why is my neighbor getting her needs met for cheap, while I have to pay full price to meet my needs?"
If it costs $1.00 to make the basic food, and $1.10 to make slightly better food, and someone is willing to pay the difference, shouldn't they get the slightly better food?
Maybe it's not a big deal that nobody will eat anything that costs between $1.00 and $2.00. That's not a lot of deadweight cost. It's only around a dollar a person. But this will apply to everything you're paying for, which we have established is significant. If it costs $300 a month for cheap housing, and you virtually eliminate any housing that costs less than $600 a month, that is a lot of deadweight cost.
"Those who want to distinguish themselves from the masses - who want to consume conspiciously - will also be affected, since they will have to spend less to stand out from the crowd" - maybe I've misunderstood this, but surely it would have the opposite result? Let's say rents are ~$20/sqm (adjust for your own city; the principle stays the same). If I want my apartment to be 50 sqm rather than 40 sqm, that's an extra $200. But if 40 sqm apartments were free, the price difference would be the full $1000/month price of the bigger apartment. You've still got a cliff, just like in the means-tested welfare case; it's just that now it's on the consumption side.
In practice this would probably destroy the market for mid-priced goods - who wants to pay $1000/month just for an extra 10 square meters? Non-subsidized goods will only start being attractive when they get much better than the stuff the government provides, not just slightly better.
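The cliff can be made concrete with a toy calculation (the prices below are invented, loosely following the apartment example above):

```python
# Sketch of the consumption cliff with a free subsidized baseline.
# All prices are invented for illustration.

def upgrade_cost(price_basic, price_better, baseline_is_free):
    """Out-of-pocket cost of choosing the better good over the basic one."""
    if baseline_is_free:
        # The basic good costs the consumer nothing, so upgrading
        # means paying the better good's full market price.
        return price_better
    return price_better - price_basic

# Market pricing: the bigger flat costs $200/month more than the smaller one.
print(upgrade_cost(800, 1000, baseline_is_free=False))  # 200
# Free baseline: the same upgrade now costs the full $1000/month.
print(upgrade_cost(800, 1000, baseline_is_free=True))   # 1000
```

The jump from a $200 marginal price to a $1000 marginal price is exactly the cliff being described: nobody rational buys a good only slightly better than the free baseline.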
Also, if you give out goods rather than money, you're going to have to provide a huge range of different goods/services, because otherwise there will be whole categories of products that people who legitimately can't work (elderly, disabled etc) won't have access to. And if you do that, the efficiency of your economy is going to go way down - not just because the government is generally less efficient than the free market, but also because people can't use money to allocate resources according to their own preferences.
Yes, that's what it's like (only the cliff is actually usually less steep under means-tested welfare). And you're also right about this:
To clarify, I should say that my idea was that these subsidized or free goods and services would be so frugal that they would in effect not be an option to the majority of the population. Hence, it's not exactly the market for mid-priced goods, but the market for "low-priced but not extremely low-priced goods" that would get destroyed.
To your main point: since some people go down in standard, because by doing so they can get significantly cheaper goods, the average standard will go down. Now say that to get the average standard before this reform you had to pay 1000 dollars a month, but after the reform you just have to pay 900 dollars a month (because the average standard is now lower). Then those who want a higher-than-average standard will only have to pay more than 900 dollars rather than more than 1000.
The actual story might be more complicated than this - e.g., what some people really might be interested in is having a higher standard than the mean, or than the first eight deciles, or what-not. But generally it seems intuitive to me that if parts of the population lower their standards, then those who want to consume conspicuously will also lower their standards.
I don't see this as a comprehensive system: rather, you would just use it for some important goods and services: food, housing, education, health, public transport (in fact, the system is already used for the latter three; possibly housing too, though most subsidized housing is means-tested, which it wouldn't be under this system). The system would be too complicated otherwise. Possibly it could be combined with a low UBI.
The universal basic income schemes that seem the most reasonable to me adjust the taxation so that, while the UBI itself is never taxed, if you make a lot of money then your non-UBI earnings get an extra tax so that the whole reform ends up having very little direct effect on you. In effect, that ends up covering the "only used by a part of the population" criteria. The perverse incentives can't be avoided entirely, but they can be mitigated somewhat if the tax system is set up so that you're always better off working than not working.
For a concrete example, there's e.g. this 2007 proposal by the Finnish Green party. Your working wage (in euros per month) is on the X-axis, your total income after is on the Y-axis. Light green is the basic income, dark green is your after-tax wage, red is paid in tax. According to their calculations, this scheme would have been approximately cost neutral (compared to what the Finnish state normally gets in tax income and pays out in welfare).
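A rough sketch of how such a clawback scheme avoids a benefit cliff (the basic-income level and tax rate here are invented for illustration; they are not the Finnish Green party's actual figures):

```python
# Toy UBI-with-flat-tax-clawback model. The basic income level and
# the tax rate are invented, NOT taken from any real proposal.

UBI = 500        # per month, never taxed
TAX_RATE = 0.5   # flat tax on wage income

def net_income(wage):
    """Total monthly income: untaxed basic income plus after-tax wages."""
    return UBI + wage * (1 - TAX_RATE)

# Every extra unit of wage raises net income by (1 - TAX_RATE) > 0,
# so you are always better off working than not working: no benefit cliff.
for wage in (0, 1000, 2000):
    print(wage, net_income(wage))
```

For high earners, the flat tax eventually claws back more than the UBI paid out, which is how "the whole reform ends up having very little direct effect on you" if you make a lot of money.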
Thanks, that's interesting. 440 euro is not a lot, though - could you live in Helsinki on that (in 2007)? Is this supposed to replace for instance unemployment benefits (which I'm sure are much higher)? It so, this system would make some people who aren't that well off worse off.
One thing that is seldom noted is that the Scandinavian "general welfare states" are in effect halfway to the UBI. In Sweden, and I would guess the other Scandinavian countries as well, everyone gets a significant pension no matter what, child benefits are not means-tested, etc. Also, virtually everyone uses public schools, public health care, public universities and public child care (all of which are either heavily subsidized or free). So it's not a choice between an Anglo-Saxon system, where benefits mostly go to the poor, and a UBI system; there are other options.
440 euros is almost the same amount as direct student benefits were in 2007, though that's not taking into account the fact that most students also have access to subsidized housing which helps substantially. On the other hand, the proposed UBI model would have maintained as separate systems the current Finnish system of "housing benefits" (which pays a part of your rent if you're low-income, exact amount depending on the city so as to take into account varying price levels around the country) as well as "income support", which is supposed to be a last-resort aid that pays for your expenses if you can show that you have reasonable needs that you just can't meet in any other way. So we might be able to say that in total, the effective total support paid to someone on basic income would have been roughly comparable to that paid to a student in 2007.
Some students manage to live on that whereas some need to take part-time jobs to supplement it, which seems to be roughly the right level to aim for - doable if you're really frugal about your expenses, but low enough that it will still encourage you to find work regardless. Might need to increase child benefits a bit in order to ensure that it's doable even if you're having a family, though.
The Greens' proposed UBI would have replaced "all welfare's minimum benefits", so other benefits that currently pay out about the same amount. That would include student benefits and the very lowest level of unemployment benefit (which you AFAIK get if your former job paid you hardly anything, basically), but it wouldn't replace e.g. higher levels of unemployment benefits.
Thanks, that's interesting and comprehensive.
Housing benefits are an alternative to the idea discussed here; i.e. subsidizing particular low-cost, low-standard flats. However, the problem with housing benefits is that you tend to get more of them if you have higher rent, and thus you in effect reward people with more expensive tastes, which leads to a general increase of housing consumption. My proposal is intended to have the exact opposite consequence.
I'm not that averse to the UBI, but there is something counter-intuitive about the idea that rich people first pay taxes and then get benefits back. This forces you to either lower the level of basic income (or other government expenditure) or raise taxes. My suggestion is intended to take care of this without having to resort to means-testing.
This is a popular practice in the third world.
See e.g. this or this.
How is this better than Walmart and McDonald's?
Is there a way to get email notifications on receiving new messages or comments? I've looked under preferences, and I can't find that option.
I just realized you can model low time preference as a high degree of cooperation between instances of yourself across time, so that earlier instances of you sacrifice themselves to give later instances a higher payoff. By contrast, a high time preference consists of instances of you each trying to do whatever benefits them most at the time, later instances be damned.
That makes sense. Even cooperating across short time frames might be problematic - "I'll stay in bed for 10 more minutes, even if it means that me-in-10-minutes will be stressed out and might be late for work."
I prefer to see long-term thinking as increased integration among different time-selves rather than a sacrifice, though - it's not a sacrifice to take actions with a delayed payoff if your utility function puts a high weight on your future-selves' wellbeing.
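The defection between time-selves shows up clearly in the standard hyperbolic-discounting toy model from the picoeconomics literature (the discount parameter and rewards below are made up for illustration):

```python
# Hyperbolic discounting: a reward's present value is amount / (1 + k * delay).
# The parameter k and the reward amounts are invented for illustration.

K = 0.5  # impulsiveness: higher k means steeper discounting

def value(amount, delay):
    """Present value of a reward received after `delay` days."""
    return amount / (1 + K * delay)

# Choice: $50 at day 2 vs $100 at day 5.
# Viewed from day 0, the later, larger reward looks better...
assert value(100, 5) > value(50, 2)
# ...but once day 2 arrives, the small reward is immediate and wins:
assert value(50, 0) > value(100, 3)
# This preference reversal is the earlier self "defecting" on later selves.
```

Exponential discounting, by contrast, never produces such reversals, which is one way to formalize "integration among different time-selves".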
See https://www.google.com/search?q=picoeconomics
Tetlock thinks improved political forecasting is good. I haven't read his whole book, but maybe someone can help me cheat. Why is improved forecasting not zero-sum? Suppose the USA and Russia can both forecast better but have different interests. So what?
[Edit] My guess is that in areas of common interest, like economics, improved forecasting is good. But in foreign policy...?
I don't think Tetlock talks about that much.
Imagine a better forecast about whether invading Iraq reduces terrorism, or about whether Saddam would survive the invasion. Wouldn't both sides make wiser decisions?
So that's a good thought. I think you're saying that nations aren't coolly calculating rational actors but groups where foreign policy is often based on false claims.
I guess it really depends on where forecasting is deployed. It will increase the power of whoever has access. If accessible to George Bush, then George is more powerful. If accessible to the public, the public is. So my question depends (at least partly) on the kind of forecasting and who controls the resulting info
Also, this paper seems relevant.
Improved forecasting might mean that both sides do fewer stupid (negative sum) things.
For the simple reason that politics is not zero-sum, foreign policy included.
Cooperation is not zero-sum. Why does better forecasting lead to more cooperation?
I would guess that it does - but if somebody hasn't seriously addressed this, then I don't think I'm doing foreign policy questions in GJP Season 4.
Zero-sum means that the participants' actions cannot change the total payoff, either up or down.
A nuclear exchange between US and Russia would not be zero-sum, to give an example. Better forecasting might reduce its chance by lessening the opportunities for misunderstanding, e.g. when one side mistakenly thinks the other side is bluffing.
As to more cooperation, better forecasting implies better understanding of the other side which implies less uncertainty about consequences which implies more trust which implies more cooperation.
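The definition can be checked mechanically: a game is zero-sum exactly when the payoffs in every outcome sum to zero (or, more generally, to the same constant). A toy check with invented payoffs:

```python
# Two toy 2x2 games, keyed by (side A's move, side B's move) -> (A's payoff, B's payoff).
# All payoff numbers are invented for illustration.

matching_pennies = {  # the classic zero-sum game
    ("heads", "heads"): (1, -1), ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1), ("tails", "tails"): (1, -1),
}
standoff = {  # not zero-sum: mutual escalation hurts both sides
    ("talk", "talk"): (3, 3), ("talk", "escalate"): (0, 5),
    ("escalate", "talk"): (5, 0), ("escalate", "escalate"): (-10, -10),
}

def is_zero_sum(game):
    """True iff every outcome's payoffs sum to zero."""
    return {a + b for a, b in game.values()} == {0}

print(is_zero_sum(matching_pennies))  # True
print(is_zero_sum(standoff))          # False
```

The standoff game is the relevant one here: since (escalate, escalate) destroys value for both sides, anything that steers both players toward (talk, talk), such as better forecasts of each other's intentions, makes everyone better off.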
How about the governments of the US and Russia correctly forecast that more hostility means more profits for their cronies, and increase military spending?
Yes, and..?
If you want something that comes with ironclad guarantees that it leads to only goodness and light, go talk to Jesus. That's his domain.
That would still not be zero-sum. Which direction you think it is depends on your views.
A greatly simplified example: two countries are having a dispute and the tension keeps rising. They both believe that they can win against the other in a war, meaning neither side is willing to back down in the face of military threats. Improved forecasting would indicate who would be the likely winner in such a conflict, and thus the weaker side will preemptively back down.
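The simplified example above can be sketched as a toy expected-value model (all payoffs and probabilities are invented):

```python
# Each side goes to war only if its *believed* chance of winning makes
# the expected value of fighting exceed the cost of backing down.
# All numbers below are invented for illustration.

WIN, LOSE, BACK_DOWN, WAR_COST = 100, -100, -20, -30

def wants_war(believed_p_win):
    """True if this side prefers fighting to backing down."""
    ev_war = believed_p_win * WIN + (1 - believed_p_win) * LOSE + WAR_COST
    return ev_war > BACK_DOWN

# Bad forecasting: both sides believe they'd win 70% of the time -> war.
print(wants_war(0.7), wants_war(0.7))   # True True
# Good forecasting: both converge on the true 60/40 odds. The stronger
# side would still fight, but the weaker side now prefers to back down,
# so the war never happens.
print(wants_war(0.6), wants_war(0.4))   # True False
```

The point of the model: war here is caused by mutually inconsistent beliefs, so shared accurate forecasts remove the disagreement that makes both sides willing to fight.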
International politics is zero-sum once you've already reached the Pareto frontier and can only move along it, but if forecasting is sufficiently bad you might not even be close to the Pareto frontier.
Right. A lot of politics is not zero-sum. Reduced uncertainty and better information may enable compromises that before had seemed too risky. Forecasting could help identify which compromises would work and which wouldn't. Etc.
What's the best way to learn programming from a fundamentals-first perspective? I've taken / am taking a few introductory programming courses, but I keep feeling like I've got all sorts of gaps in my understanding of what's going on. The professors keep throwing out new ideas and functions and tools and terms without thoroughly explaining how and why it works like that. If someone has a question the approach is often, "so google it or look in the help file". But my preferred learning style is to go back to the basics and carefully work my way up so that I thoroughly understand what's going on at each step along the way.
This might be counter-intuitive and impractical for self-teaching, but for me it was an assembly language course that made it 'click' for how things work behind the scenes. It doesn't have to be much and you'll probably never use it again, but the concepts will help your broader understanding.
If you can be more specific about which parts baffle you, I might be able to recommend something more useful.
Nothing in particular baffles me. I can get through the material pretty fine. It's just that I prefer starting from a solid and thorough grasp of all the fundamentals and working on up from there, rather than jumping head-first into the middle of a subject and then working backwards to fill in any gaps as needed. I also prefer understanding why things work rather than just knowing that they do.
Which fundamentals do you have in mind? There are multiple levels of "fundamentals" and they fork, too.
For example, the "physical execution" fork will lead you to delving into assembly language and basic operations that processors perform. But the "computer science" fork will lead you into a very different direction, maybe to LISP's lambdas and ultimately to things like the Turing machine.
Whatever fundamentals are necessary to understand the things that I'm likely to come across while programming (I'm hoping to go into data science, if that makes a difference). I don't know enough to know which particular fundamentals are needed for this, so I guess that's actually part of the question.
Well, if you'll be going into data science, it's unlikely that you will care greatly about the particulars of the underlying hardware. This means the computer-science branch is more useful to you than the physical-execution one.
I am still not sure what kind of fundamentals you want. The issue is that the lowest abstraction level is trivially simple: you have memory, which can store and retrieve values (numbers, basically), and you have a processing unit which understands sequences of instructions for performing logical and mathematical operations on those values. That's it.
The interesting parts, and the ones from which understanding comes (IMHO) are somewhat higher in the abstraction hierarchy. They are often referred to as programming language paradigms.
The major paradigms are imperative (Fortran, C, Perl, etc.), functional (LISP), logical (Prolog), and object-oriented (Smalltalk, Ruby).
They are noticeably different in that writing non-trivial code in different paradigms requires you to... rearrange your mind in particular ways. The experience is often described as a *click*, an "oh, now it all makes sense" moment.
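For a small taste of two of those paradigms side by side, here is the same task written imperatively and then functionally, both in Python (a toy example of my own, not from any curriculum):

```python
# Task: sum the squares of the even numbers in a list.
nums = [1, 2, 3, 4, 5, 6]

# Imperative style: step-by-step mutation of an accumulator variable.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n

# Functional style: compose pure expressions, no mutation.
from functools import reduce
total_fn = reduce(lambda acc, n: acc + n * n,
                  (n for n in nums if n % 2 == 0), 0)

assert total == total_fn == 56
```

Same result, but the mental models differ: the imperative version describes *how* the machine's state changes over time, while the functional version describes *what* the answer is as a composition of expressions.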
I guess a good starting point might be: Where do I go to learn about each of the different paradigms? Again, I'd like to know the theory as well as the practice.
Google is your friend. You can start e.g. here or here.
Could you give a couple examples of specific things that you'd like to understand?
Without that, a classic that might match what you're interested in is Structure and Interpretation of Computer Programs. It starts as an introduction to general programming concepts and ends as an introduction to writing interpreters.
I've been having a bit of a hard time coming up with specifics, because it's more a general sense that I'm lacking a lot of the basics. Like the professor will say something and it'll obliquely reference a concept that he seems to expect I'm familiar with, but I have no idea what he's referring to. So then I look it up on Wikipedia and the article mentions 10 other basic-sounding concepts that I've never heard of either. Or for example when the programming assignment uses a function that I don't know how to use yet. So I do the obvious thing of googling for it or looking it up in the documentation. But the documentation is referencing numerous concepts that I have only a vague idea of what they mean, so that I often only get a hazy notion of what the function does.
After I made my original post I looked around for a while on sites like Quora. I also took a look at this reddit list. The general sense I got was that to learn programming properly you should go for a thorough computer science curriculum. Do you agree?
The suggestion was to look up university CS degree curricula and then look around for equivalent MOOCs / books / etc. to learn it on my own. So I looked up the curricula. But most of the universities I looked at said to start out with an introductory programming language course, which is what I was doing before anyway. I've taken intro courses in Python and R, and I ran into the problems I mentioned above. The MITx Python course that I took was better on this score, but still not as good as I would have hoped. There are loads of resources out there for learning either of those languages, but I don't know how to find which ones fit my learning style. Maybe I should just try out each until I find one that works for me?
The book you mentioned kept coming up as well. That book was created for MIT's Intro to CS course, but MIT itself has since replaced the original course with the Python course that I took (I took the course on edX, so probably it's a little dumbed-down, but my sense was that it's pretty similar to the regular course at MIT). On the other hand, looking at the book's table of contents it looks like the book covers several topics not covered in the class.
There were also several alternative books mentioned:
Any thoughts on which is the best choice to start off with?
[Link] why do people persist in believing things that just aren't true
The square brackets are greedy. What you want to do is this:
which looks like:
[Link]: Why do people persist in believing things that just aren't true?
Suppose you have the option, with every purchase you make, to divert a percentage (including 0 and 100) of the money to a GiveWell-endorsed charity that you're not personally affiliated with. Meaning, you still pay the same price, but the seller gets less/none, and the rest goes to charity. The seller has no right to complain. To what extent would you use this? Would it be different for different products, or sellers? Do you have any specific examples of where you would or wouldn't use it?
Also, assume you can start a company, and that the same thing applies to all purchases the company makes - would you do it? Any specific business?
Why would there be any sellers under this system?
It is just a thought experiment, not something that could realistically exist. Suppose the president/king/whoever gave you (and only you) this power, and while the sellers are furious, they can't do anything about it. They are not participating by choice.
This seems consequentially equivalent to "legal issues aside, is it ethical to steal from businesses in order to give to [EA-approved] charity, and if so, which ones?".
I suspect answering would shed more heat than light.
For fun, let's shift the emphasis. So, every time you make a contribution to an EA-approved charity, you can go and pick yourself a free gift of equal value from any seller, and the seller can't do anything about that, including complain. Is that OK? :-)
Great example. It's an isomorphic situation that paints the matter in a completely different light.
If you are asking me personally, I can see myself doing just that in some cases, though definitely not as a standard way of obtaining goods. The reason for the original question was to see what the rest of you think of the matter.
I don't recall any past controversy offhand, but given that business in general and many specific categories of business in particular are highly politicized, I suspect the answers you'd get would be more revealing of your respondents' politics (read: boring) than of the underlying ethics. For the same reason I'd expect it to be more contentious than average once we start getting into details.
There are also PR issues with thought experiments that could be construed as advocating crime, although that's more an issue with my reframing than with your original question. There's no actual policy, though; there is policy against advocating violence, but this doesn't qualify.
It does happen to an extent.
You can buy a movie, or you can pirate it and donate the price of the movie.
That was actually the original topic of a conversation that inspired this question.
I see no reason to send my money anywhere other than to the most needy person. I'd divert 100%.
Well-meaning, rationalized theft is still an assault on the seller.
I notice that I have a hard time getting myself to make decisions when there are tradeoffs to be made. I think this is because it's really emotionally painful for me to face actually choosing to accept one or another of the flaws. When I face making such a decision, often, the "next thing I know" I'm procrastinating or working on other things, but specifically I'm avoiding thinking about making the decision. Sometimes I do this when, objectively, I'd probably be better off rolling a die and getting on with one of the choices, but I can't get myself to do that either. If it's relevant, I'm bad at planning generally. Any suggestions?
If you're not familiar with the ideas read "The Paradox of Choice" by Barry Schwartz or watch a talk about it.
Other ideas:
Give yourself a very short deadline for most decisions (most decisions are trivial), e.g. "I will make this decision in the next two minutes and then I will stick with it." For long-term life decisions, maybe not so much.
Flip a coin. This is a good way to expose your gut feelings. A pros-and-cons approach to weighing the options lets you weigh lots of factors; flipping a coin instead produces immediate reactions (in my experience): "Shoot, I really wish I had the other option" (good information), "I don't feel too strongly about the outcome" (good information), or "I'm content with this flip" (good information).
Spend some time deciding whether decisiveness is a virtue. Dwell on it until you've convinced yourself that decisiveness is good, and have come to terms with the fact that you are not decisive. Around here it may be tempting to label decisiveness as rash, or as not worth the work of changing, and to rationalize your behavior; if so, return to step one and reaffirm that you think it is good to be decisive. Now step outside your comfort zone and practice being decisive: practice at the restaurant, at work, doing chores. Set reminders to practice - set your desktop or phone background to "Be Decisive" in plain text (or whatever suits your esthetic tastes). Pick a role model who takes decisive action. After following these steps, you will have practiced making decisions and following through on them, and you will have decided that making a choice without dwelling on it is a virtue. Now you can update your image of yourself as a decisive person. From there it should be self-sustaining.
Bad news, guys - we're probably all charismatic psychotics; from "The Breivik case and what psychiatrists can learn from it", Melle 2013:
It's a good thing Breivik didn't bring up cryonics.
Second Livestock
I feel there are many possible Lesswrong punchlines in response to this.
Scott Aaronson isn't convinced by Giulio Tononi's integrated information theory for consciousness.
Where is somewhere to go for decent discussion on the internet? I'm tired of how intellectually mediocre reddit is, but this place is kind of dead.
Also looking for LW replacement, with no current success.
This question occasionally comes up on #lesswrong, too, especially given the perceived decline in the quality of LW discussions in the last year or so. There are various stackoverflow-based sites for quality discussions of very specific topics, but I am not aware of anything more general. Various subreddits unfortunately tend to be swarmed by inanity.
So LW but bigger? I think you are out of luck there.
Check out metafilter.
Its survival is in doubt. In particular, "The site is currently and has been for several months operating at a significant loss. If nothing were to change, MeFi would defaulting on bills and hitting bankruptcy by mid-summer."
Slate Star Codex comments have smart people and a significant overlap with LW, but the interface isn't great (comment threading stops after it gets to a certain level of depth, etc). Alternatively, it may help to be more selective on reddit - no default subreddits, for example.