What is the point of having separated Open Threads and Stupid Questions threads, instead of allowing "stupid questions" in OTs and making OTs more frequent?
The Joy of Bias
What do you feel when you discover that your reasoning is flawed? When you find your recurring mistakes? When you find that you have been doing something wrong for quite a long time?
Many people feel bad. For example, here is a quote from a recent article on LessWrong:
By depicting the self as always flawed, and portraying the aspiring rationalist's job as seeking to find the flaws, the virtue of perfectionism is framed negatively, and is bound to result in negative reinforcement. Finding a flaw feels bad, and in many people that creates ugh fields around actually doing that search, as reported by participants at the Meetup.
But actually, when you find a serious flaw of yours, you should usually jump for joy. Here's why.
which often costs an order of magnitude less.
At equilibrium, the price equals the marginal cost; sure, it is more than the average cost, but I can't see why the latter is relevant.
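The "price equals marginal cost" step can be spelled out in one line for a price-taking firm (a standard textbook sketch, not from the thread; q is output, C(q) is total cost):

```latex
\pi(q) = p\,q - C(q), \qquad
\frac{d\pi}{dq} = p - C'(q) = 0
\;\Longrightarrow\; p = C'(q) = MC(q)
```

Average cost AC(q) = C(q)/q only determines the sign of profit at the optimum (p > AC means positive profit); it doesn't enter the first-order condition at all, which is the point above.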
And the effort required to earn the money to buy the ring is also wasted.
No, it's not. You have produced (hopefully) valuable goods or services; why are they wasted, from the viewpoint of society?
Maybe not for that reason. But the opportunity cost of having kids, for example in terms of time and money, is pretty high. You could easily make an argument that those resources would be more effectively used for higher impact activities.
The "money as dead children" analogy might be particularly useful here, since we're comparing kids with kids.
Such cost calculations are wildly overestimated.
Suppose you buy a luxury item, like a gold ring with diamonds. You pay a lot of money, but your money isn't going to disappear; it is redistributed among traders, jewelers, miners, etc. The only thing that's lost is the total effort required to produce that ring, which often costs an order of magnitude less. And if the item you buy is actually useful, the wasted effort is even lower.
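To put hypothetical numbers on the "order of magnitude" claim (the figures below are invented purely for illustration):

```latex
\underbrace{\$10{,}000}_{\text{price paid}}
= \underbrace{\$1{,}000}_{\text{real resource cost: labor, energy, consumed materials}}
+ \underbrace{\$9{,}000}_{\text{transfers to traders, jewelers, miners, \dots}}
```

Only the first term on the right is a cost to society as a whole; the transfer term changes who holds the money, not how much of it exists.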
The cost of having kids is so high for you because you will likely raise well-educated, intelligent children, who are valuable assets to our society; they are likely net positive, after all. Needless to say, actually ensuring that those poor children in Africa end up that well, rather than, say, dying of starvation the next year, is going to cost you much more than $800. You pay for quality here.
Is it unethical to have children pre-Singularity, given the risk that they will die?
Well, everyone will likely die sooner or later, even post-Singularity (provided it happens at all, which is hardly a solid fact).
Anyway, I think that any moral system which declares every birth that has happened so far unethical is inadequate.
Yes, this this this this this this this. "The capacity of human minds is limited and I'll accept climbing up higher in abstraction levels at the price of forgetting how the lower-level gears turn." If I could upvote this multiple times, I would.
This is the crux of this entire approach. Learn the higher level, applied abstractions. And learn the very basic fundamentals. Forget learning how the lower-level gears turn: just learn the fundamental laws of physics. If you ever need to figure out a lower-level gear, you can just derive it from your knowledge of the fundamentals, combined with your big-picture knowledge of how that gear fits into the overall system.
That only works if there are few levels of abstraction; I doubt that you can derive how programs work at the machine-code level from your knowledge of physics and high-level programming. Sometimes the gears are so small that you can't even see them in your top-level big picture, and sometimes just climbing up one level of abstraction takes enormous effort if you don't know in advance how to do it.
I think that you should understand, at least once, how the system works on each level and refresh/deepen that knowledge when you need it.
On the other hand, if you don't have a solid grasp of linear algebra, your ability to do most types of machine learning is seriously impaired. You can learn techniques like e.g. matrix inversions as needed to implement the algorithms you're learning, but if you don't understand how those techniques work in their original context, they become very hard to debug or optimize. Similarly for e.g. cryptography and basic information theory.
That's probably more the exception than the rule, though; I sense that the point of most prerequisites in a traditional science curriculum is less to provide skills to build on and more to build habits of rigorous thinking.
Read what a matrix is, how to add, multiply, and invert matrices, what a determinant is, and what an eigenvector is, and that's enough to get you started. There are many algorithms in ML where vectors/matrices are used mostly as handy notation.
Yes, you will be unable to understand some parts of ML that substantially require linear algebra; yes, understanding ML without linear algebra is harder; yes, you need linear algebra for almost any kind of serious ML research -- but that doesn't mean you have to spend a few years studying arcane math before you can open an ML textbook.
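As a concrete illustration of how far "matrices as handy notation" gets you, here is a minimal sketch (my example, not from the thread; it assumes numpy): ordinary least squares fitted with nothing beyond transposition, matrix multiplication, and solving one linear system.

```python
import numpy as np

# Ordinary least squares via the normal equations:
#   beta = (X^T X)^{-1} X^T y
# Everything below uses only "matrices as notation": transpose,
# matrix multiplication, and solving one linear system.

rng = np.random.default_rng(0)

# Synthetic data: y = 2*x1 - 3*x2 + 1 + noise.
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + 1 + 0.1 * rng.normal(size=100)

# Append a column of ones so the model learns an intercept.
X1 = np.hstack([X, np.ones((100, 1))])

# np.linalg.solve(A, b) solves A @ beta = b; it is the numerically
# saner equivalent of multiplying by the explicit inverse.
beta = np.linalg.solve(X1.T @ X1, X1.T @ y)

print(beta)  # approximately [2, -3, 1]
```

The point isn't that this is good numerical practice for large problems (it isn't; real libraries use QR or SVD), but that the handful of operations listed above already gives you a working algorithm.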
Can I give a counterexample? I think that way of learning things might help if you only need to apply the higher-level skills as you learned them, but if you need to develop or research those fields yourself, I've found you really do need the background.
As in, I have been bitten on the ass by my own choice not to double-major in mathematics in undergrad, thus resulting in my having to start climbing the towers of continuous probability and statistics/ML, abstract algebra, logic, real analysis, category theory, and topology in and after my MSc.
You're right; you do have to learn a solid background for research. But still, it often makes sense to learn in the reverse order.
Can you unpack "approximation of Solomonoff induction"? Approximation in what sense?
Judging from the recent decline of LW, it seems that its initial success wasn't due to rationality, but rather to Eliezer's great writing. If we want LW to become a fun place again, we should probably focus on writing skills instead of rationality skills. Not everyone can be as good as Eliezer or Yvain, but there's probably a lot of low-hanging fruit. For example, we pretty much know what kind of fiction would appeal to an LWish audience (HPMOR, Worm, Homestuck...) and writing more of it seems like an easier task than writing fiction with mass-market appeal.
Does anyone else feel that it might be a promising direction for the community? Is there a more structured way to learn writing skills?
I have noticed that many people here want to resurrect LW for the sake of resurrecting LW.
But why do you want it in the first place?
Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.
After all, if you think that Eliezer's writing constitutes most of LW's value, and Eliezer doesn't write here anymore, maybe the wise decision is to let it decay.
Beware the lost purposes.