This is the very definition of the status quo bias.
I've made up words/phrases for 37 of them, and my notes-to-self are already peppered with them. Unfortunately I don't have the time right now to put them all here, but I probably will when my notes are more organized (like, when I've actually made up the words).
D'you think it'd be worth posting the 37 you've invented names for? My appetite's been whetted!
I'd be curious to see some of these as well!
I bookmarked it. I'll probably check it out soon.
I dare you to try it out in the next 6 hours, no excuses. :)
Learning about neurobiology. I've found that the more I know about how the brain works, the more cognitive science makes sense.
People assume memories are stored in one region of the brain. From the inside, it feels like all this knowledge is obviously coming from one place. Factual information about an elephant (weight, where it lives, etc.) is related to the mental image of an elephant (gray skin, big ears, a trunk), but brains store that information in completely different places.
Have you tried using the LessWrong Study Hall? They do pomodoros (25 minutes of work with 5 minutes break or 50 minutes work with 10 minutes break). YMMV, but I found that it helped motivate me, when I would otherwise be unmotivated. The five or ten minutes between pomodoros is fun, and while in a pomodoro, you are working with other people, so you have that sense of solidarity.
Have you tried your hand at drawing?
It is not quite the same skill, but being able to notice/see things as they are (closer to raw visual input), rather than letting your brain auto-label stuff, may help you retain images better. I also think it'd be interesting if you were to take a written scene from a book and try to draw it.
By the way, there is supposedly a fast way (~20 hours) to go from kindergarten-level to recognizably realistic drawing skills using some neat tricks; there was even a series of articles about it here on LW. (The other 10,000 hours go into the final touches of skill, but to the untrained eye the difference isn't as jarring as the no-training-to-some-training gap, at least in simple scenes.)
Drawing may improve visual memory (especially with things like drawing people's faces to help remember what they looked like), but I don't know if it will necessarily help someone develop a visual memory.
When I started using Pomodoros, I quickly got the sense that I had never before actually understood what it meant to focus. For example, I learned that I don't actually focus on the task at hand when I'm listening to music. When my "honeymoon period" ended, I had learned what focusing felt like, and learned to turn "focus" on and off without the need of the timer.
So it may just be that Pomodoros serve a transient purpose - they are a process you go through, not a tool you keep using. At least this is how it feels for me.
I can't focus with music on at all. I'm not sure whether that's common. I know plenty of people who watch TV or listen to music while working, and they're fine.
I really enjoyed the first part of the post -- just thinking about the fact that my future goals will be different from my present ones is a useful idea. I found the bit of hagiography about E.Y. at the end weird and not really on topic. You might just use a one- or two-sentence example: he wanted to build an A.I., and then later he didn't want to.
Not exactly. The core idea remains the same, but the method by which he's getting there, and the type of mind he wants to create, have changed.
Part of the problem is the many factors involved in political issues: people explain things through their own specialty, but lack knowledge of the others.
I should post separately about this at some point.
Suppose we have a Collective Judgment of Science system in which scientific karma enters the system at highly agreed-upon points, e.g. very well-replicated, significant findings. Is there a system with the following properties:
The karma entry points need not necessarily be the most trusted people. Let's say you made a significant discovery, but 70% of the field disagrees with most of your opinions, and someone who hasn't made a significant discovery is trusted by 95% of the people who make significant discoveries. We should perhaps believe the latter person over you; making one discovery is not proof of perfect epistemic reliability.
If someone goes rogue and endorses a thousand trolls, who in turn endorse a million trolls, the million trolls can do no more karmic damage / produce no more karmic distortion, than the original person.
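A minimal sketch of how that bound might be enforced, assuming a flow model in which each endorser splits a fixed fraction of their own karma among their endorsees, so endorsing more people never multiplies total influence (the function name, damping factor, and data shapes are all illustrative, not part of the proposal):

```python
from collections import defaultdict

def propagate_karma(seed_karma, endorsements, damping=0.5, rounds=10):
    """Spread karma along endorsement edges, splitting (never multiplying)
    each endorser's influence among their endorsees.

    seed_karma:   {user: karma entering at agreed-upon points}
    endorsements: {user: [users they endorse]}
    damping:      fraction of a user's karma passed on to endorsees
    """
    karma = defaultdict(float, seed_karma)
    for _ in range(rounds):
        new = defaultdict(float, seed_karma)
        for user, targets in endorsements.items():
            if not targets:
                continue
            share = damping * karma[user] / len(targets)  # the budget is
            for target in targets:                        # split, so 1000
                new[target] += share                      # trolls share it
        karma = new
    return dict(karma)
```

Under this scheme, a rogue with karma 10 who endorses a thousand trolls hands out at most `damping * 10` in total, and if those trolls endorse a million more, that same budget is merely split further, never amplified.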
If I make three significant discoveries or write three good papers, there is no incentive to spread those papers out over 3 pseudonyms, or coauthor them with 3 others, in terms of how much influence I will have afterward. There may potentially be some incentive to centralize, although this would also not be good.
Downvoting or strongly downvoting an idea that many reliable epistemic voters think is correct may potentially be taken as Bayesian evidence by the system that you sometimes downvote good ideas. It's probably worth distinguishing this from concluding that you sometimes upvote bad ideas, without separate evidence.
Rather than give people an incentive to waste labor by systematically downvoting everything that person X said, there is a centralized "I think this person is a complete idiot" button. After pressing this button, further systematic downvoting has no effect. Obviously the order of operations should not be significant here, i.e., this button must have as much effect as downvoting everything. Perhaps you might be asked to look at the person's 3 highest-karma nodes and asked if you really want to downvote those too (vs. an "I hate most but not all things you say" rating) given that indicating "I uniformly hate everything you say" may then potentially reflect poorly on your reliability.
Within these constraints, it should be generally true that one person who's gotten a large karma prize cannot outvote 100 people who were all endorsed by epistemically trusted voters whose karma originates from sources outweighing that single prize.
We're okay with this system using terabytes or even petabytes of memory to scale, so long as it's not exabytes and it can compute updates in real time, or at least less than an hour.
Being able to run on upvotes and downvotes is great; failing that, having people click on a 5-star level or a linear spectrum is about as much info as we should ask, since most users will not provide more than this on most occasions. We could potentially have a standard 5-star scale which by leaving the mouse present for 5 seconds can go to 6 stars, or a 7-star rating which can be given once per month, or something. We can't ask users to rate along 3 separate dimensions.
We should take into account that some people have pickier standards and downvote more easily or upvote more rarely than others; conversely, someone who endorses almost everything is only providing discriminatory Bayesian evidence about a threshold on the low end of the quality scale.
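One simple way to sketch such per-voter calibration is to z-score each voter's raw ratings against that voter's own mean and spread, so a picky voter's rare upvote counts for more and an easy grader's routine top score counts for less (the function and data shapes here are hypothetical):

```python
import statistics

def calibrate(votes):
    """Normalize each voter's raw scores to z-scores against that voter's
    own voting history.

    votes: {voter: {item: raw score}}
    returns {voter: {item: calibrated score}}
    """
    out = {}
    for voter, scores in votes.items():
        vals = list(scores.values())
        mean = statistics.fmean(vals)
        sd = statistics.pstdev(vals) or 1.0  # a voter who rates everything
        out[voter] = {item: (score - mean) / sd   # identically carries no
                      for item, score in scores.items()}  # discriminating signal
    return out
```

A voter who gives everything five stars ends up contributing zero calibrated signal, matching the observation that near-universal endorsement only evidences a very low quality threshold.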
We can suppose that nodes are clustered in a 3-level hierarchy by broadest area, subject, and subspecialization but probably shouldn't suppose any more clustering in the data than this. It's possible we shouldn't try to assess it at all.
A consequence of this system is that as a philosopher, you can potentially achieve great endorsement of your perspicacity, but only by convincing people who were upvoted by people who delivered well-replicated significant experimental results. This strikes me as a feature, not a bug. I don't know of any particularly better way to decide which philosophers are reliable.
It can potentially be possible to bet karma on predictions subject to definite settlement a la a prediction market, since this can only operate to increase reliability of the system. If an open question that people opinionated about is definitely settled, anyone who was bold in predicting a minority correct answer should have their karma in some way benefit. Again we do not want an incentive to create pseudonyms to get independent karma awards here. (We can perhaps imagine such a question-node as a single source which endorses everyone who endorsed its correct answer.)
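A minimal sketch of such a question-node settlement, assuming a fixed karma prize split among everyone who predicted the correct answer (names and shapes are illustrative): because the pool is fixed, registering three pseudonyms that each predict correctly earns no more than one account would, and a bold minority prediction automatically pays more because fewer people share the pool.

```python
def settle_question(predictions, correct_answer, prize):
    """Distribute a fixed karma prize among users who predicted the
    correct answer to a now-settled question.

    predictions: {user: predicted answer}
    prize:       total karma the question-node can award (fixed, so
                 sock-puppet predictors just dilute their own shares)
    """
    winners = [u for u, answer in predictions.items()
               if answer == correct_answer]
    if not winners:
        return {}
    share = prize / len(winners)  # minority-correct answers pay more,
    return {u: share for u in winners}  # since fewer winners split the pool
```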
Presentation ordering of new nodes takes into account a value-of-information calculation, not just the highest confidence in current karma. (Obviously, under such a calculation, more prolifically voting users will see more recent nodes. This is also fine.)
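One way to sketch such an ordering, assuming binary votes and using the variance of a Beta posterior over each node's upvote rate as a stand-in for a full value-of-information calculation (nodes with few votes are most uncertain, so a new vote on them is worth the most):

```python
def voi_order(nodes):
    """Order nodes for presentation by posterior uncertainty rather than
    by current karma alone.

    nodes: {node: (upvotes, downvotes)}
    returns node names, most informative to vote on first
    """
    def beta_variance(up, down):
        a, b = up + 1, down + 1          # uniform Beta(1, 1) prior
        n = a + b
        return (a * b) / (n * n * (n + 1))
    return sorted(nodes, key=lambda k: beta_variance(*nodes[k]),
                  reverse=True)
```

A brand-new node with no votes sorts ahead of a node with a hundred votes, even though the latter has far higher confident karma, which is the intended behavior.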
Is it possible to get enough people interested in this to do something with it (like a website)?
It seems like it would take a herculean effort to get enough scientists interested and willing to participate. But then again, there may be many more scientists disillusioned with the academic journal system than I think.