If the Bayesian Conspiracy ever happens, the underground area they meet in should be called the Bayesment.
There's an idea I've been kicking around lately, which is being into things.
Over the past couple of weeks I've been putting together a bug-out bag. This essentially involves the over-engineering of a general solution to an ambiguous set of problems that are unlikely to occur. On a strictly pragmatic basis, it is not worth as much of my time as I am spending to do this, but it is so much fun.
I'm deriving an extraordinary amount of recreational pleasure from doing more work than is necessary on this project, and that's fine. I acknowledge that up to a point I'm doing something useful and productive, and past that point I'm basically having fun.
I've noticed a failure mode in other similarly motivated projects and activities: not acknowledging this. I first noticed the parallel when thinking about Quantified Self, and how people who are into QS underestimate the obstacles and personal costs of what they're doing because they gain a recreational surplus from doing it.
I suspect, especially among productivity-minded people, there's a desire to ringfence the amount of effort one wants to expend on a project, and to justify all that effort as absolutely necessary, virtuous, and pragmatic. While I don't think there's anything wrong with putting a bit of extra effort into a project because you enjoy it, awareness of one's motivations is certainly something we want to have here.
Does any of this ring true for anyone else?
possible Akrasia hack: Random reminders during the day to do specific or semi-specific things.
Personally, I find myself able to get endlessly sucked into reading, the internet, or watching shows very easily, neglecting simple and swift tasks simply because no moment to do them ever occurs to me. Using an iPhone app, I get reminders at random times four times a day that say things like "brief chores" or "exercise", and these seem to have made it a lot easier to always have clean dishes/clothes and to get some exercise in every day.
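The scheduling part of this hack is easy to replicate without any particular app; here's a minimal Python sketch (the waking-window hours, task labels, and function name are my own placeholders, not anything from the app I use) that picks the day's random reminder times:

```python
import datetime
import random

def random_reminder_times(n=4, start_hour=9, end_hour=21, seed=None):
    """Pick n distinct random reminder times within a waking window."""
    rng = random.Random(seed)
    window = range(start_hour * 60, end_hour * 60)   # minutes since midnight
    minutes = sorted(rng.sample(window, n))          # n distinct minutes, in order
    return [datetime.time(m // 60, m % 60) for m in minutes]

# One day's schedule, pairing the random times with task prompts
tasks = ["brief chores", "exercise", "tidy up", "stretch"]
for t, task in zip(random_reminder_times(seed=42), tasks):
    print(f"{t:%H:%M} - {task}")
```

The unpredictability is the point: because you can't anticipate the prompt, you can't pre-emptively rationalize skipping it.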
Akrasia-related but not yet on lesswrong. Perhaps someone will incorporate these in the next akrasia round-up:
1) Fogg model of behavior. Fogg's methods beat akrasia because he avoids dealing with motivation. Like "execute by default", you simply make a habit by tacking some very easy task onto something you already do. There is a slideshare that explains his "tiny habits" and an online, guided walkthrough course. When I took the course, I did the actions each day, and usually more than those actions (i.e. every time I sat down, I plugged in my drawing tablet, which got me doing digital art basically automatically unless I could think of something much more important to do). For those who don't want to click through, here are example "tiny habits" which over time can become larger habits: "After I brush, I will floss one tooth." "After I start the dishwasher, I will read one sentence from a book." "After I walk in my door from work, I will get out my workout clothes." "After I sit down on the train, I will open my sketch notebook." "After I put my head on the pillow, I will think of one good thing from my day." "After I arrive ho...
To Really Learn, Quit Studying and Take a Test
Suppose that retrieval testing helps future retention more than concept diagrams or re-reading. I'll go further and suppose that it's the stress of trying to recall imperfectly remembered information (for grade, reward, competition, etc. - with some carrot-and-stick stuff going on) that really helps it take root. What conclusions might flow from that?
Coursera-style short quizzes on the 5 minutes of material just covered are useful to check understanding, but do next to nothing for retention.
Homework is useful, but the stress it creates may be only indirectly related to the material we want to retain: lots of homework is solved by meta-guessing, tinkering w/o understanding, etc. What kind of homework would be best to cause us to recall the material systematically under stress?
When watching a live or video lecture, it may be less useful to write detailed notes (in the hope that it'll help retention), and more useful to wait until the end of the lecture (or even a few hours/days more?) and then write a detailed summary in your own words, trying to make sure all salient points are covered, and explicitly testing yourself on that someh
Don't ever call them a cult (that is expensive). Don't edit their Wikipedia article (it will be quickly reverted). Don't sign anything (e.g. a promise to pay).
Bring some source of sugar (chocolate) and consume it regularly during the long lessons to restore your willpower and keep yourself alert.
Don't fall for the "if this is true, then my life is going to be awesome, therefore it must be true" fallacy. Don't mistake fictional evidence for real evidence. (Whatever you hear during the seminar, no matter from whom, is fictional evidence.)
After the seminar, write down your specific expectations for the next month, two months, three months. Keep the records. At the end, evaluate how many expectations were fulfilled and how many failed; and make no excuses.
Don't invite your friends during the seminar or within the first month. If you talk with them later, show them your specific documented evidence, not just the fictional evidence. (If you sell the hype to your friends, it will become a part of your identity and you will feel a need to defend it.)
Protect your silent voice of dissent during the seminar. If you hear something you disagree with, you are not in a position to voic...
My current anti-procrastination experiment: using trivial inconveniences for good. I have installed a very strong, permanent block on my laptop, and still allow myself to go on my favourite time wasters, but only on my tablet, which I carry with me as well.
The rationale is not to block all use and therefore be forced to mechanically learn workarounds, but to have a trivially inconvenient procrastination method always available. The interesting thing is that tablets are perfect for content consumption, so the separation works well. It also helps me to sep...
There's a chain of restaurants in London called Byron. Their comment cards invite your feedback with the phrase "I've been thinking..."
I go to one of these restaurants perhaps once every six weeks, and on each occasion I leave something like this. I've actually started to value it as an outlet for whatever's been rattling around my head at the time.
Further to the discussion of SF/F vs "earthfic", I would love to see someone write a "rationalist" fanfic of the Magic School Bus (...Explores Rationality). It doesn't look like the original set of stories had any forays into cog sci.
I've done some analysis of correlations over the last 399 days between the local weather & my self-rated mood/productivity. Might be interesting.
Wasn't there a LWer who some years ago posted about a similar data set? I think he found no correlations, but I wouldn't swear to it. I tried looking for it but I couldn't seem to find it anywhere.
(Also, if anyone knows how to do a power simulation for an ordinal logistic regression, please help me out; I spent several days trying and failing.)
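I don't know of a canned routine either, but a power simulation is usually just a simulate-test-count loop: generate data under your assumed effect, test it, and record how often you reject the null. Below is a hedged, stdlib-only Python sketch. It draws ordinal data from an assumed proportional-odds model; as a cheap stand-in for refitting the full ordinal logistic regression inside every iteration (which would need something like statsmodels' `OrderedModel` or R's `MASS::polr`), each simulated dataset is tested with a tie-corrected Mann-Whitney z-test. The effect size, cutpoints, and sample sizes are all placeholders:

```python
import math
import random
from collections import Counter

def ranks(values):
    """Midranks: ties share the average of the ranks they span."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_p(x, y):
    """Two-sided p-value via the normal approximation, with tie correction."""
    n1, n2 = len(x), len(y)
    pool = list(x) + list(y)
    big_n = n1 + n2
    u1 = sum(ranks(pool)[:n1]) - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    tie_term = sum(t ** 3 - t for t in Counter(pool).values())
    sigma2 = n1 * n2 / 12 * ((big_n + 1) - tie_term / (big_n * (big_n - 1)))
    if sigma2 <= 0:                    # everything tied: no evidence either way
        return 1.0
    z = (u1 - mu) / math.sqrt(sigma2)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def simulate_power(n_per_group=60, effect=1.5, cutpoints=(-1.0, 0.0, 1.0),
                   n_sims=200, alpha=0.05, seed=0):
    """Fraction of simulated datasets in which the null is rejected.
    `effect` is the assumed latent log-odds shift for the treated group."""
    rng = random.Random(seed)
    def draw(shift):
        u = max(rng.random(), 1e-12)               # avoid log(0)
        latent = math.log(u / (1 - u)) + shift     # logistic noise + group effect
        return sum(latent > c for c in cutpoints)  # ordinal level 0..len(cutpoints)
    hits = 0
    for _ in range(n_sims):
        control = [draw(0.0) for _ in range(n_per_group)]
        treated = [draw(effect) for _ in range(n_per_group)]
        hits += mann_whitney_p(control, treated) < alpha
    return hits / n_sims
```

Swapping the Mann-Whitney stand-in for an actual ordinal-regression fit inside the loop gives you the power of the regression itself, at the cost of speed; the rank test should give a rough lower bound under proportional odds.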
I just had a "how do you feel about me?" conversation via facebook. Some observations:
I've seen reasonably convincing evidence that alcohol in small doses can increase lifespan, and act as a short-term nootropic for certain types of thinking (particularly "creative" thinking). On the other hand, I've heard lots of references to drinking potentially causing long-term brain damage (Wikipedia seems to back this up), but I think that's mostly for much heavier drinking than what I had been doing based on the first two points (one glass of wine a day, 4-6 times a week). Does anyone know of any solid meta-analyses or summaries that would let me get a handle on the tradeoffs involved?
The AI Box experiment is an experiment to see if humans can be convinced to let out a potentially dangerous AGI through just a simple text terminal.
An assumption that is often made is that the AGI will need to convince the gatekeeper that it is friendly.
I want to question this assumption. What if the AGI decides that humanity needs to be destroyed, and furthermore manages to convince the gatekeeper of this? It seems to me that if the AGI reached this conclusion through a rational process, and the gatekeeper was also rational, then this would be an entirely...
So, I've done a couple of charity bike rides, and had a lot of fun doing them. I think this kind of event is nice because it's a social construct that ties together giving and exercise in a pretty effective way. So I'm wondering - would any others be interested in starting a LessWrong athletic event of some kind for charity?
I'm not suggesting that this is the most effective way to raise money for effective causes or get yourself to start exercising... but it might be pretty good (it is a good way to raise money from people who aren't otherwise interested i...
What are some effective interviewer techniques for a more efficient interview process?
A resume can tell you about the person's skill, experience, and, implicitly, their intelligence. The average interview process is, in my opinion, broken, because what I find happens a lot is that interviewers unmethodically "feel out" the person in a short amount of time. This is fine when searching for obvious red flags, but for something as important as long-term collaboration with someone you will likely see more of than your own family, we s...
Has this idea been considered before? The idea that an AI capable of self-improvement would choose not to self-improve because it wouldn't be rational to do so? And whether or not that calls into question the rationality of pursuing AI in the first place?
Does anyone know anything about, or have any web resources, for survey design? An organization I'm a member of is doing an internal survey of members to see how we can be more effective, and I've been tasked with designing the survey.
Total, abject failure. Mental illness, sometimes leading to suicide. Having the most talented of their peer group switch to something they are less likely to waste their whole life on with nothing to show, and the next most talented switch to something else because they are frustrated with the incompetence of the people who remain. Turning into cranks with a 24/7 vanity Google alert so that they can instantly show up to spam Time Cube-esque nonsense whenever someone makes the mistake of mentioning them by name. Mail bombs from anarcho-primitivist math PhDs.
Maybe being rational in social situations is the same kind of faux pas as remaining sober at a drinking party.
It occurred to me yesterday that maybe typical human irrationality is some kind of self-handicapping process which could still be a game-theoretically winning move in some situations... and that perhaps many rational people (certainly including me) lack the social skill to recognize it and act optimally.
The idea came to me when thinking about some smart-but-irrational people who make big money selling some products to irrational peop...
I have recently been thinking about meta-game psychology in competitions; more specifically, knowledge of an opponent's skill level and knowledge of an opponent's knowledge of your own skill level, and how this all affects outcomes. In other words, instead of being 'psyched out' by 'trash talk', is there any indication that you can be 'psyched out' by knowing how you rank against other players? Any links for more information would be appreciated.
Part of my routine is to play a few games of online chess every day. I noticed that whenever an opponent with ...
Anybody know of any good alternatives in Utilitarian philosophy to "Revealed Preference"? (That is, is there -any- mechanism in Utilitarian philosophy by which utility actually, y'know, gets assigned to outcomes?)
Family Fortunes Pub Quiz
On a Sunday night I take part in a pub quiz. It's based on a UK quiz show called Family Fortunes, which in turn is based on the US show Family Feud. To win you must answer all 5 questions correctly; the correct answer is whatever was the most popular answer in a survey of 100 people.
I'm curious to see if LessWrong does better than me.
We asked 100 people...
I'm pretty much a novice at decision theory, although I'm competent at game theory (and mechanism design), but some of the arguments used to motivate using UDT seem flawed. In particular the "you play prisoner's dilemma against a copy of yourself" example against CDT seems like its solution relies less on UDT than on the ability to self-modify.
It is true that if you are capable of self-modifying to UDT, you can solve the problem of defecting against yourself by doing so. However if you're capable of self-modifying, you're also capable of arbitrar...
Do we need a submission for Eliezer? :) http://www.quickmeme.com/Just-Want-To-Watch-The-World-Learn/?upcoming ("some men just want to watch the world learn" image macros)
Dilbert has been running FAI failure strips for the past two days - http://www.dilbert.com/2013-03-28/ http://www.dilbert.com/2013-03-29/ Of course, it only occurred because the robot was actively hacked to be disgruntled in an earlier strip... not exactly on point here. I'm watching to see where this goes.
In case this hasn't been posted recently or at all: if you want to calculate the number of upvotes and downvotes from the current comment/post karma and % positive seen by cursor hover, this is the formula:
# upvotes = karma * %positive / (2 * %positive - 100%)
# downvotes = # upvotes - karma
This only works for non-zero karma. Maybe someone wants to write a script and make a site or a browser extension where a comment link or a nick can be pasted for this calculation.
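For concreteness, here's the formula as a small Python helper (the function name is mine); rounding absorbs the imprecision of the displayed % positive:

```python
def vote_counts(karma, pct_positive):
    """Recover (# upvotes, # downvotes) from a comment's karma and its
    % positive, per the formula above. Valid only for non-zero karma
    (pct_positive == 50 would divide by zero)."""
    p = pct_positive / 100.0
    upvotes = round(karma * p / (2 * p - 1))
    return upvotes, upvotes - karma

# Example: karma 6 at 75% positive means 9 up, 3 down (9 - 3 = 6; 9/12 = 75%)
print(vote_counts(6, 75))  # → (9, 3)
```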
[To be deleted. Please excuse the noise.]
I've noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill. Is there some ev psych basis for this or is it just a personal quirk?
I have recently read The Dictator's Handbook. In it, the author suggests that democracies, companies, and dictatorships are not essentially different, and that their respective leaders follow the same laws of rulership. As a measure for more democratic behavior in publicly traded companies, they suggest a Facebook-like app to discuss company policy. Does anyone know of a company or organization that does this? It almost seems too good to be true.
Is the xkcd rock-placing man in any danger if he creates a UFAI? Apparently not, since he is, to quote Hawking, the one who "breathes fire into the equations". Is creating an AGI of use to him? Probably, if he has questions it can answer for him (by assumption, he just knows the basic rules of stone-laying, not everything there is to know). Can there be a similar way to protect actual humans from a potential rogue AI?
What is with LW people and theorems? The situation you've described is nowhere near formalized enough for there to be anything reasonable to say about it at the level of precision and formality that warrants a word like "theorem."
As it's been queried how many physicists, mathematicians, etc. currently believe what about QM, I thought this paper (no paywall, Yay!) might interest a few of you: A Snapshot of Foundational Attitudes Toward Quantum Mechanics
For example, question 12: Copenhagen 42%, Information 24%, Everett 18%
...Here, we present the results of a poll carried out among 33 participants of a conference on the foundations of quantum mechanics. The participants completed a questionnaire containing 16 multiple-choice questions probing opinions on quantum-foundational is
Would it be inappropriate to host a Less Wrong March Madness bracket pool?
Edit: Not going to do it.
So it seems something a bit like the Mary's Room experiment has actually been done in mice, and it appears to indicate that the mice behaved differently with a new colour receptor.
Another SMBC comic on the intelligence explosion. Don't forget the mouseover text of the red button.
I have a few Audible credits to use up before I cancel the service. Any recommendations?
Does CEV claim that all goals will eventually cohere such that the end results will actually be in every individual's best interest? Or does CEV just claim that it's a good compromise as being the closest we can get to satisfying everyone's desires?
If it's worth saying, but not worth its own post, even in Discussion, it goes here.