If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Per a discussion on IRC, I am auctioning off my immortal soul to the highest bidder over the next week. (As an atheist I have no use for it, but it has a market value and so holding onto it is a foolish endowment effect.)
The current top bid is 1btc ($120) by John Wittle.
Details:
17twxmShN3p6rsAyYC6UsERfhT5XFs9fUG
(existing activity)
I am really disappointed in you, gwern. Why would you use an English auction when you can use an incentive-compatible one (a second price auction, for example)? You're making it needlessly harder for bidders to come up with valuations!
(But I guess maybe if you're just trying to drive up the price, this may be a good choice. Sneaky.)
Having read about auctions before, I am well-aware of the winner's curse and expect coordination to be hard on bidding for this unique item.
Bwa ha ha! Behold - the economics of the damned.
Sorry to ruin the fun, but I'm afraid this sale is impossible. Gwern lacks the proprietary rights to his own soul. As the apostle St Paul writes in his letter to the Corinthians (chapter 6), "Or know you not, that your members are the temple of the Holy Ghost, who is in you, whom you have from God; and you are not your own? For you are bought with a great price. Glorify and bear God in your body." It clearly states that "you are not your own", which at least applies to baptized Christians (and as a confirmed Catholic, it may even apply to a higher degree). Unless gwern provides some scriptural basis for this sale, it cannot proceed. Even when Satan tempted Christ, the only proffered exchange was worship in return for temporal power. There are no cases (even hypothetical ones) of a direct sale of one's soul in the Church's Tradition.
In exchange for ruining this sale, I'll pray for your soul for free.
The end of 1 Corin 6:19 does not say "you are not your own"; it literally says "and [it] is not your own" (= καὶ οὐκ ἐστε ἑαυτῶν)
You are wrong about this - here's the inflection of the word: http://en.wiktionary.org/wiki/%CE%B5%E1%BC%B0%CE%BC%CE%AF#Ancient_Greek
"ἐστε" is second person plural ("you are") NOT third person singular ("it is").
you still benefit infinitely by bargaining him down to an agreement in which he tortures you every day by amounts that converge to a finite total sum of torture, while still torturing you daily & thereby fulfilling the requirements of being in Hell.
A tactic that almost definitely should be referred to as "Gabriel's Horn."
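The arithmetic behind the tactic is just a convergent series; here is a toy sketch (the 1/n² daily schedule is my own illustrative choice, not anything specified in the thread):

```python
import math

def total_torture(days, daily=lambda n: 1.0 / n ** 2):
    """Running total of the first `days` daily torture amounts (toy units)."""
    return sum(daily(n) for n in range(1, days + 1))

# The victim is still tortured every single day (each term is positive),
# yet the running total can never exceed the finite bound pi^2 / 6.
assert all(1.0 / n ** 2 > 0 for n in range(1, 1000))
assert total_torture(10_000) < math.pi ** 2 / 6
```

Any summable positive sequence works the same way; the point is only that "tortured every day forever" is compatible with "finite total torture."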
Note that if you can get a high price from Satan on your own soul (e.g. rulership of a country), this is a no-lose arbitrage deal since souls are fungible goods.
My soul is here defined as my supernatural non-material essence as specified by Judeo-Christian philosophers, and not my computational pattern (over which I continue to claim copyright); transfer does not cover any souls of gwerns in alternate branches of the multiverse inasmuch as they have not consented.
What? This is lame. The definition of the soul as used by 16th century Catholic theology, which is friendly to information theory, is clearly the common sense interpretation and the one assumed among reasonable people. Sure, some moderns love the definition you use, but they are mostly believers in moralistic therapeutic deism; one hardly needs more evidence of their lack of theological expertise.
I certify that my soul is intact and has not been employed in any dark rituals such as manufacturing horcruxes; I am also a member in good standing of the Catholic Church, having received confirmation etc. Note that my soul is almost certainly damned inasmuch as I am an apostate and/or an atheist, which I understand to be mortal sins.
Not sure how much I can trust the word of one of the damned. After all, lying is no more of a mortal sin than apostasy, and for an atheist there is no extra divine punishment for lying.
One person who did this years ago spun the event into a book, a popular blog, and endless speaking gigs.
That's an interesting comparison, but I'm selling my soul, and it looks like he was just selling his time:
Mehta, an atheist, once held an unusual auction on eBay: the highest bidder could send Mehta to a church of his or her choice. The winner, who paid $504, asked Mehta to attend numerous churches, and this book comprises Mehta's responses to 15 worshipping communities, including such prominent megachurches as Houston's Second Baptist, Ted Haggard's New Life Church in Colorado Springs, Colo., and Willow Creek in suburban Chicago.
If you can find evidence that they are correct, you could have a fraud claim. However, the contract defines the soul being sold as that described by the Judeo-Christian philosophers.
I am confused. This Washington Post article appears to describe a preliminary study which suggests that politics is less of a mindkiller if you ask people to bet money on their beliefs.
And I am confused because my attempts to find the paper turned up two papers with entirely different abstracts, and indeed entirely different contents. Example:
Abstract 1:
"Our conclusion is that the apparent gulf in factual beliefs between members of different parties may be more illusory than real."
Abstract 2:
"Partisan gaps in correct responding are reduced only moderately when incentives are offered, which constitutes some of the strongest evidence to date that such patterns reflect sincere differences in factual beliefs."
http://huber.research.yale.edu/materials/39_paper.pdf
http://themonkeycage.org/wp-content/uploads/2011/04/bullockgerberhuber.pdf?343c0a
I realize the dates on the papers are different, but the shifts seem very dramatic. Thoughts?
I scraped the last few hundred pages of comments on Main and Discussion, and made a simple application for pulling the highest TF-IDF-scoring words for any given user.
I'll provide these values for the first ten respondents who want them. [Edit: that's ten]
EDIT: some meta-information - the corpus comprises 23.8 MB, and spans the past 400 comment pages on Main and Discussion (around six months and two and a half months respectively). The most prolific contributor is gwern with ~780kB. Eliezer clocks in at ~280kB.
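This is not the actual scraper, but the scoring step can be sketched with the standard library alone, treating each user's concatenated comments as one document (the tokenization and weighting details here are my own assumptions):

```python
import math
from collections import Counter

def top_tfidf(docs, user, k=5):
    """docs maps username -> list of word tokens (that user's concatenated
    comments). Return the k highest TF-IDF-scoring words for `user`,
    treating each user's corpus as one document."""
    n_docs = len(docs)
    df = Counter()                       # document frequency per word
    for words in docs.values():
        df.update(set(words))            # count each word once per user
    tf = Counter(docs[user])
    total = sum(tf.values())
    scores = {w: (c / total) * math.log(n_docs / df[w]) for w, c in tf.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Words used by every user get an IDF of log(1) = 0, so the top-scoring words are the ones most distinctive of that user.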
On BBC Radio 4 this morning I heard of a government initiative, "Books on Prescription". It's a list of self-help books drawn up by some committee as actually having evidence of usefulness, and which are to be made available in all public libraries. They give a list of evidence-based references.
General page for Books on Prescription.
The evidence, a list of scientific studies in the literature.
I have not read any of the books (which is why I'm not posting this in the Media Thread), but I notice from the titles that a lot of them are based on Cognitive Behavioural Techniques, which are generally well thought of on LessWrong.
The site also mentions a set of Mood-boosting Books, "uplifting novels, non-fiction and poetry". These are selected from recommendations made by the general public, so I would say they are, without my having read any of them, of lesser expected value. FWIW, here's the list for 2012 (of which, again, I have read none).
"They speak very well of you".
-"They speak very well of everybody."
"That so bad?"
-"Yes. It means you can't trust them."
Improving my social skills is going to be my number one priority for a while. I don't see this subject discussed too much on LW, which is strange because it's one of the biggest correlates with happiness and I think we could benefit a lot from a rational discussion in this area. So I was wondering if anyone has any ideas, musings, relevant links, recommendations, etc. that could be useful for this. Stuff that breaks from the traditional narrative of "just be nicer and more confident" is particularly appreciated. (Unless maybe that is all it takes.)
Optional background regarding my personal situation: I am a 19 yo male (as of tomorrow) who is going to enter college in the fall. I'm not atrociously socially inadept, e.g. I can carry on conversations, can be very bold and confident in short bursts sometimes, I have some friends, I've had girlfriends in the past. However, I also find it very hard to make close friends that I can hang out with one on one, I sometimes find myself feeling like I'm taking a very submissive role socially, and I feel nervous or "in my head" a lot in social interactions, among other things. Not to be melodramatic, but I find myself wishing a decent amount that I had more friends and was more popular.
Improving my social skills is going to be my number one priority for a while. I don't see this subject discussed too much on LW, which is strange because it's one of the biggest correlates with happiness and I think we could benefit a lot from a rational discussion in this area.
Discussion on LessWrong on that subject would most likely not be rational. Various forms of idealism result in mind-killed advice-giving, which most decidedly is not optimized for the benefit of the recipient.
Stuff that breaks from the traditional narrative of "just be nicer and more confident" is particularly appreciated. (Unless maybe that is all it takes.)
Get out of your house, go where the people are, and interact with them. Do this for 4 hours per day for a year (on top of whatever other incidental interactions your other activities entail). If "number one priority" was not hyperbole, that level of exertion is easily justifiable and nearly certain to produce dramatic results. (Obviously, supplementing this with a little theory and tweaking the environment chosen and the tactics used are potential optimisations. But the active practice part is the key.)
No, no, no, this was a bad explanation on my part. No one told me that dancing lessons are bad idea per se... only that my specific learning style is.
This is what works best for me: Show me the moves. Now show me those moves again very slowly, beat by beat. Show me separately what feet do; then what hands and head do. Tell me at which moment which leg supports the weight (I don't see it, and it is important). When and how exactly do I signal to my girl what is expected from her. (In some rare situations, to get it, I need to try her movements, too.) I still don't get it, but be patient with me. Let me repeat the first beat, and tell me what was wrong. Again, until it is right. Then the second beat. Etc. Then the whole thing together. Now let's do the same thing again, and again, and again, exactly the same way. Then something "clicks" in my head, and I get the move... and since that moment I can lead, improvise, talk during dance, whatever. -- As a beginner I was blessed with a partner who didn't run away screaming somewhere in the middle of this. Later my learning became faster, partially because I learned to ask the proper questions. And I had a good luck to dancing tea...
These days PUA refers to so many things that I need to be more specific. The sources that helped me were "The Mystery Method" by Mystery, "How To Become An Alpha Male" by Carlos Xuma, "Married Man Sex Life" by Athol Kay. I would also recommend "The Blueprint Decoded" by RSD.
Yes, there are many sources that only tell you "do this, do that, and if it does not work, just do it again". I guess this is what most customers want: "Don't bother me with explanations, just give me a quick fix!" This is how most people approach everything. Well, if there is a demand for something, the market will provide a product. And these days it is a huge business. Ten years ago, it was more like geeks experimenting and sharing their results and opinions... a bit similar to Quantified Self today, just less scientific, and sometimes more narrowly focused.
Overcoming aversion to rejection, doing many approaches to convert given rates of success into greater absolute numbers, doing something extraordinary to stand out of the crowd... those are the fixes. Applied incorrectly they could be even harmful. (Receiving a lot of rejection can make you more r...
I needed fewer than 13 bits of evidence: http://lesswrong.com/lw/fp5/2012_survey_results/
I likely committed some level of base-rate fallacy though (regardless of what the truth turns out to be). Trans* is more available to me because I hang out in queer communities, and know multiple transgender people.
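"Bits of evidence" here is presumably the usual log-likelihood-ratio measure; a minimal sketch:

```python
import math

def bits_of_evidence(likelihood_ratio):
    """Strength of evidence in bits: log2 of P(E|H) / P(E|not-H)."""
    return math.log2(likelihood_ratio)

# An observation 2^13 = 8192 times likelier under the hypothesis than
# under its negation constitutes exactly 13 bits of evidence.
assert bits_of_evidence(8192) == 13.0
```

On this measure, "fewer than 13 bits" means the survey's likelihood ratio needed to be less than 8192:1.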
Suppose the placebo effect exists: if you believe you will get better, then you will indeed recover.
B(h) -> h
Unfortunately, it only works if you believe you will get better - and it could be hard to see why it'd be rational to believe that in the first place. Fortunately, rationalists have a solution to this problem.
We're scientific sorts of people, so we believe in the placebo effect - that is:
B( B(h) -> h )
and we're also logical sorts of people, so we believe Löb's theorem
B( B(h) -> h ) -> B(h)
and hence we believe we'll be healed
B(h)
and hence, by the placebo effect,
h
And we're healed!
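The chain above can be written compactly in provability-logic notation, reading the box as the belief operator B (the joke, of course, turns on treating belief as if it satisfied the conditions Löb's theorem requires of formal provability):

```latex
\begin{align*}
&\Box(\Box h \rightarrow h)
  && \text{we believe in the placebo effect} \\
&\Box(\Box h \rightarrow h) \rightarrow \Box h
  && \text{L\"ob's theorem} \\
&\Box h
  && \text{modus ponens: we believe we will be healed} \\
&h
  && \text{by the placebo effect itself}
\end{align*}
```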
I have been thinking about this a lot recently: your voice has a big impact on your presentation, and it's one of those things that people generally take for granted. And it's not just your speech patterns and filler words that I'm referring to, but also intonation, fluency, and so on. I would maybe risk saying that it can be as important as your appearance, or even more so. (If you stumble every five or ten words, you can't really convey your ideas, can you?)
In this vein, is there a viable way for someone to improve his own voice? I have already thought about a voice-acting tutor, but I generally prefer ways in which I could improve without having to pay one.
A lifehack idea: using oxytocin to counteract ugh fields:
Ugh fields might be a form of an amygdala hijack.
Oxytocin is known to dampen the amygdala's 'fight, flight, or freeze' responses.
Oxytocin production is increased during bonding behaviors (e.g. parent-child, pets, snuggling / Karezza).
If 1, 2 and 3 are true, we could reduce the effect of an ugh field by petting a dog, hugging a baby or snuggling (but not orgasming) with a lover -- before confronting the task that induces the ugh field.
Disclaimer: I am not a brain scientist, so the terminology, logic and the entire idea may be wrong.
So on the topic of effective altruism, I've been thinking about the benefits of a "can't beat 'em? join 'em" type strategy for improving the world. Examples:
I'm wondering if CFAR ever tried to approach Rowling for a permission to get HPMoR monetized for charitable and transhumanist purposes, on whichever terms.
Probably safer to do that after HPMoR is finished. Otherwise there is a chance she would forward the letter to her lawyer, the lawyer would send a cease and desist letter to CFAR, and then what?
If the same thing happens after HPMoR is finished, it can be removed from web and shared among LW members in ways that give plausible deniability to CFAR. But you can't have plausible deniability while Eliezer continues to write new chapters.
It's a bit of a truism that you can't do micropayments to cover the true marginal cost of serving a webpage, adding a user to your service, or other Internet activities, because the gap between free and epsilon is psychologically larger than the gap between epsilon and a dollar. It occurs to me that this curious psychology seems to map onto a logarithmic utility in money: since log(x) diverges to negative infinity as x approaches zero, the difference between log(x) near zero and log(epsilon) is larger than the difference between log(epsilon) and log(1) for any positive value of epsilon. I'm not sure if this actually explains anything, but I thought it was kind of neat.
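A quick numerical check, under the assumption that the utility function really is u(x) = log(x) (the specific numbers are my own toy illustration):

```python
import math

eps = 1e-6  # "epsilon": a micropayment of a millionth of a dollar

# The utility gap between paying epsilon and paying a dollar is finite...
gap_eps_to_dollar = math.log(1.0) - math.log(eps)

# ...but the gap between "free" and epsilon grows without bound, since
# log(x) -> -inf as x -> 0+.  Any sufficiently tiny starting point
# already dwarfs the epsilon-to-dollar gap.
gap_free_to_eps = math.log(eps) - math.log(1e-300)
assert gap_free_to_eps > gap_eps_to_dollar
```

So under a log-utility model, "free versus epsilon" really is a bigger jump than "epsilon versus a dollar," matching the psychological observation.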
I reviewed A Guide to the Good Life: The Ancient Art of Stoic Joy at my blog ("Modern Stoicism – The Good, the Bad, and the Ugly"). It's a philosophy book that's focused on being actionable, not just a historical survey, but I think it too-casually brushes off some of the unpleasant side effects of stoicism.
...The desire to employ your Stoicism on a higher difficulty setting, coupled with the habit of seeing other people as obstacles, can make you care less about other people. You root for them to be worse than they are. I used to wish that a gi
RSS feeds for users' comments seem to be broken with the update to how they display on the page. To see how, just look at e.g. http://lesswrong.com/user/Yvain/overview/.rss . It contains a bunch of comments from people other than Yvain. This is pretty annoying, and I hope it's fixed soon. I'm subscribed to tens of users' comment feeds and it's the main way I read LW. Today all of those feeds got a bunch of spurious updates from the new other-people's-comments on everyone's comments page.
Also, some months back there was another change to userpages and it broke all...
The subset of people who are Anki users and members of the competitive conspiracy might be interested in the Anki high score list addon I wrote: Ankichallenge
This suggests that studies about partisan confusion about truth are overblown. I haven't had a chance to look at the actual paper yet, but the upshot is that while there is a lot of prior evidence that people are likely to state strong factual errors supporting their own partisan positions, such errors are substantially less likely to occur when people are told they will be given money for correct statements. The suggestion is that people know (at some level) that their answers are false and are saying them more as signaling than anything else.
Edit: clarify
Has anyone on LW compiled a list of books/subjects to read/learn that basically brings you through all the ideas discussed on LW?
The sequences are the obvious answer, but it's nice to go into subjects a little more in-depth, plus the sequences are somewhat frustrating to navigate (every article in the sequences has links to plenty of other articles, so it's hard to attack the sequences in linear fashion).
The most linear way to read Eliezer's Sequences is in chronological order by date of original posting, although it might not be the best way.
Mind you it will be a good approximation of the best way. His posting order was dominated by needing to explain requisite knowledge before explaining later concepts. Perhaps the most obvious optimisation when it comes to reading is just skipping the parts that aren't interesting.
I want to improve my exposition and writing skills, but whenever I think "what do I know that I can explain to people that isn't explained well elsewhere?" not much comes to mind. I think that happens because it is hard to just do a search of everything that I know. The main topics that I know are math and rationality (mostly LW epistemic rationality, but also a little instrumental and LW moral philosophy). So I ask:
What is a topic in math or rationality that you wish were explained better or explained at a different level (casual, technical, etc...
I want to improve my exposition and writing skills, but whenever I think "what do I know that I can explain to people that isn't explained well elsewhere?" not much comes to mind
If improving your skills is your main goal, you should just write, regardless of whether better explanations already exist elsewhere. Actually, such explanations already existing could even be an advantage, as it provides you with feedback: after writing your own, you can look up existing ones and compare what you did better and what you did worse.
There are people studying the memetics of transhumanism academically. I am writing my Masters thesis, so I can't read it. But maybe someone else wants to... (sorry, no easy link) http://www.tandfonline.com/doi/abs/10.1080/08949468.2013.754649#.UbhTvRUQ8gQ
I notice I am confused about Gödel's theorem, and I'm hoping there are enough mathematically minded folks here to unconfuse me. :-)
My recollection from my undergraduate days is that Gödel's theorem states that given any sufficiently powerful formal system (i.e. one powerful enough to encode Peano arithmetic), there are statements that can be made in that system that can neither be proven true nor proven false. I.e. the system is either incomplete or inconsistent, and incompleteness is generally what seems to happen.
Here's what confused me: I've noticed several recent...
A bit of a long shot but I am a recent Psychology (BSc) Graduate who currently lives in London and is looking for a job. Does anyone know of any positions in the rationality sector (anywhere) or any science/research/anything else like that around London (or not) that I can look into? Or any other general advice, recommendations etc.
Does anyone know of a way to convert .anki files to .apkg files?
I recently started using anki, but most of the decks I downloaded are .anki, and can't be opened by ankidroid...
Naive evo-psych seems to imply that having a big family should make me more attractive, for two reasons: 1) it's evidence that my genes cause many surviving kids, 2) more people will share resources to help my kids survive. But that doesn't seem to work in real life. Why?
"Real life" doesn't even remotely resemble the ancestral environment. In the modern world, a big family is evidence about your cultural background, especially the relationship between your cultural background and contraception, and that might be a turn-off for some. This is the same kind of phenomenon that makes having extra fat evidence, in the ancestral environment, that you were good at acquiring food and other resources, but in the modern world it's evidence that you're poor or lack access to good food or lack self-control or whatever.
MoR and munchkining fans may enjoy this application to Railgun: http://www.reddit.com/r/anime/comments/1fpome/just_a_fanart_of_railgun_characters/cacmewm
From If Many-Worlds had Come First:
...the thought experiment goes: 'Hey, suppose we have a radioactive particle that enters a superposition of decaying and not decaying. Then the particle interacts with a sensor, and the sensor goes into a superposition of going off and not going off. The sensor interacts with an explosive, that goes into a superposition of exploding and not exploding; which interacts with the cat, so the cat goes into a superposition of being alive and dead. Then a human looks at the cat,' and at this point Schrödinger stops, and goes,
Why, then, don't more people realize that many worlds is correct?
Note that you are using Eliezer!correct, not Physics!correct. The former is based on Bayesian reasoning among models with equivalent predictive power, the latter requires different predictive power to discriminate between theories. The problem with the former reasoning is that without experimental validation it is hard to agree on the priors and other assumptions going into the Bayesian calculation for MWI correctness. Additionally, proclaiming MWI "correct" is not instrumentally useful unless one can use it to advance physical knowledge.
'hey, maybe at that point half of the superposition just vanishes, at random, faster than light'
It's worse than that, actually. In some frames it means not just FTL but also back in time. But given that this is unmeasurable, it matters not in the slightest if you adopt the Physics!correct definition.
Today I learned that there exist electromagnetic waves in vacuum with electric and magnetic fields parallel to each other. Freaky...
I'd like to put out a call for anecdata, if I may:
Lately I've been wondering how much of a causal connection there is between happiness/fulfillment and willpower (or, conversely, akrasia) levels. I feel like I'm not especially fulfilled or happy in my life right now, and I can't help but feel intuitively that this is one cause of the difficulty I seem to have in focusing, concentrating, and putting effort into what I want to. However, I've no idea whether there's actually anything in this.
So: I guess I wondered if anyone has any personal accounts of (medi...
I think I've noticed that I'm more willing to read long texts written in small font sizes than in large ones, and in sans-serif than in serif font.
I might try again to read A Gentle Introduction to Unqualified Reservations, but in a small, sans-serif typeface this time, to test this.
Anyone have a good idea of where to park an "emergency fund" type account, and especially resources that talk about this? Most of my money is sitting in a checking account right now, which I have realized is not so good, but I want to keep most of it liquid (and the remainder might not be enough to start an index fund account with Vanguard).
You only need an emergency fund if you do not have access to credit at reasonable terms. Investments you don't touch outside of emergencies coupled with open lines of credit should outperform excessive "emergency" savings. After all, lines of credit are typically free when you don't use or need them, while not getting the best rate of return on your savings isn't.
EDIT: I was reminded of a relevant saying: "If you've never missed a flight, you're spending too much time in airports." Similarly, if you never have to borrow money for emergencies, your investments are too liquid.
It's been so long since I needed to use it that I've forgotten my Lesswrong password. Is there any password recovery function?
http://lesswrong.com/user/army1987/comments/ also shows the parents of comments now. Can I disable that? In my preferences there's an option for whether to show them on http://lesswrong.com/comments, which is unchecked.
Does anyone have anything to say or have any links regarding mortality salience, existentialism, or determinism as a source of motivation? Traditionally these are seen as a hindrance to motivation and may lead to fatalism and existential angst.
This previous post is the type of discussion I am looking for. Can confrontation of mortality and existential catharsis lead to motivation and hack akrasia?
I'm wondering if the following statement is true: The word "ought" means whatever we ought to believe that it means.
Now, certainly, that statement could be false. There could be a society whose code of ethics states that you must disagree with the code of ethics. But I'm asking whether or not it is false, for us actual humans. And it might be false if you take "we" to mean someone like Adolf Hitler: perhaps Hitler professed his actual beliefs about ethics, and people nowadays think Hitler was so horrible that if Hitler believed somethin...
I think this Noah Smith disquisition on "derp" might be a useful thing to refer people to when one gets tired of referring them to PITMK. It crystallizes for me why I find a lot of political commentary unbearable to read/listen to.
Of interest to folks close to Oxford only.
Max Tegmark will be giving a talk, "The future of life: a cosmic perspective", on June 10 at 12:30pm. The event is open to the public and free of charge, and will take place in the Martin Wood Lecture Theatre, Department of Physics, 20 Parks Road, Oxford OX1 3PU (Google maps). More details here.
For those who believe that the US is a democracy in the sense that public policy is an aggregate of public opinion, how do you deal with the fact that 42% of the US population don't know that Obamacare is actually law?
If the population doesn't even know about the easy facts, how do you expect a democracy in which public policy is driven by public discourse to work?
(Usual information hazard warning to attract attention. Warning before you compulsorily dedicate your attention: it's only the usual hazard. (Collective disappointed sigh))
Interesting smackd..., ah, discussion, between XiXiDu and Aris Katsaris. If the link doesn't work, it's the Google+ discussion also linked to from the top of this blog post.
I'd really like it if someone could explain to me what Aaronson is saying here:
...I've often heard the argument which says that not only is there no free will, but the very concept of free will is incoherent. Why? Because either our actions are determined by something, or else they're not determined by anything, in which case they're random. In neither case can we ascribe them to "free will."
For me, the glaring fallacy in the argument lies in the implication Not Determined ⇒ Random. If that was correct, then we couldn't have complexity classes lik
People who are nervous and unsure about learning a new skill are often advised to "fake it till you make it." That has never been very helpful for me; I think I concentrate on the "fake" part too much, which just makes me nervous that any moment the jig will be up.
Anyway, there's an older, simpler way to express the same idea, which works much better: I'm practicing. It's not fake, it's just practice. Thinking that way makes me want to get back to work, instead of worry about getting caught; after all, I'm not doing anything wrong.
Fairly banal once I write it down, but it's been helpful.
If I pay $1000 in rent, about how much does my landlord profit after accounting for taxes and the costs of property management?
I have a question. Assume that non-profit MIRI develops a fAGI (I like the acronym this way). They realise they can use the fAGI to generate profit. Taking for granted they only wish to make enough profit to sustain the institution free of donor-support, would they then be able to switch to a for-profit institution, despite having created the fAGI while non-profit?
Everyone has found a way around the question; I asked it poorly, so I'll be clearer: if a piece of technology is developed at an institution dependent upon donations from private individuals t...
I have a question. Assume that non-profit MIRI develops a fAGI (I like the acronym this way). They realise they can use the fAGI to generate profit. Taking for granted they only wish to make enough profit to sustain the institution free of donor-support, would they then be able to switch to a for-profit institution, despite having created the fAGI while non-profit?
If MIRI (or anyone else) creates an AGI that is friendly to them, they can do whatever they goddamn please.
I scraped the last few hundred pages of comments on Main and Discussion, and made a simple application for pulling the highest TF-IDF-scoring words for any given user.
I'll provide these values for the first ten respondents who want them. [Edit: that's ten]
EDIT: some meta-information - the corpus comprises 23.8 MB, and spans the past 400 comment pages on Main and Discussion (around six months and two and a half months respectively). The most prolific contributor is gwern with ~780kB. Eliezer clocks in at ~280kB.
If I'm counting the replies correctly, nine respondents requested them so far. I'd like my word values. Thank you!