I've had a thought about a possible replacement for 'hyperbolic discounting' of future gains: What if, instead of discounting purely as a function of time, the discount used a metric based on how similar your future self is to your present self? As your future self develops different interests and goals, your present goals would tend to be less fulfilled the further your future self changed, and so the less invested you would be in helping that future iteration achieve its goals.
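To make the idea a bit more concrete, here is a minimal sketch of what I have in mind. It is purely illustrative: the trait vectors and the choice of cosine similarity are my own placeholder assumptions, not a worked-out proposal.

```python
# Illustrative sketch only: weight a future self's utility by how
# psychologically similar that self is to the present self, instead of
# by elapsed time alone. Trait vectors and cosine similarity are
# placeholder assumptions.

def similarity(traits_now, traits_future):
    """Cosine similarity between two non-negative trait vectors (in [0, 1])."""
    dot = sum(a * b for a, b in zip(traits_now, traits_future))
    norm = (sum(a * a for a in traits_now) ** 0.5 *
            sum(b * b for b in traits_future) ** 0.5)
    return dot / norm if norm else 0.0

def hyperbolic_weight(t, k=1.0):
    """Standard hyperbolic discount weight, for comparison."""
    return 1.0 / (1.0 + k * t)

# A future self whose interests have drifted gets weighted down,
# regardless of how soon it exists.
now = [1.0, 0.8, 0.1]           # made-up weights on current goals/interests
in_ten_years = [0.3, 0.2, 0.9]

print(hyperbolic_weight(10))          # discount by time elapsed
print(similarity(now, in_ten_years))  # discount by self-similarity
```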
Given a minimal level of identification with 'completely different people', this could even be extended to ems, who can make copies of themselves and edit those copies, to provide a more coherent account of which future selves to value more than others.
(I'm going to guess that Robin Hanson has already come up with this idea, and either worked out all its details or thoroughly debunked it, but I haven't come across any references to that. I wonder if I should start reading that draft /before/ I finish my current long-term project...)
Shane Frederick had the idea that hyperbolic discounting might be because people identify less with their future self. He actually wrote his dissertation on this topic, using Parfit's theory of personal identity (based on psychological continuity & connectedness). He did a few empirical studies to test it, but I think the results weren't all that consistent with his predictions and he moved on to other research topics.
There seem to be two broad categories of discussion topics on LessWrong: topics that are directly and obviously rationality-related (which seems to me to be an ever-shrinking category), and topics that have come to be incidentally associated with LessWrong because its founders, or its first or highest-status members, chose to use this website to promote them -- artificial intelligence and MIRI's mission along with it, effective altruism, transhumanism, cryonics, utilitarianism -- especially in the form of implausible but difficult dilemmas in utilitarian ethics or game theory, start-up culture and libertarianism, polyamory, ideas originating from Overcoming Bias which, apparently, "is not about" overcoming bias, NRx (a minor if disturbing concern)... I could even add California itself, as a great place to live.
As a person interested in rationality and little else of what this website has to offer, I would like a way to filter the cognitive-improvement discussions out from these other topics. Because unrelated but affiliated memes are given more importance here than related but unaffiliated ones, I have begun migrating to other websites for my daily dose ...
This is probably like walking into a crack den and asking the patrons how they deal with impulse control, but...
How do you tame your reading lists? Last year I bought more than twice as many books as I read, so I've put a moratorium on buying new books for the first six months of 2015 while I deplete the pile. Do any of you have some sort of rational scheme, incentive structure or social mechanism that mediates your reading or assists in selecting what to read next?
I've managed to partly transmute my "I want to buy that now" impulse into sending a sample to my Kindle. Then if I never get past the first few pages, I've not actually spent any money; if I reach the end of the sample and still want to continue, I know I'm likely to keep going.
Some people think that the universe is fine-tuned for life perhaps because there exists a huge number of universes with different laws of physics and only under a tiny set of these laws can sentient life exist. What if our universe is also fine-tuned for the Fermi paradox? Perhaps if you look at the set of laws of physics under which sentient life can exist, in a tiny subset of this set you will get a Fermi paradox because, say, some quirk in the laws of physics makes interstellar travel very hard or creates a trap that destroys all civilizations before they become spacefaring. If the natural course of events for sentient life in non-Fermi-tuned universes is for spacefaring civilizations to expand at nearly the speed of light as soon as they can, consuming all the resources in their path, then most civilizations at our stage of development might exist in Fermi-tuned universes.
I grew up thinking that the Big Bang was the beginning of it all. In 2013 and 2014 a good number of observations have thrown some of our basic assumptions about the theory into question. There were anomalies observed in the CMB, previously ignored, now confirmed by Planck:
...Another is an asymmetry in the average temperatures on opposite hemispheres of the sky. This runs counter to the prediction made by the standard model that the Universe should be broadly similar in any direction we look.
Furthermore, a cold spot extends over a patch of sky that is much larger than expected.
The asymmetry and the cold spot had already been hinted at with Planck’s predecessor, NASA’s WMAP mission, but were largely ignored because of lingering doubts about their cosmic origin.
“The fact that Planck has made such a significant detection of these anomalies erases any doubts about their reality; it can no longer be said that they are artefacts of the measurements. They are real and we have to look for a credible explanation,” says Paolo Natoli of the University of Ferrara, Italy.
... One way to explain the anomalies is to propose that the Universe is in fact not the same in all directions on a larger scale ...
Should we have some sort of re-run for the various repositories we have? I mean, there is the Repository repository, and it's great for looking things up if you know such a thing exists, but (i) not everyone knows it exists and, more importantly, (ii) while these repositories are great for looking things up, I feel that not much content gets added to them. For example, the last top-level comment in the boring advice repository was posted in March 2014.
Since there are 12 repositories linked in the meta repository as of today, I suggest we spend each month of 2015 re-running one of them.
I'm not certain what form these re-runs should take since, IMO, all content should be in one place and I'd like to avoid the trivial inconvenience of visitors having to click on the re-run post and then click one more time.
Should there be some sort of re-run of the 12 repositories during 2015, one per month? [pollid:808]
Which form should the re-run have, conditional on there being one? [pollid:809]
Crazy hypothesis:
If Omega runs a simulation of intelligent agents, presumably Omega is interested in finding out with sufficient accuracy what those agents would do if they were in the real situation. But once we assign a nonzero chance that we're being simulated, and incorporate that possibility into our decision theories, we've corrupted the experiment because we're metagaming: we're no longer behaving as if we were in the real situation. Once we suspect we're being simulated, we're no longer useful as a simulation, which might entail that every simulated civilization that develops simulation theories runs the risk of having its simulation shut down.
I suppose the best thing to do is to tell you to shut up now, right?
This (your hypothesis) appears wrong, however. Assuming the simulation is accurate, the fact that we can think about the simulation hypothesis means that whatever is being simulated would also think about it. If there's an accuracy deficiency, it's no more likely to manifest itself around the simulation-hypothesis than any other difference in accuracy.
Although that depends on how we come by the hypothesis. If we come by it the way our world did, with philosophers and other people making arguments without any evidence, then there's no special reason for us to diverge from what's being simulated; but if we had evidence (like the kind proposed in http://arxiv.org/abs/1210.1847 or similar proposals), then we would have reason to believe that we weren't an exact simulation. In that case, we'd also have evidence of the simulation and yet not have been shut down, so we'd know that your theory is wrong. OTOH, if you're correct, we shouldn't try to test the simulation hypothesis experimentally.
What are the marginal effects of donating with an employer gift match? The one I have has a per-employee cap and no overall cap, but presumably the utilization rate negatively influences the cap. How much credit should I be giving myself for the gifts I cause my employer to give?
If the notion of 'credit' is too poorly defined, suppose I were deciding between job A, which has a gift match, and job B, which has a higher salary, such that (my personal gift if I take job A) < (my total gift if I take job B) < (my total gift including match if I take job A)...
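To put made-up numbers on that ordering (the salaries, donation rate, and match cap below are all hypothetical):

```python
# Toy comparison of the two hypothetical jobs; every number is made up.
donation_rate = 0.10                    # fraction of salary donated either way

salary_a, match_cap_a = 90_000, 6_000   # job A: lower salary, 1:1 match up to a cap
salary_b = 100_000                      # job B: higher salary, no match

personal_gift_a = donation_rate * salary_a                          # 9,000
total_gift_a = personal_gift_a + min(personal_gift_a, match_cap_a)  # 15,000
total_gift_b = donation_rate * salary_b                             # 10,000

# 9,000 < 10,000 < 15,000 -- the ordering described above. The open
# question is how much of the 6,000 match I should credit to my choice
# of job A rather than to my employer.
print(personal_gift_a, total_gift_b, total_gift_a)
```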
I can code, as in I can do pretty much any calculation I want and have little problem with school assignments. However, I don't know how to get from here to making applications that don't look like they've been drawn in MS Paint. Does anyone know a good resource for getting from "I can write code that'll run on the command line" to "I can make nice-looking stuff my grandmother could use"?
Runaway Rationalism and how to escape it by Nydwracu
One of the better rationalist short posts of last year, unfortunately mostly read by non-rationalists so far. Many important concepts are packed in tightly and neatly explained, if the reader thinks carefully about them.
A favored quote.
...It is noteworthy that the scientific method, the most successful method for discovering reality, only arose once, a few hundred years ago, in an environment where the goddess of war and wisdom demanded it. It is also noteworthy that the goddess of war is the goddess of wisdom: wit
At one point there was a significant amount of discussion regarding Modafinil; this seems to have died down in the past year or so. I'm curious whether any significant updating has occurred since then (based either on research or personal experience).
The comic book Magnus Robot Fighter #10, published this month, mentions Roko's Basilisk by name and has an A.I. villain who named himself after Roko's Basilisk. The Basilisk is described as "the proposition that an all-powerful A.I. may retroactively punish those humans who did not actively help its creation... thus inspiring roboticists, subconsciously or unconsciously, to invent that A.I. as a matter of self-preservation". Which is not quite correct because of the "subconsciously", and doesn't mention simulation (although Magnus grew up in a simulation), but otherwise is roughly the right idea.
I'm not sure where an appropriate place to ask this is. Tell me if there's a place where this goes.
I'm coming to the Bay Area for the CFAR workshop from the 16th to the 19th. I have a way to get back home, but I think I might want to stay a few extra days in San Francisco. That screws up my travel arrangements, so I'm seeing if there's a workaround. Are there any aspiring rationalists (or rationalist sympathizers) in northern California who might want to drive with me down to Phoenix (AZ) between the 23rd and the 25th, more or less for the hell of it? I'm u...
Below, gjm was being a self-acknowledged pedant and I didn't like it at first and I pedanted right back at him and then I realized I enjoyed it and that pedantry is a terminal human value and that I wouldn't have it any other way and that I didn't really care that he was being a pedant anymore and that it was actually a weird accidental celebration of our humanity and that I probably won't care about future pedantry as long as it isn't harmful. This is an auspicious day.
I think a good principle for critical people - that is, people who put a lot of mental effort into criticism - to practice is that of even-handedness. This is the flip-side of steelmanning, and probably more natural to most. Instead of trying to see the good in ideas or people or systems that frankly don't have much good in them, seek to criticize the alternatives that you haven't put under your critical gaze.
Quotes like [the slight misquote] "Democracy is the worst form of government except for all the others that have been tried from time to time" ...
I'm vegetarian and currently ordering some dietary supplements to help, erm, supplement any possible deficits in my diet. For now, I'm getting B12, iron, and creatine. Two questions:
Is there a way to subscribe to the RSS feed for Less Wrong with MS Outlook? When I use the link on the sidebar, Outlook says the file name or directory is invalid.
I know a very intelligent, philosophically sophisticated (those are probably part of the problem) creationist. A full-blown, earth-is-6000-years-old creationist.
If he is willing to read some books with me, which ones should I suggest? Something that lays out the evidence in a way that a layman can understand, and conveys the sheer weight of that evidence.
I suspect that I'm gonna keep sharing quotes as I read Superintelligence over the next few weeks, in large part because Professor Bostrom has a better sense of humor than I thought he would when I saw him on YouTube.
I've known for a long time that intelligences with faster cognitive processes would experience a sort of time dilation, but I've never seen it described in such an evocative and amusing way:
...To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000×. If your fleshly friend should happen
Is there a Chrome extension or something that will adjust displayed prices of online merchants to take into account rewards benefits? For example, if my credit card has 1% cashback, the extension could reduce the displayed price to be 1% cheaper.
So I signed up for a password manager, and even got a complex password. But how do I remember the password? It's a random combination of upper- and lower-case letters plus numbers. I suppose I could use spaced repetition software to memorize it, but wouldn't that be insecure?
That comic makes a good argument against the kinds of alphanumeric passwords most people naively come up with to match password policies, but the randomized ones that a password manager will give you are far stronger. Assuming 6 bits of entropy per character (equivalent to a choice of 64 characters) and a good source of randomness, a random 8-character password is stronger than "correct horse battery staple" (48 bits of entropy vs. ~44), and 10 characters (for 60 bits of entropy) blows it out of the water.
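If you want to check the arithmetic yourself, those numbers fall straight out of a one-line formula; the 2048-word list below is an assumption chosen to reproduce the comic's ~44-bit figure.

```python
import math

def entropy_bits(alphabet_size, length):
    """Entropy of a password of `length` symbols drawn uniformly at
    random from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

print(entropy_bits(64, 8))     # 48.0 bits: random 8-char base64-style password
print(entropy_bits(64, 10))    # 60.0 bits: random 10-char password
print(entropy_bits(2048, 4))   # 44.0 bits: four words from a 2048-word list,
                               # roughly "correct horse battery staple"
```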
Of course, since you typically won't be able to remember eight base64 characters for each of the fifty sites you need a password for, that makes the security of the entire system depend on that of the password manager or wherever else you're storing your passwords. A mix of systems might work best in practice, and I'd recommend using two-factor authentication where it's offered on anything you really need secured.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.