If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
What's a fun job for someone with strong technical skills?
I just graduated with a PhD in pure math (algebraic topology). I've done 50 Project Euler problems, and I know Java and Python, although I've never coded anything that anyone else uses. I'm looking for work and making a list of nonacademic job titles that involve solving interesting problems, and I would appreciate suggestions. So far I'm looking at:
http://sub.garrytan.com/its-not-the-morphine-its-the-size-of-the-cage-rat-park-experiment-upturns-conventional-wisdom-about-addiction is an article about a shift in perspective on how rats behave when given access to a morphine drip.
Basic concept: When given a larger cage with more space and potential things and other rats to interact with, rats are much less likely to only use a morphine drip, as compared to when they are given a small standard lab cage.
Edit per NancyLebovitz: This is evidence that offers a different perspective on the experiments I had heard about, and it seemed worth sharing. It is not novel, though; apparently the study was done in the late 1970s and published in 1980. See the Wikipedia article: http://en.wikipedia.org/wiki/Rat_Park
I'm guessing you had this in mind already, but to clarify anyway, there's a pretty major availability bias since anything celebrities are involved in is much more likely to be reported on, leading to a proliferation of news stories about celebrities with addiction problems.
On the other hand though, celebrities are a lot more likely than most people to simply be given drugs for free, since drug dealers can make extra money if their customers are enticed by the prospect of being able to do drugs with celebrities. And of course that's aside from the fact that the drug dealers themselves can be enticed by the star power and want to work their way into their circles.
I wonder why addiction is common among celebrities; they aren't living in a deprived environment.
I'm not so sure that's true. Being scrutinised 24/7 sounds like one hell of a constraint on my possible actions to me.
Oops. Upon review, I fell victim to a classic blunder. "Someone shared something on Facebook that I have not heard of before? It must be novel. I should share it with other people because I was unaware of it and it caused me to update my worldview."
Thanks. I'll edit the original post to reflect this.
Following up on a post I made last month, I've put up A Non-Technical Introduction to AI Risk, collecting the most engaging and accessible very short introductions to the dangers of intelligence explosion I've seen. I've written up a few new paragraphs to better situate the links, and removed meta information that might make it unsuitable for distribution outside LW. Suggestions for further improvements are welcome!
Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does? If the former, what kinds of stuff do you have on your list?
Does the average LW user actually maintain a list of probabilities for their beliefs?
Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does?
It isn't really possible, since in many cases it isn't even computable, let alone feasible for currently existing human brains. Approximations are the best we can do, but I still consider it the best available epistemological framework, for reasons similar to those given by Jaynes.
If the former, what kinds of stuff do you have on your list?
Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just some gold standard that no-one here actually does?
People's brains can barely manage to multiply three-digit numbers together, so no human can do "Bayesian probabilistic reasoning". So for humans it's at best "the latter, while using various practical tips to approximate the benefits of the former" (e.g. being willing to express your certainty in a belief numerically when such a number is asked of you in a discussion).
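For concreteness, here is a minimal sketch of the kind of single explicit update that humans can only approximate; the numbers are made up purely for illustration:

```python
# One explicit Bayesian update (illustrative numbers only).
prior = 0.30                 # P(hypothesis) before seeing the evidence
p_e_if_true = 0.80           # P(evidence | hypothesis)
p_e_if_false = 0.20          # P(evidence | not hypothesis)

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
posterior = (p_e_if_true * prior) / (
    p_e_if_true * prior + p_e_if_false * (1 - prior))
print(round(posterior, 3))   # 0.632
```

Doing this explicitly for every belief, with honest numbers and correlated pieces of evidence, is the part that's intractable; picking a number when asked is the cheap approximation.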
So I found this research a while ago saying, essentially, that willpower is only limited if you believe it is - subjects who believed their willpower was abundant were able to power through tasks without an extra glucose boost.
I was excited because this seemed different from the views I saw on LessWrong, and I thought, based on what I'd seen people posting and commenting, that this might warrant a big update for some people here. Without searching the site, I posted about it, and then was embarrassed to find out that it had been posted here a couple of years earlier...
What puzzles me, though, is that people here still seem to talk about ego depletion as if it's the only model of "willpower" there is. Is it that not everyone has seen that study, or is it that people don't take it seriously compared to the other research? I'm curious.
There's been a replication of that (I'm assuming you're talking about the 2010 paper by Job, Dweck and Walton). I haven't looked at it in detail. The abstract says that the original result was replicated, but you can still observe ego depletion in people who believe in unlimited willpower; you just have to give them a more exhausting task.
In this case, you might phrase it more as 'the asymptotics are the same, but believing in infinite willpower has a better constant factor'.
I recently made a big update in my model of how much influence one can have on one's longevity. I had thought that genetics accounted for the vast majority of the variance, but it turns out the real number is something like 20-30%. This necessitates more effort thinking about optimizing lifestyle factors. Does anyone know of a good attempt at a quantified analysis of how lifestyle factors affect lifespan? Most of the resources I find make vague qualitative claims; as such, it's hard to compare between different classes of risks.
Punch "genetics heritability longevity" into Google Scholar; the first hit says:
The heritability of longevity was estimated to be 0.26 for males and 0.23 for females.
Does this imply that the other 75% is due to life choices? This isn't obvious to me.
No, that is not what heritability means. The other 75% is the myriad of other influences: environment, chaotic chance, and life choices.
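For concreteness, a sketch of the standard additive decomposition that "heritability" refers to (the simplest model, ignoring gene-environment interactions):

$$\mathrm{Var}(P) = \mathrm{Var}(G) + \mathrm{Var}(E), \qquad h^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)}$$

A heritability of $h^2 \approx 0.25$ says that about a quarter of the population variance in lifespan tracks genetic differences; the remaining ~75% lumps together everything else, of which deliberate life choices are only one slice.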
Is there much value in doing psychological tests at some regular interval, to catch any mental problem in its early stages, even if one is not acutely aware of any problem?
Intellectual hygiene.
I am slowly coming to terms with the limits of my knowledge. Tertium non datur (the law of the excluded middle) is something I should not apply outside of formal systems; instead I should always think, "or I could be wrong in a way I do not realize yet." In all my beliefs I should explicitly plant the seed of their destruction: if this event occurs, I should stop believing this, or at least seriously doubt it.
A few years ago, in my introductory psych class in college, the instructor was running through possible explanations for consciousness. He got to Roger Penrose's theory that consciousness comes from quantum computations in the microtubules (replacing one black box with another, oh joy). I burst out laughing, loudly, because it was just so absurd that someone would seriously propose that, and that other scientists would even give such an explanation the time of day.
The instructor stopped midsentence, and looked at me. So did 200-odd other students.
I kept laughing.
In hindsight, I think the instructor expected more solemnity.
I'm honestly not embarrassed by this story because it's "smug and disrespectful"; I'm embarrassed because the more I stare at it, the more it looks like a LWy applause light (which I had not originally intended).
"Hey Scott," I said. The technician was a familiar face, since I used the booths twice each day.
"Hey David," he replied. "Chicago Six?"
"Yup."
I walked into the booth, a room of sorts resembling an extremely small elevator, and the doors shut behind me. There was a flash of light, and I stepped out of the booth again--only to find that I was still at Scott's station in San Francisco.
"Shucks," said Scott. "The link went down, so the system sent you back here. So just wait a moment... oh shit. Chicago got their copy of you right before the link went down, so now there's one of you in Chicago, too."
"Well, uh... two heads are better than one, I guess?" I said.
"Yeah, here's what we do in this situation," said Scott, ignoring me. "We don't want two copies of you running around, so generally we just destroy the unwanted copy."
"Yeah... I guess that sounds like the way to go," I said.
"So yeah, just get back in the booth and we'll destroy this copy of you."
I stepped back into the booth again, and the doors closed. There was a fla--
Meanwhile, I was still walking to my office in Chicago, unaware that anything unusual had happened.
So... it turns out some people actually do believe that there are fundamentally mental quantities not reducible to physics, and that these quantities explain the behaviour of living things. I confess I'm a bit surprised. I had the impression that everyone these days agreed that physics actually does describe the motion of all the atoms, including those in living brains. But no, believers in the ghost in the machine walk among us, and claim that the motions of living things cannot be predicted even in principle using physics. Something to bear in mind when ...
I'm mystified that you thought everyone in the world is a materialist-reductionist. What on earth would make you believe that?
The typical mind fallacy, obviously!
But no, what surprised me was that people would seriously assert that "physics does not apply", and then turn around and say "no law of physics is broken".
I thought this was interesting: perhaps the first use I've read of odds in a psychology paper. From Sprenger et al 2013:
...8.1. A Bayesian analysis of WM training effectiveness
To our knowledge, our study is the first to include a Bayesian analysis of working memory training, which we view as particularly well suited for evaluating its effectiveness. For example, we suspect that at least some of the existing studies reporting positive transfer of WM training will fail the Bayesian “sniff test.” Indeed, even for studies that have faithfully observed statistic
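For readers new to the odds formulation such Bayesian analyses rest on, the core identity is (a sketch, not the paper's actual numbers):

$$\underbrace{\frac{P(H_1 \mid D)}{P(H_0 \mid D)}}_{\text{posterior odds}} = \underbrace{\frac{P(D \mid H_1)}{P(D \mid H_0)}}_{\text{Bayes factor}} \times \underbrace{\frac{P(H_1)}{P(H_0)}}_{\text{prior odds}}$$

A Bayes factor near 1 means the data barely discriminate between "training transfers" and "training does not transfer", which is roughly the "sniff test" the quoted passage describes.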
Can blackmail-type information be usefully compared to things like NashX or Mutually Assured Destruction?
Most of my friends have information on me which I wouldn't want to get out, and vice versa. This means we can do favours for each other that pay off asynchronously, or trust each other with other things that seem less valuable than that information. Building a friendship seems to be based on gradually getting this information on each other, without either of us having significantly more on the other.
I don't think this is particularly original, but it seems a pretty elegant idea and might have some clues for blackmail resolution.
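A minimal toy model of the deterrence structure being described (all numbers hypothetical):

```python
# Toy model: mutual "information hostages" as MAD-style deterrence.
# Each friend can leak the other's secret for a one-off gain, but the
# victim can retaliate by leaking in return.

def leaking_pays(gain_from_leaking: float, cost_if_own_secret_leaks: float) -> bool:
    """Defection is profitable only if the gain beats the retaliation cost."""
    return gain_from_leaking > cost_if_own_secret_leaks

# Symmetric hostages: each secret hurts its owner more than anything the
# other side could gain by leaking it, so neither side defects.
print(leaking_pays(gain_from_leaking=10, cost_if_own_secret_leaks=50))  # False
```

The relationship stays stable as long as the hostages are roughly balanced, which matches the observation that neither party should hold significantly more than the other.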
If you want to do something, at least one of the following must be true:
If a task is complicated (1 is false), then it consists of many sub-tasks, all of which are possible points of failure. In order to succeed at every sub-task, either you must be abl...
Is there much known about how to recall information you've memorised at the right time / in the right context? I can memorise pieces of knowledge just fine with Anki, and if someone asks me a question about that piece of information I can tell them the answer no problem. However, recalling in the right situation that a piece of information exists and using it -- that I'm finding much more of a challenge. I've been trying to find information on instilling information in such a way as to recall it in the right context for the last few days, but none of the a...
I have sorted the 50 US states in such a way that the total Levenshtein distance between adjacent names is minimal:
Massachusetts, Mississippi, Missouri, Wisconsin, Washington, Michigan, Maryland, Pennsylvania, Rhode Island, Louisiana, Indiana, Montana, Kentucky, Connecticut, Minnesota, Tennessee, New Jersey, New Mexico, New Hampshire, New York, Delaware, Hawaii, Iowa, Utah, Idaho, Ohio, Maine, Wyoming, Vermont, Oregon, Arizona, Arkansas, Kansas, Texas, Nevada, Nebraska, Alaska, Alabama, Oklahoma, Illinois, California, Colorado, Florida, Georgia, Virginia, West Virginia, South ...
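For anyone who wants to reproduce something like this ordering, here is a rough sketch; a greedy nearest-neighbour pass, rather than whatever exact method the parent comment used (minimizing the total over a full ordering is a travelling-salesman-style problem):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def greedy_order(names):
    """Start at the first name, always hop to the closest unused name."""
    remaining = list(names)
    path = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda n: levenshtein(path[-1], n))
        remaining.remove(nxt)
        path.append(nxt)
    return path

print(greedy_order(["Massachusetts", "Mississippi", "Missouri", "Michigan"]))
```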
I hope people do not mind me creating these. I live in a timezone earlier than the American ones, and I do a periodical thread on another forum anyway, so I am in the zone.
Are there resources for someone who is considering running a free local rationality workshop? If not does anyone have any good ideas for things that could be done in a weekly hour-long workshop? I was surprised that there weren't any free resources from CFAR for exactly this.
If anyone got that microeconomics vs. macroeconomics comic strip, feel free to explain... Possibly related: inefficient hot dogs.
I am still confused about aspects of the torture vs. specks problem. I'll grant for this comment that I would be willing to choose torture for 1 person for 50 years to avoid a dust speck in the eye of each of 3^^^3 people. Numerically, I'll just assign -3^^^3 utilons to the specks and -10^12 utilons to the torture. Where confusion sets in is when I consider the possibility of a third form of disutility between the two extremes, for example paper cuts.
Suppose that 1 paper cut is -100 utilons and 50 years of torture is -10^12 utilons so the expected utility in either case is...
I am interested in how, in the early stages of developing an AI, we might map our perception of the human world (language) to the AI’s view of the world (likely pure maths). There have been previous discussions such as AI ontology crises: an informal typology, but it has been said to be dangerous to attempt to map the entire world down to values.
If we use an Upper Ontology and expand it slightly (so as not to get too restrictive or potentially conflicting) for a Friendly AI's concepts, this would assist in giving a human view of the current state of the AI's perception of the world.
Are there any existing ontologies on machine intelligence, and is this something worth exploring now to test on paper?
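A toy sketch of what such a layer might look like (all class and concept names here are hypothetical illustrations, not any standard ontology):

```python
# Toy sketch: a minimal "upper ontology" layer that tags parts of an AI's
# internal state with human-level concepts, so a human can inspect a coarse
# view of what the system currently represents.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Concept:
    name: str
    parent: Optional["Concept"] = None

    def is_a(self, other: "Concept") -> bool:
        """Walk up the hierarchy: does this concept fall under `other`?"""
        node = self
        while node is not None:
            if node is other:
                return True
            node = node.parent
        return False

# A few top-level concepts, expanded slightly for a Friendly AI's purposes.
entity = Concept("Entity")
agent = Concept("Agent", parent=entity)
human = Concept("Human", parent=agent)

print(human.is_a(entity))  # True: "Human" is grounded under the root concept
```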