If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Silk Road drugs market shut down; alleged operator busted.
Bitcoin drops from $125 to $90 in heavy trading.
Edited to add: Well, that was quick. Doesn't look like the bottom fell out.
Edited again: Here's the criminal complaint against the alleged operator. The details at least make sense as a story: in the early days of Silk Road, the alleged operator had really lousy opsec, linking his name to the Silk Road project. Then, later, he seems to have been scammed by a guy who first tried to extort him, then pretended to be a hit-man who would kill the extortionist.
If anyone wants to read all the primary source documents, see http://www.reddit.com/r/SilkRoad/comments/1nmiyb/compiling_all_dprrelevant_pages_suggestions_needed/
I need some advice. I recently moved to a city and I don't know how to stop myself from giving money to strangers! I consider this charity to be questionable and, at the very least, inefficient. But when someone gets my attention and asks me specifically for a certain amount of money and tells me about themselves, I won't refuse. I don't even feel annoyed that it happened, but I do want to have it not happen again. What can I do?
The obvious precommitment to make is to never carry cash. I am strongly considering this and could probably do so, but it is nice to have at least enough for a bus trip, a quick lunch, or some emergency. I have tried keeping a running tally of the number of people refused; when it gets to, say, 20, I would donate something to a known legitimate charity. While doing so makes me feel better about passing beggars by, it doesn't help once someone gets me one-on-one, so I've never reached that tally without first resetting it by succumbing to someone. Is there some way to not look like an easy mark? Are there any good standard pieces of advice and resources for this?
However, I always find these exchanges to be really fascinating from the ...
The basic answer is not to talk to these people.
Do not answer questions about what time it is, do not enter any conversations at all. At most say "sorry" and walk on.
Just. Do. Not. Talk. To. Them.
Assume that they're scamming. It will often be true, and even when they're honest, giving money to panhandlers is an inefficient use of charity. Remind yourself that you already have a budget for charity and that you're sending it to GiveWell or MIRI or whatever.
An idea: next time, try to estimate how much money such a person makes. As a rough estimate, divide the money you gave them by the length of your interaction. (To get a more precise estimate, you would have to follow them and observe how much other people give them, but that could be pretty dangerous for you.)
Years ago I made a similar estimate for a beggar on a street (people dropped money to his cap, so it was easy to stand nearby, watch for a few minutes and calculate), and the conclusion was that his income was above average for my country.
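With made-up numbers, the parent comment's rough estimate looks like this (assuming they can keep that rate up with one mark after another):

$$\frac{\$2\ \text{given}}{3\ \text{min of interaction}} \times 60\ \tfrac{\text{min}}{\text{h}} = \$40/\text{h},$$

which would be well above a typical hourly wage.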
By the way, these people destroy a lot of social capital by their actions. They make life more difficult for people who genuinely want to ask for the time, or how to get somewhere, or similar things. They condition people against having small talk with people they don't know. -- So if you value people being generally kind to strangers, remember that these scammers make their money by destroying that value.
Interesting statements I ran into regarding the kabuki-theater aspects of the so-called United States federal government shutdown of 2013, which resulted in, among other things, the closing down of websites.
A website shouldn't just go down when the people managing it stop working, it's not like they're pedaling away inside the servers. Block the federal highways with army tanks, sorry the government is closed.
There is a nontrivial set of the voting public who legitimately believe money equals tech working via magical alchemy.
I was interested to know this kind of thing has a name: Washington Monument Syndrome.
The name derives from the National Park Service's alleged habit of saying that any cuts would lead to an immediate closure of the wildly popular Washington Monument.
As a sysadmin, if I were to be furloughed indefinitely I would probably spin down any nontrivial servers. A server that goes wrong and can't be accessed is a really, really, really, really terrible-horrible-no-good-very-bad thing. And things go wrong on a regular basis in normal times; when the government is shut down and a million things that get done every day suddenly stop being done, something somewhere is going to break. Some 12-year-old legacy cron job sitting in an obscure corner of an obscure server, written by a long-departed contractor, is going to notice that the foobar queue is empty, which turns out to be undefined behavior because the foobar queue has always had stuff going through it before, so it executes an else branch it's never had occasion to execute, which sends raw debugging information to a production server because the contractor was bad at things, and also included passwords in their debugging because they were really bad at things...
This is actually a terrible example of Washington Monument Syndrome.
" Hi, Server admin here... We cost money as does our infrastructure, I imagine a site that large costs a very good deal, we aren't talking five bucks on bluehost here.
I am private sector, but if I were to be furloughed for an indeterminate amount of time, you really have two options. Leave things on autopilot until the servers inevitably break or the site crashes, at which point parts or all of it will be left broken without notice or explanation. Or put up a splash page, spin down 99% of my infrastructure (that splash page can run on a five-dollar bluehost account), and then leave. I won't be able to come in while furloughed to put it back up after it crashes.
If you really think web apps keep themselves running 24/7 without intervention, we really have been doing a great job with that illusion, and I guess the sleepless nights have been worth it to be successfully taken for granted."
I've heard several stories in the last few months of former theists becoming atheists after reading The God Delusion or a similar Four-Horsemen tract. This conflicts with my prior model of those books as mostly paper applause lights that couldn't possibly change anyone's mind.
Insofar as atheism seems like super-low-hanging fruit on the tree of increased sanity, having an accurate model for what gets people to take a bite might be useful.
Has anyone done any research on what makes former believers drop religion? More generally, any common triggers that lead people to try to get more sane?
Edit: Found a book: Deconversion: Qualitative and Quantitative Results from Cross-Cultural Research in Germany and the United States of America. It's recent (2011) and seems to be the best research on the subject available right now. Does anyone have access to a copy?
I can tell you what triggered me becoming an atheist.
I was reading a lot of Isaac Asimov books, including the non-fiction ones. I gained respect for him. After learning he was an atheist, it started being a possibility I considered. From there, I was able to figure out which possibility was right on my own.
This seems to be a trend. I never seriously worried about animals until joining felicifia.org where a lot of people do. I never seriously considered that wild animals' lives aren't worth living until I found out some of the people on there do. I think it's a lot harder to seriously consider an idea if nobody you respect holds it. Just knowing that a good portion of the population is atheist isn't enough. Once you know one person, it doesn't matter how many people hold the opposite opinion. You are now capable of considering it.
I didn't think unfriendly AI was a serious risk until I came here, but that might have been more about the arguments. I figured that an AI could just be programmed to do what you tell it to and nothing more (and from there can be given Asimov-style laws). It wasn't until I learned more about the nature of intelligence that I realized that that is not likely going to be easy. Intelligence is inherently goal-based, and it will maximize whatever utility function you give it.
Theism isn't only about god. It also has social, and therefore strong emotional, consequences. If I stop being a theist, does it mean I will lose my friends, that my family will become colder toward me, and that I will lose access to the world's most widespread social networks?
In that case, the needed new information isn't a disproved miracle or an essay on Occam's razor; those have zero impact on the social consequences. It's more important to get evidence that there are a lot of atheists, that they can be happy, and that some of them are considered very cool even outside of atheist circles. (And after having this evidence, somehow, the essays about Occam's razor become more convincing.)
Or let's look at it from the opposite side: even the most stupid demonstrations of faith send the message that it is socially accepted to be religious; that after joining a religion you will never be alone. Religion is so widespread not because the priests are extra cool or extra intelligent. It's because they are extra visible and extra audacious: they have no problem declaring that everyone who disagrees with them is stupid and evil and will go to hell (or some more polite version of this, which still gets the message across) -- a...
I'm in the process of translating some of the Sequences into French. I have a quick question.
From The Simple Truth:
Mark sighs sadly. “Never mind… it’s obvious you don’t know. Maybe all pebbles are magical to start with, even before they enter the bucket. We could call that position panpebblism.”
This is clearly a joke at the expense of some existing philosophical position called pan[something] but I can't find the full name, which may be necessary to make the joke understandable in French. Can anyone help?
In the past few hours, my total karma score has dropped by fifteen points. It looks like someone is going back through my old comments and downvoting them. A quick sample suggests that they've hit everything I've posted since some time in August, regardless of topic.
Is this happening to anyone else?
Anyone with appropriate access care to investigate?
To whoever's doing this — Here's the signal that your action sends to me: "Someone, about whom all you know is that they have an LW account that they use to abuse the voting system, doesn't like you." This is probably not what you mean to convey, but it's what comes across.
I got an offer of an in-person interview from a tech company on the left coast. They want to know my current salary and expected salary. Position is as a software engineer. Any ideas on the reasonable range? I checked Glassdoor and the numbers for the company in question seem to be 100k and a bit up. I suppose, actually, that this tells me what I need to know, but honestly it feels awfully audacious to ask for twice what I'm making at the moment. On the other hand I don't want to anchor a discussion that may seriously affect my life for the next few years at too small a number. So, I'm seeking validation more than information. Always audacity?
Always ask as much as you can. Otherwise you are just donating the money to your boss. If you hate having too much money, consider donating to MIRI or CFAR or GiveWell instead. Or just send it to me. (Possible exception is if you work for a charity, in which case asking less than you could is a kind of donation.)
The five minutes you spend negotiating your salary are likely to have more impact on your future income than the following years of hard work. Imagine yourself a few years later, trying to get a 10% increase and hearing a lot of bullshit about how the economic situation is difficult (hint: it is always difficult), so you should all just work harder and maybe later, but no promises.
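To make this concrete with purely hypothetical numbers: suppose anchoring higher gets you a $110k starting salary instead of $100k, and both versions of you then receive identical 3% annual raises. Over five years the gap compounds to

$$(\$110{,}000-\$100{,}000)\times\sum_{t=0}^{4}1.03^{t}\approx\$10{,}000\times 5.31\approx \$53{,}000,$$

before you ever negotiate again.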
it feels awfully audacious to ask for twice what I'm making at the moment
I know. Been there, twice. (Felt like an idiot after realising that I worked for a quarter of my market price at the first company. Okay, that's exaggerated, because my market price increased with the work experience. But it was probably half of the market price.)
The first time, I was completely inexperienced about negotiating. It went like: "So tell me how much you want." "Uhm, you tell me how much you give peop...
Don't deliberately screw yourself over. Don't accept less than the average for your position, and either point-blank refuse to give them negotiating leverage by telling them your current salary, or lie about it.
For better, longer advice see [Salary Negotiation for Software Engineers](http://www.kalzumeus.com/2012/01/23/salary-negotiation)
I'm afraid I couldn't quite bring myself to follow all the advice in your link, but at any rate I increased my number to 125k. So, it helped a bit. :)
Look up what Ramit Sethi has to say about salary negotiation. He really outlines how things look from the other side, and how asking for your 100k is not nearly as audacious as it seems.
I would like to eventually create a homeschooling repository. Probably with research that might help people in deciding whether or not to homeschool their children, as well as resources and ideas for teaching rationality (and everything else) to children.
I have noticed that there have been several questions in past open threads about homeschooling and unschooling. One of the first things I plan to do is read through all past LessWrong discussions on the topic. I haven't really started researching yet, but I wanted to start by asking if anyone had anything that they think would belong in such a repository.
I would also be interested in hearing any personal opinions on the matter.
Homeschooling is like growing your own food (or doing any other activity where you don't take advantage of division of labor): if you enjoy it, have time for it and are good at it, it's worth trying. Otherwise it's useless frustration.
I couldn't agree more about division of labor in general, but with the current state of the public school system, I do not trust them to do a good job of teaching anything.
I do not have the time or patience for it, and probably am not good at it, but fortunately my partner would be the one teaching.
Mindkilling for utilitarians: Discussion of whether it would have made sense to shut down the government to try to prevent the war in Iraq
More generally, every form of utilitarianism I've seen assumes that you should value people equally, regardless of how close they are to you in your social network. How much damage are you obligated to do to your own society for people who are relatively distant from it?
How can I acquire melatonin without a prescription in the UK? The sites selling it all look very shady to me.
It's melatonin; melatonin is so cheap that you actually wouldn't save much, if any, money by sending your customers fakes. And the effect is clear enough that they'd quickly call you on fakes.
And they may look shady simply because they're not competently run. To give an example, I've been running an ad from a modafinil seller, and as part of the process, I've gotten some data from them - and they're easily costing themselves half their sales due to basic glaring UI issues in their checkout process. It's not that they're scammers: I know they're selling real modafinil from India and are trying to improve. They just suck at it.
If I make a target, but instead of making it a circle, I make it an immeasurable set, and you throw a dart at it, what's the probability of hitting the target?
If you construct a set in real life, then you have to have some way of judging whether the dart is "in" or "out". I reckon that any method you can think of will in fact give a measurable set.
Alternatively, there are several ways of making all sets measurable. One is to reject the Axiom of Choice. The AoC is what's used to construct immeasurable sets. It's consistent in ZF without AoC that all sets are Lebesgue measurable.
If you like the Axiom of Choice, then another alternative is to only demand that your probability measure be finitely additive. Then you can give a "measure" (such finitely additive measures are actually called "charges") such that all sets are measurable. What's more you can make your probability charge agree with Lebesgue measure on the Lebesgue measurable sets. (I think you need AoC for this though.)
In L.J. Savage's "The Foundations of Statistics" the axioms of probability are justified from decision theory. He only ever manages to prove that probability should be finitely additive; so maybe it doesn't have to be countably additive. One bonus of finite additivity for Bayesians is that lots of improper priors become proper. For example, there's a uniform probability charge on the naturals.
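To make that last example concrete: a uniform probability charge $m$ on $\mathbb{N}$ agrees with asymptotic density where that density exists (extending it to all subsets needs an ultrafilter-style construction, i.e. some choice), so for example $m(\{\text{evens}\})=\tfrac{1}{2}$, and it is only finitely additive, since

$$m(\{n\}) = 0 \text{ for every } n, \qquad\text{yet}\qquad m(\mathbb{N}) = 1 \neq 0 = \sum_{n=0}^{\infty} m(\{n\}).$$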
Topic: Investing
There seems to be a consensus among people who know what they're talking about that the fees you pay on actively managed funds are a waste of money. But I saw some friends arguing about investing on Facebook, with one guy claiming that index funds are not actually the best way to go for diversified investing that does not waste any money on fees. Does anyone know if there is anything to this? More specifically, are Vanguard's funds really as cheap as advertised, or is there some catch to them?
To find previous Open Threads, click on the "open_thread" link in the list of tags below the article. It will show you this page:
http://lesswrong.com/r/discussion/tag/open_thread/
For some reason that I don't understand, the Special threads wiki page has a link to this:
http://lesswrong.com/tag/open_thread/
...but that page doesn't work well.
Am I mistaken, or do the Article Navigation buttons only ever take me to posts in Main, even if I start out from a post in Discussion? Is this deliberate? Why?
Another PT:LoS question. In Chapter 8 ("Sufficiency, Ancillarity and all that"), there's a section on Fisher information. I'm very interested in understanding it, because the concept has come up in important places in my statistics classes without any conceptual discussion of it - it's in the Cramer-Rao bound and the Jeffreys prior, but it looks so arbitrary to me.
Jaynes's explanation of it as a difference in the information different parameter values give you about large samples is really interesting, but there's one step of the math that I just c...
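For reference, the standard definitions (nothing here is specific to Jaynes's treatment): for a model $f(x\mid\theta)$, the Fisher information is

$$I(\theta)=\mathbb{E}\left[\left(\frac{\partial}{\partial\theta}\log f(X\mid\theta)\right)^{2}\right]=-\,\mathbb{E}\left[\frac{\partial^{2}}{\partial\theta^{2}}\log f(X\mid\theta)\right]$$

(the second equality holding under the usual regularity conditions); the Cramer-Rao bound says any unbiased estimator $\hat{\theta}$ from $n$ i.i.d. samples satisfies $\operatorname{Var}(\hat{\theta})\ge 1/(n\,I(\theta))$; and the Jeffreys prior is $\pi(\theta)\propto\sqrt{I(\theta)}$.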
There is too much unwarranted emphasis on ketosis when it comes to Keto diets, rather than on hunger satiation. That might sound like a weird claim, since the diet is named after ketosis, but as far as the efficacy of the Keto diet for weight loss goes (setting aside potential health or cognitive effects), ketosis has little to do with the weight loss. Most attempts to explain the Keto diet start with an explanation of what ketosis is, with an emphasis on attaining ketosis rather than on hunger satiation and caloric deficit. Here is an intro excerpt...
Yet another newbie question. What's the rational way to behave in a prediction market where you suspect that other participants might be more informed than you?
Here's a toy model to explain my question. Let's say Alice has flipped a fair coin and will reveal the outcome tomorrow. You participate in a prediction market over the outcome of the coin. The only participant besides you is Bob. Also you know that Alice has flipped another fair coin to decide whether to tell Bob the outcome of the first coin in advance. What trades should you offer to Bob, and wha...
Robin Hanson argues that prediction markets should be subsidized by those who want the information. (They can also be subsidized by "noise" traders who are not maximizing their expected money from the prediction market.) Under these conditions, the expected value for rational traders can be positive.
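To see why the toy model above is dangerous for you, here is a minimal sketch (my own illustration, not from Hanson) of what happens if you quote a two-sided market to Bob. An informed Bob trades only when the quote is wrong for you, and a risk-neutral uninformed Bob has no reason to trade at any spread around 50%:

```python
# Toy model: the coin is heads with prob 0.5, and Bob learned the outcome
# with prob 0.5 (the stated assumptions; the bid/ask framing below is my
# own way of modeling "trades offered to Bob").

def my_expected_profit(bid: float, ask: float, p_informed: float = 0.5) -> float:
    """My EV per quoted market on a contract paying $1 if heads."""
    # Informed Bob, heads (prob 0.5): he buys from me at `ask`; I owe $1,
    # so I net ask - 1.  Informed Bob, tails (prob 0.5): he sells to me
    # at `bid`; the contract pays 0, so I net -bid.
    loss_vs_informed = 0.5 * (ask - 1.0) + 0.5 * (-bid)
    # Uninformed, risk-neutral Bob has no edge, so (we assume) he simply
    # doesn't trade whenever bid <= 0.5 <= ask, contributing zero.
    return p_informed * loss_vs_informed

for bid, ask in [(0.45, 0.55), (0.30, 0.70), (0.05, 0.95)]:
    print(bid, ask, my_expected_profit(bid, ask))
# -> -0.225, -0.15, -0.025: negative for every real quote, approaching zero
#    only as the spread gets so wide that nobody would trade at all.
```

So without a subsidy or noise traders, the rational move in this toy market is not to trade at all, which is exactly the no-trade problem Hanson's subsidies are designed to break.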
What's the LMSR prediction market scoring rule? We've just started an ad-hoc prediction market at work for whether some system will work, but I can't remember how to score it.
Say I have these bets:
House: 50%
Me: 50%
SD: 75%
AK: 35%
what is the payout/loss for each player?
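LMSR is Hanson's Logarithmic Market Scoring Rule: traders sequentially move the market probability, and when the event resolves, each trader is paid b·ln(p_new/p_old) for the outcome that happened, with the market maker covering the net (its worst-case loss on a binary market is bounded by b·ln 2). A sketch under the assumption that your four numbers are sequential probability reports for "the system works", with an arbitrary liquidity parameter b:

```python
import math

b = 10.0  # liquidity parameter: scales all payouts; pick it to set the stakes

# Sequential probability reports for "the system works":
reports = [("House", 0.50), ("Me", 0.50), ("SD", 0.75), ("AK", 0.35)]

def payouts(works: bool):
    """Each trader gets b * ln(p_new / p_old) for the realized outcome."""
    result = {}
    for (_, p_prev), (name, p_new) in zip(reports, reports[1:]):
        old = p_prev if works else 1 - p_prev
        new = p_new if works else 1 - p_new
        result[name] = b * math.log(new / old)
    # The market maker (House) pays the net of all traders' winnings:
    result["House"] = -sum(result.values())
    return result

for outcome in (True, False):
    label = "works" if outcome else "fails"
    print(label, {k: round(v, 2) for k, v in payouts(outcome).items()})
# works: Me 0.00, SD +4.05, AK -7.62, House +3.57
# fails: Me 0.00, SD -6.93, AK +9.56, House -2.62
```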
Does anyone have any short thought experiments that have caused them to experience viewquakes on their own?
Here's a twist on prospect theory.
I installed solar panels, which were pretty expensive, but pay back as they generate electricity.
The common question was "How long will it take to earn your investment back?" I understand why they're asking. The investment is illiquid, even more than a long-term bank deposit. But if I wanted to get my money "back," I'd keep it in my checking account. The question comes from a tendency to privilege a bird in the hand over those that are still in the bush.
The important point they should ask about is my pr...
Sunk cost fallacy spotted in an unusually pure state at unusually high levels:
As the partial government shutdown enters its third day, many House Republicans are determined to keep fighting, even though they see no plausible way out of the current impasse, because they've come so far they cannot imagine backing down now. "I think there's a sense that for us to do a clean CR now -- then what the hell was this about?" one Republican House member told me. "So I don't think it's going to end anytime soon."
I find it quite possible that w...
The occasional phenomenon where people go downvote every comment by someone they disagree with could be limited by only allowing people to downvote comments made within the last week.
Or limit the number of votes one person can give to another within a time period. I think most vendetta voting happens in the heat of the moment. I don't like the idea of not being able to vote on old comments, or of skewing the voting on either side.
I always wondered whether an algorithm could be implemented akin to the PageRank algorithm: a vote from someone counts more if that person votes seldom, and it counts more if that person is upvoted frequently by people with high vote weight.
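A minimal sketch of what that could look like, with a hypothetical toy vote graph (votes as directed edges, weights computed PageRank-style with a damping factor):

```python
# Hypothetical vote graph: votes[v] = users that v has upvoted.
votes = {
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["alice"],
    "dave":  ["alice", "bob", "carol"],  # votes a lot -> each vote counts less
}
users = list(votes)

def vote_weights(votes, damping=0.85, iters=50):
    w = {u: 1 / len(users) for u in users}
    for _ in range(iters):
        nxt = {u: (1 - damping) / len(users) for u in users}
        for voter, targets in votes.items():
            for t in targets:
                # A vote transfers weight; frequent voters dilute each vote.
                nxt[t] += damping * w[voter] / len(targets)
        w = nxt
    return w

print(vote_weights(votes))
# A vote's effect on a comment's score could then be scaled by w[voter].
```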
What's the relationship between Epistemology and Ontology? Are both worthy of attention, or do you get the other for free when you deal with one of them?
I'm requesting recommendations for guides to meditation.
I've had great success in the past with 'sleeping on it' to solve technical problems. This year I've been trying power-napping during lunch to solve the morning's problems in the afternoon, though I'm not sure power-napping does any better than the control. The next step is to see if I can step away from the old ham-fisted methods and get results from meditation.
This may be treading close to a mindkilling topic, but - what's the scientific consensus on fracking?
Does anyone have a good resource on learning how to format graphs and diagrams?
What are the effects on the reader of having 90%, 100% or 110% spacing between letters? When should one center text? What about bold and italics?
Is there good research based resource that explains the effects that those choices have on the reader?
An earthrise as witnessed from the surface of the Moon would be quite unlike moonrises on Earth. Because the Moon is tidally locked with the Earth, one side of the Moon always faces toward Earth. A naive interpretation of this fact would lead one to believe that the Earth's position is fixed in the lunar sky and no earthrises can occur. However, the Moon librates slightly, which causes the Earth to trace a Lissajous figure on the sky. This figure fits inside a rectangle 15°48' wide and 13°20' high (in angular dimensions), while the angular diameter of the Earth as seen from the Moon is only about 2°. This means that earthrises are visible near the edge of the Earth-observable surface of the Moon (about 20% of the surface). Since a full libration cycle takes about 27 days, earthrises are very slow, and it takes about 48 hours for the Earth to clear its diameter.
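A crude sanity check on that last figure, treating the libration as uniform motion across the rectangle's width:

$$\frac{15.8^{\circ}}{13.5\ \text{days}}\approx 1.17^{\circ}/\text{day}\quad\Rightarrow\quad\frac{2^{\circ}}{1.17^{\circ}/\text{day}}\approx 1.7\ \text{days}\approx 41\ \text{h},$$

and since earthrises happen near the turning points of the Lissajous figure, where the motion is slower than average, a figure somewhat above this, like the quoted ~48 hours, is plausible.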
So the other week I read about viewquakes. I also read about things a CS major could do that aren't covered by the usual curriculum. And then this article about the relationship escalator. Those gave me not quite a viewquake but clarified a few things I already had in mind and showed me some I had not.
What I am wondering is now, can anyone here give me a non-technical viewquake? What non-technical resources can give me the strongest viewquake akin to the CS major answer? With non-technical I mean material that doesn't fall into the usual STEM spectrum peop...
More specifically: I connected to Quora using my Facebook account. When I connected, within the Quora system the message "Viliam is following your questions and answers" was sent to all Quora users who are also my Facebook contacts.
As far as I know, it didn't do anything outside of Quora. But even this is kinda creepy. I discovered it when one of those users asked me in a FB message why exactly I was following his questions (in the given context, it seemed like a rather creepy action on my part). I didn't even know what he was talking about.
So the lesson is that if Quora later shows you announcements like: "XYZ is interested in your questions", it most likely means that XYZ simply joined Quora, and Quora knows you two know each other. (Also, you can remove the people you are following in Quora settings. You probably didn't even know you are "following" them, did you?)
I hate this kind of behavior, when social networks pretend their users have some activity among them when in reality they don't. And I generalize this suspicion to all software. Whenever some software tells me "Your friend XYZ wants you to do this, or tells you that," I always assume it is a lie. If my friend XYZ really wants me to do something, they should tell me in their own words, outside of the system I don't know: for example by phone, email, or a (not auto-generated) Facebook message.
Wikipedia:
In February 2013, IBM announced that Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center in conjunction with health insurance company WellPoint.[13] IBM Watson’s business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance.[14]
How do you know, when you work on a project like Watson, whether the work you are doing is dangerous and could result in producing a UFAI? Didn't they essentially build an oracle AGI?
What heuristic should someone building a new AI use to decide whether it's essential to talk with MIRI about it?
I saw this post from EY a while ago and felt kind of repulsed by it:
I no longer feel much of a need to engage with the hypothesis that rational agents mutually defect in the oneshot or iterated PD. Perhaps you meant to analyze causal-decision-theory agents?
Never mind the factual shortcomings, I'm mostly interested in the rejection of CDT as rational. I've been away from LW for a while and wasn't keeping up on the currently popular beliefs on this site, and I'm considering learning a bit more about TDT (or UDT or whatever the current iteration is called...
The question "which decision theory is superior?" has this flavor of "can my dad beat up your dad?"
CDT is what you use when you want to make decisions from observational data or RCTs (in medicine, and so on).
TDT is what you use when "for some reason" your decisions are linked to what counterfactual versions/copies of yourself decided. Standard CDT doesn't deal with this problem, because it lacks the language/notation to talk about these issues. I argue this is similar to how EDT doesn't handle confounding properly because it lacks the language to describe what confounding even means. (Although I know a few people who prefer a decision algorithm that is in all respects isomorphic to CDT, but which they prefer to call EDT for, I guess, reasons having to do with the formal epistemology they adopted. To me, this is a powerful argument for not adopting a formal epistemology too quickly :) )
I think it's more fruitful to think about the zoo of decision theories out there in terms of what they handle and what they break on, rather than in terms of anointing some of them with the label "rational" and others with the label "irrational." These labels carry no information. There is probably no total ordering from "best to worst" (for example, people claim EDT correctly one-boxes on Newcomb's problem, whereas CDT does not. This does not prevent EDT from being generally terrible on the kinds of problems CDT handles with ease thanks to a worked-out theory of causal inference).
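To make the one-boxing example concrete, here is a minimal sketch of the standard Newcomb payoff arithmetic (the $1M/$1k amounts and the 0.99 predictor accuracy are the usual assumed numbers, not anything from the parent comment):

```python
M, K = 1_000_000, 1_000
acc = 0.99  # assumed predictor accuracy

# EDT conditions on the action: one-boxing is strong evidence the box is full.
edt_one_box = acc * M            # 990_000
edt_two_box = (1 - acc) * M + K  #  11_000  -> EDT one-boxes

# CDT holds the (already fixed) box contents constant across actions:
# for ANY probability p_full, two-boxing adds K on top, so CDT two-boxes.
p_full = 0.5  # arbitrary; the comparison below is the same for any value
cdt_one_box = p_full * M
cdt_two_box = p_full * M + K     # dominates one-boxing by exactly K

print(edt_one_box > edt_two_box)  # True  (EDT one-boxes)
print(cdt_two_box > cdt_one_box)  # True  (CDT two-boxes)
```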
Immeasurable sets are not something in the real world that you can throw a dart at.
I can rephrase your problem to be: "If I have an immeasurable set X in the unit interval, [0,1), and I generate a uniform random variable from that interval, what is the probability that that variable is in X?"
The problem is that a "uniform random variable" on a continuous interval is a more complicated concept than you think. Let me explain by first giving an example where X is measurable, let's say X = [0, pi-3). We analyze continuous random variables by reducing to discrete random variables. We can think of a "uniform random variable" as a sequence of digits in a decimal expansion, determined by rolling a 10-sided die. So for example, we can roll the die and get 1, 4, 6, 2, 9, ..., which corresponds to .14629..., which is not in the set X. Notice that while in principle we might have to roll the die arbitrarily many times, we actually only had to roll it 3 times in this case, because once we got 1, 4, 6, we knew the number was too big to be in X. We can use the fact that we almost always need only a finite number of rolls to define the "probability of being in X." In this case, we know from 3 die rolls that the probability is between .141 and .142, and if we consider more rolls, we get more accuracy, converging to a single number, pi-3.
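A minimal sketch of this die-rolling procedure for the measurable target X = [0, pi-3) (my own illustration; for an immeasurable X, the loop below would have no guarantee of ever deciding):

```python
import math
import random

TARGET_HI = math.pi - 3  # the target X = [0, pi - 3), about [0, 0.14159)

def throw_dart():
    """Roll decimal digits until membership in X is decided."""
    lo, scale, rolls = 0.0, 1.0, 0
    while True:
        scale /= 10
        lo += random.randint(0, 9) * scale  # one roll of the 10-sided die
        hi = lo + scale                     # numbers sharing this digit prefix
        rolls += 1
        if hi <= TARGET_HI:                 # whole prefix interval inside X
            return True, rolls
        if lo >= TARGET_HI:                 # whole prefix interval outside X
            return False, rolls

hits = sum(throw_dart()[0] for _ in range(100_000))
print(hits / 100_000)  # converges to pi - 3 = 0.14159...
```

(For example, the digits 1, 4, 6 pin the number into [0.146, 0.147), which lies entirely outside X, so the loop stops after 3 rolls, matching the worked example above.)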
Now, let's look at what goes wrong if X is not measurable. The problem is that the set is so messy that even if we know the first finitely many digits of a random number, we won't be able to tell whether the number is in X. This stops us from carrying out a procedure like the one above and defining what we mean.
Is this clear?
EDIT: I retract the following. The problem with it is that Coscott is arguing that "something in the real world that you can throw a dart at" implies "measurable", and he does this by arguing that all sets which are "something in the real world that you can throw a dart at" have a certain property which implies measurability. My "counterexamples" are measurable sets which fail to have this property, but this is the opposite of what I would need to disprove him. I'd need to find a set with this property that isn't meas...