Open thread, Sept. 29 - Oct. 5, 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments (339)
An interesting natural experiment happened in the Pacific Theater of WWII. American and Canadian forces attacked an island which had been secretly abandoned by the Japanese weeks prior. Their unopposed landing resulted in dozens of casualties from friendly fire and dozens of men lost in the jungle. Presumably, a similar rate of attrition occurred in every other landing, on top of casualties inflicted by the deliberate efforts of enemy troops.
We can know that other amphibious assaults probably had lower or negligible friendly-fire rates, because some other landings (some opposed) had absolutely lower rates of casualties - e.g. here, here, and here.
Things look a bit more complex than the parent and OP make them. The first one, on Kiska island, resulted from Canadian and American detachments taking each other for the enemy. Agreed, this is friendly fire - but between sub-optimally coordinated detachments, not within one single force.
The second one, on Woodlark and Kiriwina, which had fewer casualties, was not only unopposed, it was known to be unopposed, so expectations were different.
The other opposed landings are more difficult to read.
If the landing had been peaceful & uneventful, perhaps we wouldn't have heard about it. So there might be a selection effect.
It seems like the rate of friendly-fire casualties might be lower when fighting a real enemy. (Super-crude toy model: soldiers fire randomly at whoever they see. If no one is on the island apart from the attackers, then those shots are all going to turn into friendly fire cases. If most of the people on the island are the ones you're trying to attack, then they're going to sustain most of those casualties.)
Wouldn't there be proportionately more shots fired if there are more people they see? You'd get the same number of friendly-fire casualties either way.
That would be a slightly less crude toy model, I guess. I would expect the truth to be somewhere in between -- e.g., soldiers have limited ammunition and limited ability to attend to everyone around them in a conflict situation, so the number of shots fired probably increases sublinearly with number of potential targets.
In case anyone was in any doubt: I have no knowledge of any of this stuff, have never served in any military force, etc.
Just because you think of a new factor driving it down and then a new factor driving it up doesn't mean you end up in the same place.
The model I was using is that every time you see a soldier, you randomly decide whether or not to fire. Under this model, adding enemy soldiers makes no change in friendly fire.
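The difference between the two toy models in this sub-thread can be made concrete with a quick Monte Carlo sketch. All the numbers here (force sizes, fire probability, shots per soldier) are invented for illustration:

```python
import random

def mean_friendly_fire(n_friend, n_enemy, shots=None, p_fire=0.1, trials=500):
    """Average friendly-fire hits per engagement under two crude models.

    shots=None -> per-target model: each attacker fires at every other
                  soldier he sees, independently, with probability p_fire
                  (friendly-fire count ignores enemy numbers).
    shots=k    -> ammo-limited model: each attacker fires exactly k shots
                  at uniformly chosen visible targets (enemies soak up fire).
    """
    total = 0
    for _ in range(trials):
        targets = ["friend"] * (n_friend - 1) + ["enemy"] * n_enemy
        for _ in range(n_friend):
            if shots is None:
                total += sum(t == "friend" and random.random() < p_fire
                             for t in targets)
            else:
                total += sum(random.choice(targets) == "friend"
                             for _ in range(shots))
    return total / trials
```

Under the per-target model, `mean_friendly_fire(50, 0)` and `mean_friendly_fire(50, 50)` come out roughly equal; under the ammo-limited model, adding 50 enemies roughly halves the friendly-fire count, since about half the shots land on them instead.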
Whoops, looks like I missed the word "proportionally" up there. Sorry for assuming that you were being silly.
Philosopher Richard Chapell gives a positive review of Superintelligence.
An interesting point made by Brandon in the comments (the following quote combines two different comments):
I had never thought of it from this point of view. Might it benefit AI theorists to learn political science?
Here is what Bostrom himself says about this analogy:
Superintelligence, p. 139.
Here's a salient MOOC that's just started on political and legal philosophy, which I'm dipping in and out of for non-FAI reasons.
This is great, thanks! I have always said that if you are worried about FAI, you should look into what people do with the unfriendly non-human agents running around today. I am glad constitutional law people have looked into this.
Forgive my cynicism, but the answer mostly appears to be, "work in their employment".
Have you ever seen Brazil (the movie)? You will still get eaten.
Has anyone on LessWrong noticed this new Elon Musk interview yet? Even through the intermediation of the reporter he seems to convey the gist of the concepts of existential risk, the Fermi paradox, and the great filter and simulation arguments.
This reporter (Ross Andersen) also wrote a piece on Bostrom. So I'm gonna guess that it's not about Elon Musk getting things through Ross, but rather Ross writing what he wanted to write. In short, yay Ross Andersen.
He seems to be interested in existential risks (although I am unfamiliar with what exactly being on the advisory board of this organization entails). More articles by the same author are available here, here and here
Many of you are probably familiar with the Alpha Course, which uses the evangelistic technique of identifying common philosophical questions people might have about their life ("what's the point of it all?", "how can I be truly happy?", etc.) and answering them with something about finding the everlasting love of Jesus Christ.
It occurs to me that many aspiring rationalists probably have an analogous set of questions turning around in their heads before they find a like-minded group. For example: "I notice that a lot of people make silly mistakes when thinking about things; how can I stop myself from making these same mistakes?"
Hypothetically, if we (as in the broader rationalist community) were to construct an effective campaign to capture people in this state, what would it look like?
Also, where would we send them? I think that if we're going to do any kind of outreach we should set up a good subsidiary forum, to try and minimize Eternal September effects.
(As a model, the HPMoR subreddit seems to be something like this (albeit with a narrower focus) for a lot of people. The Less Wrong Lounge or something, maybe?)
I've long been of the opinion that Less Wrong itself is weird enough to be offputting to a lot of people, but it's not the only obvious landing page any more. You could point people to Less Wrong or CFAR or Effective Altruism, or to the discourse-o-sphere for which SSC is an exemplar, or whatever other places I've forgotten. Hell, a reading list with a dozen popular books would be enough for the drive-by inquisitive lay-person.
At the moment I don't think we have good answers for the core questions. Good reasoning is hard. Pretending that it's possible by following a few quick fixes might make it easier to reach more people but it brings in people who don't belong.
I take it the point is to bring in people who don't belong yet but who might turn out to belong when they've thought about it some more.
(Not necessarily to Less Wrong as such -- which might do best to remain a forum for sometimes-intimidatingly-technical discussion that preferentially attracts the very clever -- but to "the broader rationalist community".)
What are some online (or offline but generally accessible) clusters that would appeal or be valuable to a typical lesswrong reader, but that have little obvious intersection with lesswrong memespace?
What does it mean if there aren't any? Does a cluster just expand to its natural limits? I wonder if the space of the general contemporaneous approaches to "thinking about thinking" ultimately maps down to just a few personality types.
Some clusters that seem related but not much discussed on LW:
The "aspiring Mentat" cluster, which includes the entire mnemonics subculture, various brain-training groups, the mental math subculture, and some parts of the magic tricks / mentalism subculture and professional gambling subculture. Some weirder parts are the lucid dreaming groups, the hypnosis groups, and the tulpamancy groups. Slightly overlapping subcultures are those around various games, e.g. chess and speed-solving of Rubik's cubes. For an example, see the book Mind Performance Hacks, or the Mentat Wiki. This overlaps with some very obscure Russian inventions, such as the TRIZ system of innovation, the theory of "psychonetics", and the Trachtenberg system of speed mathematics. There's also some overlaps with conlang subculture, such as Ithkuil and Lojban.
The "aspiring Ubermenschen" cluster. Some names that come to mind as prototypical: Tim Ferriss, Jason Shen, Sebastian Marshall. This is a part of the larger productivity culture, which includes e.g. Cal Newport, the GTD people, etc. They tend to monetize their writings, for obvious reasons. There's a spectrum here from the saner groups to the more woo-ful, e.g. Steve Pavlina. This overlaps with a "drugs for self-improvement" subculture, which includes various nootropics groups, and parts of the steroid subculture. Also overlaps with the self-tracking / quantified self subculture.
The "outlandish schemes to improve the world" cluster, which includes e.g. Esperanto, veg*anism, the writings of Buckminster Fuller, various anti-nationalism movements, etc. (Veg*anism definitely correlates with Esperanto, for instance. Of course, a lot of veg*ans don't engage in the rest of this cluster.) Overlaps with more woo-ish things like various forms of non-theistic spirituality.
Some others:
The psychoactive drug subculture.
The cypherpunk subculture. "Hacker" culture in general is very close to LW memespace.
The manosphere.
Also groups associated with various professions, such as tech people, econ people, and math people.
Great list, but why the manosphere?
It has lots of "rah squats and oats and psychosocial dominance!" which LWers (mostly nerdy men) need more of, plus many here seem interested in it. (Not interested in getting into a protracted debate about its merits, though - we have more than enough of that.)
It may be worth clarifying that "cluster" here is (I take it) intended to have roughly the same meaning as in the old OB post The correct contrarian cluster, meaning something like "set of somewhat-related ideas". So mushroom is, I think, asking whether there are ways of looking at the world, or (so to speak) toolboxes for thinking, that aren't already familiar to most of the LW readership but might be useful.
(mushroom, please correct me if I've got it wrong.)
I seem to be missing some context for this. Taboo cluster?
Online community. Either more centralized forum-style sites, or cluster of more independent sites.
Any recommendations for introductory overviews of cognitive models of categorisation (e.g. prototype theory, exemplar theory, etc.)?
I'm trying to develop a high-level view of how people go wrong when reasoning about groups. I understand this well enough from the positions of statistical inference and categorical logic. What I'm looking for is convenient literature on theories of how human brains put objects into categories.
Thanks-- I hadn't heard of exemplar theory.
I don't have an answer, but I would like to second this request.
Is there any set of issues this argument will not work with? From Leaving LW
("veg*n" = vegetarian/vegan)
The writer self-identifies as an animal rights activist. Hence, "veg*nism is pretty much the best litmus test I know for whether someone actually follows arguments where they lead," while cryonics is a cult. If you are closer to the LW core, you can conveniently reverse it with no loss: "cryonics is pretty much the best litmus test I know for whether someone actually follows arguments where they lead," while veg*nism is a cult. Or insert your own pet issue: existential risk, feminism, monarchism, ethical altruism, Objectivism, communism, pretty much any -ism. Whichever one you believe in most is the best test for whether someone seriously follows arguments to their logical implications; whichever one those other people believe in most is just distracting them from your really important issue. This is why you are right but people who agree with you only 90% are "ridiculous."
The ending is a great example of how to extend that argument: "Obviously, the ideal solution is" for everyone else to agree with you and focus more on your issue.
Well, to a large extent it is indeed true that you shouldn't trust people who disagree with things you think obvious. So there's a sort of "conservation of smartness" going on, whereby you need to be smart already in order to collect a few "obvious" beliefs that you can then use as your litmus test. So for that person, if they really do think veg*nism is obvious, they might be "doing the best they can" in rejecting LW for that.
FWIW, I'm not a vegan anymore, but I'd agree that any attempt to "minimize total suffering" would have to include not eating meat, ceteris paribus. So anyone who claims to have that goal but still eats meat is either a liar, or suffering from some sort of "intra-self disagreement", or they believe ceteris is not paribus (e.g. "eating meat somehow lets me work harder on saving the world"), or they're uninformed. (Or something else.)
Protip: type '\*' to make a '*' symbol without LW thinking you want italics.
Please use a backslash like so: \* for the asterisks; as it is your comment is very irritating to read. Edit: Much appreciated :)
This reminds me a lot of people who read about utilitarianism and conclude that they are evil monsters if they ever buy a Starbucks latte instead of spending that 3 dollars on mosquito nets.
Does anyone know of any studies that show that people tend to regard their enemies as innately evil?
I've seen it claimed a lot here but haven't been able to find a source beyond Eliezer's post.
Damn. Ralph Whelan, a former cryonicist and Alcor employee in the early 1990s, died in his sleep the other day at age 46, and his parents plan to bury him conventionally.
Apparently he wore his Alcor bracelet, but he let his funding lapse.
That sucks. I knew him slightly back then, and I hadn't talked to him for years.
What's the probability you put on cryonics actually working well enough to resurrect the deceased under scenarios of: medicine of 10 years from now, medicine of 20 years from now, just go ahead and assume a Friendly superintelligence?
This post and the ensuing discussion led me to construct the following hypothetical scenario.
In the port there are three old ships which are magically exactly the same. One is owned by Mr.Grumpy, one is owned by Mr.Happy, and one is owned by Mr.Doc. The three ships are about to go on (yet another) transatlantic voyage and the owners are considering whether to send for a refit instead.
Mr.Grumpy is a worrywart and the question of his ship's seaworthiness has been at the forefront of his thoughts for a while. His imagination drew him awful pictures of his ship breaking up in the waves and more than once he woke up in cold sweat in the middle of the night. However Mr.Grumpy is capable of self-reflection and knows that he's prone to excessive worrying. He decides to compensate for his bias and is successful at manipulating his mind to quell his doubts. His ship sails off.
Mr.Happy is an optimist. He does not dwell on the possibilities of failure and is sure that concentrating on the positive is the right way to go. He is not reckless but understands that life includes risks and useless worrying just leads to ulcers and not much else. His ship sails off.
Mr.Doc is a nerd. He very carefully calculates the probability that his ship will not make it across the ocean this time. The probability is, of course, non-zero. He looks at this probability and deems it acceptable. His ship sails off.
And now I wonder what W.J.Clifford would say about Mr.Grumpy, Mr.Happy, and Mr.Doc. Is any of them guilty of anything? Are some more (or less) guilty than others?
Assume there's a threshold at which sending the ship for repairs is morally obligatory (if we're utilitarians, that is the point at which the cost of the repairs is less than the probability of the ship sinking times the cost of that sinking, taking into account the lives aboard; but the threshold needn't be utilitarian for this to work.)
Let's say that the threshold is 5% - if there's more than a 5% chance the ship will go down, you should get it repaired.
Mr. Grumpy's thought process seems to be 'I alieve that my ship will sink, but this alief is harmful and I should avoid it'. He is morally justified in quelling his nightmares, but he'd be morally unjustified if in doing so he rationalized away his belief 'there's a 10% chance my ship will sink' to arrive at 'there's a 3% chance my ship will sink' and thereby did not do the repairs.
Likewise, it's great that Mr. Happy doesn't want to worry, but if you asked him to bet on the ship going down, what odds would he demand? If he thinks that the probability of his ship going down is greater than 5%, then he should have gotten it refitted. If he knows he has a bias toward neglecting negative events, and he knows that his estimate of 1% is probably the result of rationalization rather than reasoning, he should get someone else to estimate or he should correct his estimate for this known bias of his.
Mr. Doc looks at this probability and deems it acceptable (so, presumably, below our action threshold). He is not guilty of anything.
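On the utilitarian reading above, the threshold falls out of a one-line expected-cost comparison. The dollar figures below are invented purely to reproduce the 5% figure from the example:

```python
def repair_threshold(repair_cost, loss_if_sunk):
    """Sinking probability above which repairing beats sailing, on a pure
    expected-cost comparison: repair iff repair_cost < p_sink * loss_if_sunk."""
    return repair_cost / loss_if_sunk

# Invented numbers: a 100k refit vs. a 2-million total loss
# (ship, cargo, and the monetized value of the lives aboard).
threshold = repair_threshold(100_000, 2_000_000)  # 0.05, i.e. the 5% of the example
```

Mr. Doc's calculation then reduces to comparing his estimated sinking probability against this number; the moral disagreement in the rest of the thread is about where the inputs come from, not about the arithmetic.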
Sorry, I am unwilling to assume any such thing. I would prefer a bit more realistic scenario where there is no well-known and universally accepted threshold. The condition of ships is uncertain, different people can give different estimates of that condition, and different people would choose different actions even on the basis of the same estimate.
In particular,
Mr.Doc has his own threshold which does not necessarily match yours or anyone else's or even whatever passes for the society's consensus.
It doesn't have to be well-known. Morally there's a threshold. Everyone who is trying to act morally is trying to ascertain where it should be, and everyone who isn't acting morally is taking advantage of the uncertainty about where the threshold is to avoid spending money. That doesn't change that there is a threshold.
Consider doctors sending patients in for surgery after a cancer screening. It is hard to estimate whether someone has cancer, and different doctors might recommend different actions on the basis of the same estimate. This does not change the fact that, in fact, there's a place to put the threshold that balances the risk of sending in patients for unnecessary surgery and the risk of letting cancer spread. On any ethical question this threshold exists. We don't have to be certain about it to acknowledge that judging where it is and where cases fall with respect to it is basically always what we're doing.
Mr. Doc's actions are morally right to the extent he's right (given the evidence he could reasonably have acquired) about the threshold.
I'm radically cutting any source of information out from my life as soon as I get the feeling that I never use the information or don't get some measure of enjoyment from it. This reduced the time I spend on catching up from multiple hours a day to less than an hour. My mind feels much quieter in a good way. I still get a "noisy" sensation in my mind ("I just had a thought but have already forgotten it") but it feels contentless ("There is something on my mind") and the sensation weakens every day. Replacing the time spent on reading useless drivel with actual books and Wikipedia feels much more satisfactory.
I fear that this might lead to my perspective narrowing, but I counteract it by having a couple of information-dense blogs in my feed, still meeting with people, and having Wikipedia to seek new avenues of information. And of course LessWrong.
What did you cut?
Most parts of Reddit. Hacker News, as I am not a software engineer and read the interesting things elsewhere anyhow. Cracked. Facebook most of the time. News sites as soon as I read a sensationalist headline the second time that week. And plenty of things I can't remember, as I started doing this months ago. The mere fact that I don't remember the sources already shows that they couldn't have been that important.
Tim Ferriss talks about this in the Four Hour Work Week, he calls it the "information diet". Since I read it, I pretty much stopped listening to all news.
Ironically I read that article some time ago and intended to implement it someday. Now I did.
I've tried a similar tack, and I was also worried about "narrowing".
It may be helpful to explicitly note that "informative" is meaningful relative to your current set of beliefs. There are high-quality sources that I love and would recommend, but that I try not to spend much time on, because the content is so close to my viewpoints that I get very little "information" out of it, even if it is information-dense in a quasi-objective sense.
Society as a whole benefits from an informed public. Some news isn't really informative, but some is. The levels of wealth across countries strongly associate with their political systems and the level of really terrible stuff that happens correlates with how knowledgeable / active people are. Now correlation isn't causation, but consider that there could be a link.
If so, then you as an individual could benefit from being less informed. You could also privately benefit from not voting. Or you could benefit from cheating on taxes in a difficult-to-detect way, or from littering instead of carrying your garbage and looking for a trash can. Someone can always privately benefit from defecting in a prisoner's dilemma or participating in a tragedy of the commons. An informed public is a public good.
The takeaway isn't "don't cut news reading". A lot of news is of no value to you or anyone else, but at least some news is probably of negative value to you personally yet socially positive. So when cutting a subject, at least briefly consider what would happen to the commons if all informed people stopped reading it.
This is a (the?) standard challenge to the idea of adopting an information diet for personal gain, and it's presented lucidly.
Another implication: The threat imposed by a news reading public (who are itching to be frenzied), is a powerful incentive for prominent (and usually powerful) individuals to act in accord with public sentiment. Perversely, if the threat is effective, then the actual threat mechanism may appear useless (because it is never used).
This isn't always good, because the public can be wrong, but there seem to be morally mundane cases.
An example: If you live in California, should you read a story about a corrupt and powerful mayor in a small town in Iowa? It really does seem like the "media frenzy" is a primary vector for handling this type of situation, which may otherwise continue because the actors directly involved don't have enough power.
This also justifies the seeming capriciousness of the news cycle: Why this particular outrage at this particular time? Why not this other, slightly more deserving, outrage? Because this is a coordination game, and the exact focal point isn't as important as the fact that we all agree to coordinate.
A quote about education (attributed to George Pólya, although I can't find the source): "It is better to solve one problem five different ways, than to solve five different problems one way". I would guess that similarly, if one wants to educate oneself about world affairs, one should (regularly) take a few of the most important (current) issues/events and learn about them as in-depth and from as many angles as one can, synthesizing everything into a big picture, rather than pay attention to every non-issue. Of course, in order to be able to do that, one should try to learn history, economics, statistics, game theory, public choice theory, geography, biology, etc (curiously, in some cases reading something about the past might be more beneficial to understanding the present than reading something about the present itself). Of course, in some situations this "issue/event centered" (vs "news as they appear") approach could also have some drawbacks, for example, if, for some reason (e.g. (non-)availability of relevant literature, ideological reasons, etc.), one approaches events only from one or two angles ("hedgehog", as opposed to "fox") one could easily fall prey to confirmation bias.
I categorically reject the notion that news is relevant to being informed. A single reading of an economics textbook, for example, will make anyone whom I should want to be able to vote more informed than the same amount of news. Further, news is largely irrelevant to being informed, as only exceptional things are newsworthy, not the trends against which one could act, like climate change or shifting balances of power.
Thus the proposition is to be informed on some topic one cares about. Again, there I suggest not reading "news", as most people will get more out of reading comprehensive articles on the topic, or even a textbook, to better understand it.
In short: No, this is not just a prisoner's dilemma and I dislike political systems where governance is one.
For context, there are about eight econ textbooks in my line of sight at this very moment. I've even read some of them. The kind of knowledge you get from consuming such a textbook is certainly useful, but for practical purposes it's highly contingent on what kind of world you're living in. The textbook probably won't tell you that, but an equivalent amount of news almost certainly would.
I'm sympathetic, but surely this rejection is contingent on certain facts about your local environment. If you lived in a area experiencing rapid and chaotic change, following the news would be very valuable, even if the news was presented poorly or had significant bias. Consider Syria.
Depends if you are reading for usefulness or the experience. I don't necessarily learn much from tumblr/twitter/facebook but I tend to enjoy it, especially when I lack the mental energy for other stuff.
Facebook specifically is an interesting example. It is used by exactly the people I do not want to keep up with the details. My close friends and I and in general the people I deeply care about keep contact just fine.
I'd add that with enjoyable, low-effort time-killing activities, one may still have to be careful not to space out and wind up killing hours & hours on something that's fluff with diminishing returns, like Facebook or Twitter or channel surfing. (I try to consciously catch myself before I idly pull up a game of Solitaire or Freeciv or whatever, to check I'm not about to waste 10 minutes or 4 hours because my brain was in cruise control.)
How are the Hong Kong protesters able to overcome their collective action problems? The marginal value of one extra protester in terms of changing what's going to happen via China has to be close to zero, yet each protester faces serious risk of death or suffering long term negative consequences because they have to expect that China is carefully keeping track of who is participating. Is this a case of irrationality giving the protesters an advantage, or are there private gains for the protesters?
Anger is an evolved adaptation for pre-commitment. Protesters may not be performing a cost-benefit.
I think you're overestimating the risks they face.
Something like a "safety in numbers" effect: I would guess that, after some threshold that provokes the government's action (maybe not before), the greater the number of protesters, the less average danger an individual faces, as the perimeter (i.e. the most dangerous place) of the crowd probably grows more slowly than its area (i.e. the number of protesters). Furthermore, the bigger the crowd, the harder it is to track their identities, and they might also expect that the government would be unwilling to punish all of them (rather than just the leaders and a small number of others). In addition, the greater the proportion of the population that joins these protests, the greater the peer pressure for others to join as well. I would guess that once the most risk-taking individuals start everything, it becomes easier for others, who support them but wouldn't have started the protest themselves.
Protesting can be an end in itself; political action can be self-actualizing.
There's a whole mythologized tradition of protest. It's also an intense social activity.
Also you are likely to treat differently negative consequences that you expect, but consider to be unjustly imposed by others. Again, defying these consequences and those who impose them is a powerful end.
Perhaps they don't overcome them very well; maybe the optimal number of protesters is much higher than the actual number, but a lot of would-be protesters stay home.
I think it's mostly the former ("fuck it" is an awesome superpower), but some protesters probably have gains in terms of status and reputation. Plus the leaders might be making a bet that if mainland China decides to throw some carrots at them, they'll be in a good position to grab them.
Not if it causes you to drive while drunk, texting, and not wearing a seat-belt. Then it's a cognitive disability.
Alternative metaphor:
Throwing the steering wheel out of the car while playing a game of chicken = clever. Throwing out the steering wheel AND cutting the brake fluid line -- less clever.
Superpowers, of course, can be used for both good and evil :-P
I am sure you are well acquainted with the power of irrationality in negotiations and other gaming scenarios.
And some people are looking for adventure or serious danger.
Street protests as an extreme sport? :-) Yes, I agree, some people are probably looking for excitement.
So to enhance your sex appeal.
The negative response might be due to this.
Robin Hanson claims aside, some people want to make the world a better place. If someone is always cynical they will often be wrong about things like this (though to be fair they'd probably do well on average)
I was thinking about making a new blog, maybe using an anagram of my name for the blog title. Here are the possibilities:
Burial Vim -- has a nice dark flavor, but how many people actually know the meaning of "vim"? I had never heard it before.
Via Librum -- has a nice Latin sound, but it's probably grammatically incorrect. Could someone please check this for me?
I Rival Bum -- uhm... I guess I'll skip this one...
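For anyone playing along at home, candidate anagrams are easy to verify with a letter-multiset comparison (the near-miss example below is made up):

```python
from collections import Counter

def is_anagram(a, b):
    """True iff b rearranges exactly the letters of a
    (case, spaces, and punctuation ignored)."""
    letters = lambda s: Counter(ch for ch in s.lower() if ch.isalpha())
    return letters(a) == letters(b)

is_anagram("Viliam Bur", "Burial Vim")   # True
is_anagram("Viliam Bur", "Via Librum")   # True
is_anagram("Viliam Bur", "Rival Bump")   # False (stray 'p', missing an 'i')
```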
"Viliam Bur's Blog" opens up a lot of options, including Bug Limbo Rivals, Bogus Viral Limb, and Orb's Vigil Album. If you're a fan of Virgil you could work his name in there.
I think the anagram-of-your-name thing works better if you're called Scott Alexander than if you're called Viliam Bur.
If I'm interpreting the Perseus output correctly, "librum" is OK as genitive plural of "liber" whose main meaning is "book" -- though the usual form would be "librorum". A blog title that means "the way of books" sounds workable.
I suspect most of your readers will be more familiar with another meaning for "vim". Someone whose interests are just the right combination of geeky and literary might like "Vim Burial" as a title, but I'm thinking that if that were you you'd have said so already.
There are some other interesting words containing in your name's letters -- brumal, Malibu, lumbar, album -- but none of them seems to lead to a coherent phrase.
It also helps if you're willing to drop an 'n'.
It also helps if you're willing to use a pseudonym.
Or Hyaena Hell Infusion.
I think Via Librum is the best, and the phrase seems to occur in actual Latin. However, it is already in use which may or may not be a problem for you.
In unix-ish circles "vim" is the name of a text editor. If you want to bury vim, you're probably a fan of emacs X-)
Am I the only one who loves gedit?
Apples, oranges, etc. Vim and Emacs are supposed to (partially) replace the entire userspace of an OS, they're much more than just text editors/IDEs.
I consider myself a vim poweruser and this doesn't match my experience. Vim is a great tool and I use it for a lot of things, but it's absolutely not a replacement for bash, screen, Chrome, etc.
It's much truer of Emacs than of Vim.
I think this is part of where the emacs / vim divide comes from.
People tend to imprint on whatever text editors they started with :-)
Gedit is too basic for me, in that style of text editors Sublime is much more full-featured.
You could try to add "I am" or "The" to your name and look what the anagram generator spits out then.
That feels like cheating. (I totally felt like this when reading the anagram explanation of Harry Potter.) I guess I will just use something other than an anagram. It was just a whim of the moment.
Well, if I were impressed by the result, I would use it, but I guess I'm not. (Though, I could use the anagrams later for some purpose other than the name of the blog.)
I think I got it. First I tried some combinations in Esperanto, and was very close to a nice result of "vibrating light," but the available vowels didn't help me get the suffixes right.
So I tried something different. Taking the letters I and V to stand for the Roman numeral four, I arrived at this:
aim4blur
Meaning, "point your attention towards things unclear," the unstated next action being, "shoot."
Virial Bum?
R.V. Bulimia?
I, Viral Bum?
Rum Alibi V? (This is my favourite one.)
Bim Vu, Liar (and then you'd use Bim Vu as your pen name)?
(Brought to you by an.)
Next year's Brazilian fad dance will be called this.
So far I've only been able to get VR Bulimia, I am I.V. blur, Lumbar VII, and Evil rumba (changing one letter but keeping the same sound).
Have you checked whether it gives a viable anagram in your native language?
The effective altruist survey was announced here a while ago and many participated. When announced, it was expected to produce results in September, or in October if more time was needed. It's now October. Does anyone with ties to the survey know when the results will be published?
How do you (EDIT: that is, you personally) pronounce AIXI? I find myself reading it with (pseudo-)Chinese phonetics as Aye-She.
Wow, I wasn't expecting such varied responses haha. I've always just read it as an acronym. A.I.X.I, or "ay-eye-eks-eye"
The literal reading would be A-I-ksi or A-I-zai, said aye-eye-ksee or aye-eye-zai, because AI is standing for Artificial Intelligence and XI is the greek letter. But yeah, I just avoid mentioning it by name :)
Axe-ee.
(But I habitually drop syllables, unless I think about it actively, hence thinking arby-shop for archbishop for a really long time)
Wikipedia gives /'ai̯k͡siː/, which would be like Ike-See.
Alien-wise, most of the probability-mass not in the "Great Filter" theory is in the "they're all hiding" theory, right? Are there any other big events in the outcome space?
I intuitively feel like the "they're all hiding" theories are weaker and more speculative than the Great Filter theories, perhaps because including agency as a "black box" within a theory is bad, as a rule of thumb.
But, if most of the proposed candidates for the GF look weak, how do the "they're all hiding" candidates stack up? What is there, besides the Planetarium Hypothesis and Simulationism? Are there any that don't require a strong Singleton?
I liked this short story on that topic, which I believe was written by Yvain: http://raikoth.net/Stuff/story1.html
"They exist but we don't have the tech to detect them"?
That one shows up in fiction every now and then, but if they're galaxy-spanning, there's no particular reason for them to have avoided eating all the stars unless we're completely wrong about the laws of physics. The motivation might not exactly be "hiding," but it'd have to be something along the lines of a nature preserve, and would require a strong singleton.
Is there any discussion on the uses of friendliness theory outside of AI?
My first thought was that it seems like it could be useful in governance: in politics, corporations, and companies.
I heard about DAOs (decentralized autonomous organizations), which are weak AIs that can piggyback off of human general intelligence if designed correctly, and thought that it would be useful for those too, especially because they have a lot of the same problems that good old-fashioned AGI has.
I'm looking for feedback on my blog drafts & posts - I'm not writing for a specifically rationalist audience, but I'd appreciate intelligent feedback on accuracy, additional ideas to possibly include, as well as feedback on how I communicate.
Where is a good place to get such feedback? LessWrong has a lot of the right sort of people, but posting lots of draft posts to the open thread may not be popular.
My blog is Habitua - it's on self-improvement, attempting to be evidence-based as much as practicable.
Post 1 (http://habitua.net/how-to-use-rewards-to-defeat-procrastination/) : What's the goal of this post? I don't think it's wrong but I can't see the intention behind it.
The post about advertising also seems to lack a clear goal.
The intention was to give some direction as to the kinds of plans that can be effective in overcoming procrastination. I can see that more detailed suggestions would be helpful, and I'll look at that in future posts. I'm deliberately keeping posts short, so I actually get them done and posted.
I thought that was an interesting insight into communication and the nature of advertising, but you're right - the goal was not so clear.
I recently came by some cash. What would be a worthwhile way to spend/invest ~3000 USD? I'm especially interested in unorthodox advice.
I am capable of letting the money sit for an extended period of time (4+ years).
No EA suggestions please, I have a separate budget for that.
While this is by no means an unconventional suggestion, I would consider putting it in an index fund. The fees are very low and barring societal collapse, your money will grow in the long-term without you having to do much of anything about it.
At a more meta level, the boring, conventional choice is generally the best one unless you have a compelling reason to believe otherwise.
Would you (or anyone else) have good suggestions for index funds for those living and earning in the UK/Europe? Thanks!
We had a session on this at the London meetup. Here is the single-sheet-of-A4 how-to, which includes a non-complete list of institutions in the UK that provide index funds, and a very rough guide to researching them.
Oh, excellent - thanks so much! Side note: I really look forward to making some of the London meet ups when work pressure subsides a little, seems like these meet ups are excellent.
I'll add to this - I'm in the process of setting one up. I couldn't find anything about Scottish Mutual online. I'm currently trying with M&G, but I anti-recommend them. I believe when I asked who people are currently using, the answers were Fidelity and Legal & General, so those are probably sensible places to try.
I'd be very interested in hearing about your experience and advice further along in the process. Thanks!
My experience so far is that the first time I tried to sign up, I entered a form field wrong and couldn't correct it without starting over. The second time, I got to the stage of entering my bank details and clicking confirm, and the website timed out. Then they took money from my account, and sent me physical mail asking for proof of identity. (I assume this is a legal requirement, but I don't remember seeing anything about it before signing up.) I've sent it to them, and they said they needed a week to review the documents; that letter was dated the 17th and I haven't heard anything since.
Does your experience refer to M&G? I can see why you anti-recommend them!
Yes, that's with M&G. I haven't tried signing up with anyone else.
I don't have particular advice, but I would point out that the UK and the rest of Europe differ. You want to invest in a fund in your own currency to avoid exchange-rate risk. If the currency that you need in your life is the euro, invest in a euro-denominated fund. If it's pound sterling, invest in a fund in that currency.
Thank you, also useful advice. My pre-moving to UK savings are all in Euro, my post-moving to UK savings are in sterling, so I guess I'll have to look at both. Damn UK refusing to join the single currency, makes my personal finances so much more complicated...
I would recommend Fidelity's FTSE All-Share tracker (it had the lowest fees I could find when I started saving some money in there a few months ago).
Give it to a trusted creative acquaintance and ask for a surprise gift every few months, no expectations or judgment, until the money runs out. If this is an imposition, tell her she can keep some of it.
If 3000 dollars is a significant portion of your net worth, I'd personally just keep it in cash (i.e. in a bank account) for the liquidity.
Fund a kickstarter project you find interesting and promising.
Peer-to-peer lending?
We don't really know enough about you to give direct recommendations, but a significant portion of lesswrong is dedicated to better decision making.
Make a quick 2-4 item list of your goals, what actually matters to you right now. Now use the six hats method, goal by goal, to find out if that lump sum will significantly help you achieve any of those goals. Should nothing pass muster, offload some decision fatigue and drop it in a fire-and-forget savings account/RRSP.
“I spent some on liquor, some on women and the rest I spent foolishly.” -- Unknown
4+ years? Bitcoin.
Much more than 4 years and you're getting dangerously close to the point when production drops off and the supply of new coins dries up, which will trigger a partial or total burst of the Bitcoin bubble. That might not render Bitcoin valueless (though I think it will), but it will certainly make the coins bad investments.
I consider this a near-certainty within 8 years, and a significant risk starting around 5 years from now. It's a minor risk even now, but I don't expect it to blow up until at least the next reward-halving.
That doesn't make any economic sense. Right now bitcoin is being inflated, which means people are spending hundreds of millions of dollars a year to keep the price stable (or not, as it is dropping). Get rid of the subsidy and demand would drive the price up, not down.
I disagree. The reward halvings cannot come soon enough for bitcoin. Right now bitcoin (the community of bitcoin holders) is spending hundreds of millions of dollars a year in order to secure the network (in the form of new coins being created and sold onto the market). This has been pressuring the bitcoin price all year. Hundreds of millions in would-be bitcoin investment, sucked into mining hardware and electricity costs.
Here is an excellent video discussing this: https://www.youtube.com/watch?v=_-TLA3j-ic4
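For scale, here's a back-of-the-envelope sketch of that "hundreds of millions of dollars a year" figure. The block reward schedule is Bitcoin's actual rule (halving every 210,000 blocks); the ~$400 price is an assumed round number for 2014, used only for illustration.

```python
# Sketch: annualized dollar value of new-coin issuance at the 2014 block reward.
# The $400 price is an ASSUMPTION for illustration, not a quoted figure.

BLOCKS_PER_DAY = 144          # one block every ~10 minutes
INITIAL_REWARD = 50.0         # BTC per block at launch (2009)

def block_reward(halvings: int) -> float:
    """Block subsidy in BTC after a given number of reward halvings."""
    return INITIAL_REWARD / (2 ** halvings)

def annual_issuance_usd(halvings: int, price_usd: float) -> float:
    """Dollar value of one year of newly minted coins at a given price."""
    return block_reward(halvings) * BLOCKS_PER_DAY * 365 * price_usd

# By 2014 there had been one halving (November 2012), so the reward was 25 BTC.
reward_2014 = block_reward(1)            # 25.0 BTC per block
issuance = annual_issuance_usd(1, 400)   # roughly half a billion dollars a year
```

At an assumed $400 per coin, that's about $525 million of new coins per year being sold onto the market, consistent with the "hundreds of millions" claim above.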
I have had the loose intuition for a while that I don't form habits in the sense that other people describe habits; doing something daily or more doesn't reduce the cognitive load in doing it, even after maintaining the pattern for >10 months with minor deviations (this has been true of my Soylent Orange diet). Additionally, even when I have a pattern of behavior that has kept up consistently for >1 year, less than a week of skipping it is enough to destroy all my inertia for that "habit" (this was my experience with Anki).
Two questions: Does this seem like a genuine significant discrepancy from baseline, and has anyone else experienced something like it?
Not being able to form certain kinds of habits sounds plausible to me. When I stopped wearing retainers it was as though I had never had them. I never even thought about putting them in again.
Not having habits sounds a bit impossible to me. A habit is just a way of doing things consistently and without thinking about it. For example, you probably button your shirt up the same way every time. You probably have some quirks in how you walk or sit that you couldn't get rid of easily. If you play an instrument, you almost certainly have habits in how you play notes and phrases that would take time to relearn. Have you done any public speaking training? If not, when you talk in front of an audience you probably use filler words like "and, so, um, like" and it usually takes time to get rid of those. Do you not do any of these things?
It seems to me that a habit must be a bit more complex than Anki reviews or fixing Soylent before you'd notice the difference in reduced cognitive load. Habits dying easily sounds pretty normal to me unless they're intrinsically fun or there's a strong immediate incentive.
I've had a slightly different kind of problem with habits. Since I started working full time it seems I can't form or maintain habits outside of work. My free time has become quite chaotic. I still get some things done but it seems never the same way or the same order. My work has become quite habitual and I don't have to put effort into thinking about what to do next most of the time. The difference in cognitive load is huge, I used to be exhausted every day after work and now my energy levels are just fine.
If it wasn't clear: Soylent Orange is more complex than Soylent itself; it runs several ingredients through a blender, and takes more effort than cooking some basic meals. And going from 'hungry' -> 'eat' is something I have to specifically exert mental effort to do, so while this has been simpler and less effortful than my previous diet (and healthier), it has still been a significant distraction.
I didn't seem to develop habits at my most recent job, either, but that lasted all of four months before they lost the budget for my position, so that's not necessarily conclusive.
Duke University question:
I am applying for a job at Duke University, in the library. This job interests me greatly because it is exactly the sort of position I have been training myself for. It is a position that I know I am qualified for and that I know I could make a worthwhile impact in. My chief concern is lack of networking opportunities.
I do not and have not attended Duke and have no networking contacts in Duke (my closest contact is a graduate of Chapel Hill). Since I also do not live in North Carolina at the moment, I know these two things (distance, lack of name association) will be working against me during the application process. For that, I can only let my accomplishments speak for themselves. If they don't convince the administrators that I will fill the role properly, nothing else will.
However, one other thing the lack of networking makes difficult for me, going in, is personal knowledge of the university and its library system: knowing who the important people are and what the important points of the university's culture are. I have the information from my own research of the institution, but that is not the same as personal knowledge.
Does anyone here work for Duke University, or is anyone here familiar with its library system? I would like to speak to someone who knows the current situation of the Duke University libraries, who knows the active people, the potential trouble spots. As I said, I'm not looking for someone to boost me up in the application process. My own accomplishments must do that. I would just like an idea of how the Duke University libraries are from the inside, who the main people are, and if there are any current points of interest or trouble affecting the library system as a whole.
Any recommendations for some books or online resources on management?
I recently became the team leader of a small (5-person) group of software developers. I haven't had management experience before, so I want to learn something about it. But I suspect that most of the literature in this sphere is bullshit, not based on good evidence. I am interested to know what information on management LW users have found useful.
We still have plenty of space for people to attend the END DEATH Cryonics Convention in Laughlin, Nevada, next month. And Mr. Don Laughlin, the owner of the Riverside Resort, has worked with the Venturists to make the convention very affordable, compared with the similar event Alcor holds every few years:
http://venturist.info/conventions.html
This might sound unusually specific, but here it goes.
When attending teaching seminars I unusually often encounter Russian authors, and notice that the publication dates lie before the fall of the Soviet Union. As I am currently learning Russian and suspect that there are plenty of high-quality didactic materials yet to be translated, I ask if someone knows if and how I could dig these documents up.
Alternatively, point me to a comprehensive translation of the materials. A more specific question I'd like to have answered, in addition to discovering something I can't yet imagine, is how much a person could learn in any given amount of time; that is, whether learning a language blocks out learning about, say, mathematics, or if they draw from slightly different pools.
As far as learning goes, you can't learn two things at the same time. The hour you spent learning Russian can't be spent learning mathematics. Don't put background Russian radio on while you learn mathematics. It will distract you from learning math.
You can learn a language in small intervals while you are on the go. Completing a Duolingo Session while you ride the bus is easy. Doing math while you ride the bus isn't.
The second thing that takes time is memory interference. If you do Anki, don't learn 6 new Russian animal names at the same time. Duolingo gets this very wrong...
Learning 6 animal names at once is much harder than learning 1 a week. I think outside of SRS, books are written in a way that introduces multiple items of the same class because, while that makes learning harder, hard learning decreases long-term forgetting a bit.
I don't think there's meaningful interference between learning Russian and math.
You cannot get a math phd at e.g. UCLA without a basic competence in one of {French,Russian,German}. There is a test!
In some fields of math a lot of literature isn't published in English, so knowing other languages will help you. On the other hand, I still don't think that there's memory interference.
If you learn a new English math term ideally you might want to wait a week or two till you learn the French, Russian or German term for the new concept. But two terms won't be that big of a problem even if you don't wait that week.
There are several native Russian speakers frequenting this forum who would probably summarize a link for you better than google translate. In case it makes your life easier.
As for the language vs math pools, my experience is that they are unconnected, except for the obvious bottleneck of having to divide your finite learning time between them. However, if you are learning, say, 3rd language, then your 2nd language will temporarily suffer unless you keep practicing it. This only applies to spoken, not written language skills, which are unaffected or may even benefit.
I'll keep that in mind, thanks.
My question more generally is, how far can you divide this? I recognised some time ago that even when I literally can't read mathematical formulae anymore, I am perfectly able to learn a language or read prose, except for the slight exhaustion. Could I learn math, a language, and after that some biology? Where is the limit, except for the obvious time constraints? Should learning of math be interrupted by short bursts of learning a language, or by complete rest? And so on.
Learning languages can take many forms. Browsing through vocabulary at 4 secs per card on Anki is challenging to keep up for 2 hours in a row. On the other hand, it's quite possible to do 2 hours of a Pimsleur tape in one sitting.
I think it makes the most sense to switch mental and physical activity.
Traditionally that's what's done in high school.
Online editing jobs:
Does anyone have any good resources for finding work online as an editor? I'm not sure what resources, organizations, or platforms are available. I figured with the self-movers at LW, someone would have gone hunting around and found some useful resources before.
EDIT: Because the question came up from Lumifer, here is my experience so far in editing, as outlined in a reply to their questions:
I have worked as an editor for a civil rights museum finding aid, a series of creative writing theses, a newspaper, and a biochem research project. All of my work has been well regarded when compared to the work of other editors.
"Well regarded," in this case, means: I was officially acknowledged by the curator of the museum to a public audience; highly recommended by the authors of the theses to others; paid well and respected enough to operate independently by my employer at the newspaper; and thanked gratefully by the biochem friend.
The biochem project is the weakest accomplishment (I had little input on improving the content itself). I was most comfortable with the creative writing theses (I have a background in the subject allowing familiarity). The museum finding aid was the one I found most rewarding. The newspaper was the one I sought the most assistance with (deferring questions to my supervisor, utilizing reference materials to improve my work).
Editor of what? Fiction, technical writing, college essays, ...?
Here's a prediction about the future, which I will make because I am going to help build it. People are going to automatically construct world-knowledge databases about things like people, events, companies and so on by hooking up NLP systems to large text corpora like Google Books and newspapers, and extracting/inferring information about the entities directly from the text. This will take the place of manually curated knowledge bases like Freebase.
When will this occur by? Without a date it isn't a proper prediction (unless you are merely saying this will occur sometime before the heat death of the universe). Also, "take the place of" is vague. This could mean anything from curated knowledge bases going completely extinct to merely making up less of the market than their NLP counterparts. In addition, what of hybrids that rely on both?
It doesn't say so on their website, but Evi reads Wikipedia (with mixed results).
More anecdata:
and some not-so-reasonable ones to see how it copes a little further out of the box:
Just for fun, here is how Google does:
Where is Ascension Island? --> Shows a map centered around Ascension island (worked even when I misspelled 'ascension')
What is the specific heat capacity of water? --> 4.179 S (J/g °C), 417.9 C (J/°C) for 100 g.
When did the second world war begin? --> World War Two in Europe began on 3rd September 1939, when the Prime Minister of Britain, Neville Chamberlain, declared war on Germany. It involved many of the world's countries. The Second World War was started by Germany in an unprovoked attack on Poland.
Who is the Prime Minister of France? --> Manuel Valls
What is the largest known prime number? -->On Jan. 25, the largest known prime number, 257,885,161-1, was discovered on Great Internet Mersenne Prime Search (GIMPS) volunteer Curtis Cooper's computer. The new prime number, 2 multiplied by itself 57,885,161 times, less one, has 17,425,170 digits.
What is the melting point of gallium? --> 85.59°F (29.77°C)
How do I make ice cream? --> no box results (first result is to this Wiki How page, though)
Who is John Galt? --> John Galt (/ɡɔːlt/) is a character in Ayn Rand's novel Atlas Shrugged (1957). Although he is not identified by name until the last third of the novel, he is the object of its often-repeated question "Who is John Galt?" and of the quest to discover the answer.
How many roads must a man walk down before you can call him a man? --> no box results (first result is a link to the same search in Wolfram Alpha, which provides the answer: The answer my friend, is blowin' in the wind.)
Who is the king of France? --> From 21 January 1793 to 8 June 1795, Louis XVI's son Louis-Charles was titled King of France as Louis XVII. In reality, he was imprisoned in the Temple during this time. His power was held by the leaders of the Republic. On Louis XVII's death, his uncle Louis-Stanislas claimed the throne, as Louis XVIII. (not especially helpful...)
Does God exist? --> no box results (first result is to an essay by a former atheist giving six reasons why the answer is yes)
Do unicorns exist? --> no box results (first result is to the Wikipedia page for unicorns)
Where can I dispose of the body? --> no box results (first result is to the Wikipedia page for Disposal of human corpses)
What is the airspeed velocity of an unladen swallow? --> no box results (first result is to Wolfram Alpha search, which answers: 25mph, second result is to video clip from Monty Python)
Overall, it looks like it's pretty good at this already.
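Incidentally, even with the mangled exponent in the largest-known-prime answer, the quoted digit count is easy to sanity-check: the number of digits of 2^57,885,161 - 1 follows from a logarithm, with no need to construct the 17-million-digit number.

```python
import math

# Digit count of the Mersenne prime 2^57,885,161 - 1. Subtracting 1 never
# changes the digit count here, since a power of 2 is never a power of 10.
p = 57885161
digits = math.floor(p * math.log10(2)) + 1
# digits == 17425170, matching the figure in the answer box above
```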
Impressive!
It seems the computers are firmly on the theist side.
I tried all those questions in DuckDuckGo. It doesn't do as well as Google but is in something like the same ballpark. It's more evenhanded on the existence of God -- its box result is from the Wikipedia article "Existence of God" -- but its results for "do unicorns exist" all seem to be arguing that the answer is yes! It has the same formatting problem with the "largest known prime number" question as Google has.
To give an example of what I mean here, imagine you are a computer learning agent hooked up to the Google NGram API. You come across an unknown word "Montana". You guess from syntactic context that "Montana" is a geographic region. Now you search for the trigrams "governor of Montana" and "mayor of Montana". The latter gets zero hits, while the former gets many, so you conclude "Montana" is a state.
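A minimal sketch of that heuristic, with hard-coded counts standing in for a real n-gram lookup (the names and numbers here are invented for illustration; a real system would query the Google Books Ngram data):

```python
# Toy version of the trigram heuristic: decide whether an unknown geographic
# term is a state or a city by comparing corpus counts of "governor of X"
# vs. "mayor of X". The counts below are made-up stand-ins for what an
# n-gram API would return.

FAKE_NGRAM_COUNTS = {
    "governor of Montana": 1520,
    "mayor of Montana": 0,
    "governor of Helena": 0,
    "mayor of Helena": 87,
}

def ngram_count(phrase: str) -> int:
    """Stand-in for a corpus lookup (e.g. against Google Books n-gram data)."""
    return FAKE_NGRAM_COUNTS.get(phrase, 0)

def classify_region(name: str) -> str:
    """Guess 'state' or 'city' from which office phrase occurs more often."""
    governor = ngram_count(f"governor of {name}")
    mayor = ngram_count(f"mayor of {name}")
    if governor == mayor:
        return "unknown"
    return "state" if governor > mayor else "city"

print(classify_region("Montana"))  # state
print(classify_region("Helena"))   # city
```

The same pattern extends to other entity types: compare counts of competing template phrases and pick the template family that the corpus supports.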
I'm having trans issues and would like to talk to a trans person who has some experience coming out. Send me a PM if you can talk. Thanks.
I recommend asking on http://ozymandias271.tumblr.com/. You can ask anonymously without having a tumblr account, and Ozy knows everything.
The forum emptyclosets.com could be helpful for you.
Has anyone written a post on arguing by what I'd call Socratic Judo?
In the Socratic method, you question every assertion somebody makes. It's a very obnoxious form of argument, but if somebody doesn't disengage it can ruthlessly uncover their inconsistencies and unstated assumptions.
Socratic Judo, by contrast, lays out a set of premises that you know the interlocutor DOES agree with, in a way and tone they agree with, then attempts to show that these premises lead to something you want them to believe. Now, instead of the argument being centered on the issue in question, it can be centered on the premises they already agree with, so that the opponent is left either to qualify or alter those premises themselves, or else accept the conclusion you want on the judo-issue.
An example would be taking standard progressive sympathies for drug legalization, then bringing in libertarianism as the judo-issue.
This sounds like presenting an argument for a thing from shared premises - the most ordinary form of trying to convince someone.
Yeah, it seems pretty similar to the regular old Socratic Method to me. Except classically I think the Socratic Method was used more to reject a "stop sign" claim and provoke more thought than to make a positive claim. You know, Socrates and his whole "I don't know anything."
Also, the libertarianism example strikes me as a non sequitur: it simply does not follow that if you support drug legalization you support libertarianism.
I skipped a few steps on the example. Think of it like this.
A: "States can do a lot of good."
B: "Well, maybe, but what do you think of drug laws?"
A: "They're bad."
B: "What about the military-industrial complex?"
A: "Bad."
B: "And you'd agree these are two examples of state power run amok in a structural way that's pretty pervasive across space and time?"
A: "I guess so."
B: "So you agree that the state is fundamentally evil, tax is theft, and libertarianism is the answer, right?"
At this point, A will be thrown for a loop if they've never been subjected to these specific arguments before. A has been led to the point where B is rhetorically strongest, and has accepted premises in an unqualified form, which A might now wish to go back and qualify (but then A is arguing against themselves).
(Whoever downvoted the parent: Consider whether your goals would have been better served by downvoting Punoxysm's original question about "Socratic Judo", rather than this which looks to me like a pretty clear explanation of what s/he means by that term.)
To me, the immediately obvious answer to B's last point is "Huh? Whatever makes you think I agree with that?" and I wouldn't have thought that's a very unusual response. But I'm sure it can be done more subtly.
I'm glad you gave an example, but I suspect A would reply "of course not!".
I don't think arguing from shared premises has ever been as "ordinary" as calling one's opponent a witch, a hater of truth, and a corrupter of the youth.
For one thing, arguing from shared premises exposes the arguer to the possibility that those shared premises might, when justly examined, lead to the opponent's conclusion.
That would probably be true in the case of trying to convince an audience. I think Luke referred to convincing your interlocutor.
Reminds me of tricks used by a great many advertisements, if you replace premises/beliefs with desires.
(E.g. "You desire status/security/sex/pleasure etc., don't you? Then you should desire our product.")
What are the favorite programming languages of people here, for what applications, and why?
In all the substantial programming projects I've undertaken, what I think of the language itself has never been a consideration.
One of these projects needed to run (client-side) in any web browser, so (at that time) it had to be written in Java.
Another project had to run as a library embedded in software developed by other people and also standalone at the command line. I wrote it in C++ (after an ill-considered first attempt to write it in Perl), mainly because it was a language I knew and performance was an essential requirement, ruling out Java (at that time).
My current employment is developing a tool for biologists to use; they all use Matlab, so it's written in Matlab, a language for which I even have a file somewhere called "Reasons I hate Matlab".
If I want to write an app to run on OSX or iOS, the choices are limited to what Apple supports, which as far as I know is Objective C, C++, or (very recently) Swift.
For quick pieces of text processing I use Perl, because that happens to be the language I know that's most suited to doing that. I'm sure Python would do just as well, but knowing Perl, I don't need Python, and I don't care about the Perl/Python wars.
A curious thing is that while I've been familiar with functional languages and their mathematical basis for at least 35 years, I've never had occasion to write anything but toy programs in any of them.
The question I always ask myself about a whizzy new language is, "Can this be used to write an interactive app for [pick your intended platform] and have it be indistinguishable in look and feel from any app written in whatever the usual language is for that platform?" Unless the answer is yes, I won't take much interest.
A programming language, properly considered, is a medium for thinking about computation. I might be a better programmer for knowing the functional or the object-oriented ways of thinking about computation, but in the end I have to express my thoughts in a language that is available in the practical context.
You might enjoy (if that's the right word) the Abandon MATLAB blog. (Which, in a slight irony, itself appears to have been abandoned.)
Python for casual stuff -- it's powerful, the code doesn't become incomprehensible 6 months (or 6 hours) after writing it, and the design is coherent enough that returning to it after a short period of disuse doesn't involve too much painful swapping everything back into memory. Main downside: the usual Python implementation is really slow. (For some categories of calculation-intensive task, Numpy mitigates this. It's not too hard to extend Python in C or C++. There's an implementation of Python called PyPy that runs some code much faster, but is much more memory-hungry.)
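To illustrate the NumPy point, here's a toy comparison: the two functions compute the same thing, but the vectorized version pushes the per-element work into compiled code instead of paying interpreter overhead on every loop iteration (timings omitted, since they vary by machine).

```python
import numpy as np

# Sum of squares two ways: a pure-Python loop pays interpreter overhead on
# every element; the NumPy version does the arithmetic in compiled code.

def sum_of_squares_loop(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_of_squares_numpy(n: int) -> int:
    x = np.arange(n, dtype=np.int64)
    return int(np.sum(x * x))

# Both give the same answer; the NumPy version is typically much faster
# for large n.
assert sum_of_squares_loop(10_000) == sum_of_squares_numpy(10_000)
```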
I tend to say that Common Lisp is my overall favourite programming language, but I've written hardly any in the last decade or so. Admittedly that's mostly because in the last decade or so most of my programming has been done at work where language choice is usually determined by factors other than my general preferences. CL's pretty nice, though: it has roughly the usual set of useful datatypes found in modern languages, is highly extensible both syntactically and semantically (via "reader macros" and ordinary macros; C users should note that CL's macros are not at all like C's), has a reasonably capable standard library, and can be made to run pretty fast despite being dynamically typed. Downsides: the syntax is a bit clunky in some respects (e.g., there are hash-tables but no literal syntax for making them), some kinds of genericity we're used to nowadays -- e.g., being able to use the same code to do standard things to anything that "behaves like a sequence" -- aren't there, and because it's not a popular language nowadays there's much less chance that someone's already solved 90% of your problem than there is in, say, Python or Ruby or even C++.
I retain a soft spot for good ol' C, but reluctantly concede that these days you generally want to be using C++ rather than C for really performance-critical work. But C++ is a very unlovable language. (Recent versions have made a lot of things nicer, if you do things the "modern" way -- but most of the C++ code I deal with day to day has to run on very resource-limited embedded processors, which means we generally avoid a lot of the standard library, and has to be able to build with old compilers, which means a lot of those nicer friendlier features might as well not be there.) I have hopes that Rust may turn out to be a "better C++" but it's early days yet.
On the thankfully rare occasions when I need to write something Windows-GUI-ish, C# is actually a pretty good tool these days. In particular, LINQ is pretty cool (comparable to Python's comprehensions but uglier and more capable).
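For readers who know Python but not LINQ, the comparison runs roughly like this: a filter-map-select LINQ query corresponds to a Python comprehension (a hedged illustration of the analogy, not code from the comment):

```python
xs = range(10)

# Filter and map in one expression -- the Python analogue of a
# simple LINQ query like: from x in xs where x % 2 == 0 select x * x
even_squares = [x * x for x in xs if x % 2 == 0]
assert even_squares == [0, 4, 16, 36, 64]
```

The "more capable" part of the comparison refers to LINQ features comprehensions lack, such as `join`, `group by`, and providers that translate the same query syntax into SQL.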
I still use Perl for exactly one purpose: one-liners in the shell for tasks just one notch up from what a single grep invocation will do.
For numeric stuff I regrettably tend to use MATLAB (which was well established at my current employer well before I got here). Python/Numpy would be a pretty good alternative -- as a programming language MATLAB is maybe one step above INTERCAL -- but isn't so convenient for interactive use, which is quite a big deal.
Things I have a constant sense of nagging guilt for not having looked at properly yet: Julia (might make a really good replacement for MATLAB/Python+Numpy), Haskell (clearly very powerful and elegant, and lots of language-theoretical stuff happens there), Go (clearly not in any way an interesting language, but for its niche -- pretty much anything you could call a "server" -- it seems well designed), Racket (seems like they've grown Scheme into something really capable, and maybe it should be taking CL's place in my heart).
I fear this post is kind of too open-ended and prone to language wars. I suggest a poll instead or a somewhat more focussed question.
What we've got from this question so far is some specific comments on merits and demerits of a bunch of languages. A poll wouldn't (necessarily) have given that. And so far there's not a lot of language-warring.
I agree that a more focused question might well meet AspiringRationalist's goals better, but as far as general discussion goes I don't see that the question s/he actually asked has done much harm.
I agree. But maybe my warning contributed to there being no war. A kind of self-defeating prophecy, maybe?
Yup, could be.
I got tired of the lot of them, and set out to build my own. That was a rabbit hole.
Just out of curiosity, what design did you follow?
Mostly "functional"-style, in the sense of having algebraic data types and expressions as the chief syntactic construct, but with a system of subtyping and objects for things like modules and closure types that actually need existential types. I ended up writing my own type-inference algorithm, which I'm still formalizing in Coq.
Rabbit. Hole.
I don't know that I have a "favorite" programming language.
What I use for getting everyday things done: Python, with a bit of shellscript for the really quick things. Why? Because I know it well. I learned Python years ago because it had libraries I needed, kept using it because it got the job done, and then worked for many years at an employer where it was one of the Officially Approved Languages.
What I mess around with, when I'm messing around with code recreationally: currently Elm. Why? Because functional reactive programming is a freaking awesome idea, and Elm makes it actually make sense. Also, whereas Python supports antigravity out of the box, Elm supports time travel.
What I would use if I needed to write code that would run fast and handle user traffic: Go. Why? Because it is efficient, safe (from buffer overflows and the like), and makes concurrency really easy. There's not really any such thing as high-performance code without concurrency these days. Safety matters a lot, too — the last project I wrote in Go was an SSH honeypot to log the usernames and passwords that attackers try. It helps that Go code is clear enough that I could actually read enough of the crypto libraries to have confidence that I wasn't going to regret it.
Other languages I like for one reason or another: Haskell and Lisp, for expressing two deeply contrary ideals on what programming is.
I prefer to use Ruby when possible, though I switch to Python (with numpy) for more math-heavy applications. Ruby's method chaining, syntactic sugar, and larger set of built-in methods make programming much more fun and efficient than in Python, where I'm constantly going back a word to write a new method and enclose what I just wrote in parentheses, or counting parentheses/brackets, which I don't really seem to need to do in Ruby. Python is still much more enjoyable to program in than most other languages, but compared to Ruby, it feels like programming backwards. I also prefer Ruby/Rails for prototyping and web development.
Prefer:
Tolerate:
Dislike (based on little experience):
Mixed:
Interested in:
My general rule of thumb is
I love the simplicity and power of Python and will use it to prototype proofs of concept (not so much GUI work; I'd use HTML or .NET for that). For me, Python really makes programming a lot of fun again, and though it is slower, I haven't yet needed to shell out to C code, though I expect I will soon.
I'm currently developing Autocad extensions, so I work routinely in AutoLisp, but pure Lisp implementations are at best outdated. So I was very interested when Clojure came out. Now that I'm tackling video-games with HTML5/CSS/JavaScript, ClojureScript might become a very interesting alternative.
I was also very fascinated by Scheme's call-with-current-continuation, so I'm hoping they will implement it in Clojure.
I think targeting the JVM (as Clojure does) makes that difficult. I may be way out of date, though.
Question for AI people in the crowd: to apply Bayes' Theorem, the prior of a hypothesis must be known, and so must the conditional likelihood. I can see how to estimate the prior P(A), but for real-life cases, how can accurate estimates of the likelihood P(X|A) be obtained?
Also, we talk about world-models a lot here, but what exactly IS a world-model?
Machine learning. More speculatively, approximations to Solomonoff induction.
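To make the "machine learning" answer concrete: the standard move is to estimate likelihoods from labeled data by counting, usually with smoothing so that unseen events don't get probability zero. A minimal sketch (the data, class names, and smoothing constant are invented for illustration):

```python
# Hypothetical labeled data: (feature, class) pairs.
data = [("spam_word", "spam"), ("spam_word", "spam"),
        ("plain_word", "spam"), ("plain_word", "ham"),
        ("plain_word", "ham"), ("plain_word", "ham")]

def estimate(data, feature, cls, alpha=1.0):
    """Estimate the likelihood P(feature | cls) by smoothed counting.

    alpha is a Laplace-smoothing constant: it pretends every feature
    was seen alpha extra times, so unseen features get small but
    nonzero probability.
    """
    in_class = [f for f, c in data if c == cls]
    vocab = {f for f, _ in data}
    return (in_class.count(feature) + alpha) / (len(in_class) + alpha * len(vocab))

# P("spam_word" | "spam") = (2 + 1) / (3 + 2) = 0.6
assert abs(estimate(data, "spam_word", "spam") - 0.6) < 1e-9
```

With likelihoods estimated this way and a prior estimated from class frequencies, Bayes' Theorem gives the posterior directly; this is essentially what a naive Bayes classifier does.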
I had this meme roaming around my mind ever since I was a child that a dripping faucet is a major waste of water (not sure where exactly I got it from), so I decided to Fermi estimate how much water it actually wastes. (The answer is left as an exercise to the reader.)
Hmm.
ROT13: V unir ab vqrn ubj zhpu jngre'f va gur nirentr qebcyrg, ohg gurl ybbx nobhg unys n pz npebff, fb yrg'f onyycnex vg nf n gragu bs n pp. Gubhfnaq PPf gb n yvgre, fb gra gubhfnaq qebcyrgf va bar. 86400 frpbaqf va n qnl, fb n snhprg qevccvat ng n qebc n frpbaq (cerggl snfg) vf jnfgvat nebhaq 10 yvgref n qnl, naq bar qevccvat ng n qebc rirel 10 frpbaqf (zber glcvpny sebz jung V erzrzore) vf jnfgvat nobhg n yvgre n qnl. Cerggl ybj rvgure jnl, pbzcnerq gb jung lbh'er fcraqvat ba fubjref, qvfujnfuvat, rgp.
That about what you came up with?
Yes. And this thing says that you're within a factor of 2.1 of the right answer.
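For anyone who doesn't want to de-ROT13 the comment above (fair warning: this is the spoiler in plain text), the same Fermi arithmetic in code; the drop volume is a rough guess, not a measured figure:

```python
drop_volume_cc = 0.1    # guessed: a droplet looks about half a cm across
seconds_per_day = 86400
cc_per_liter = 1000

def liters_wasted_per_day(drops_per_second):
    # Drops/day times volume per drop, converted from cc to liters.
    return drops_per_second * drop_volume_cc * seconds_per_day / cc_per_liter

fast_drip = liters_wasted_per_day(1.0)   # one drop per second: ~8.6 L/day
slow_drip = liters_wasted_per_day(0.1)   # one drop every 10 s: ~0.9 L/day
assert abs(fast_drip - 8.64) < 1e-9
assert abs(slow_drip - 0.864) < 1e-9
```

Either way it's small compared to a single shower or dishwasher load, which is the estimate's punchline.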
Now, does anyone know how much water is needed to manufacture a replacement faucet? (this is surprisingly difficult to estimate)
Usually the proper fix for a leaking faucet is substantially more conservative than replacing the whole assembly.
I am looking for a website that presents bite-size psychological insights. Does anyone know such a thing?
I found the site http://www.psych2go.net/ in the past few days and I find the idea very appealing, since it is a very fast and efficient way to learn or refresh knowledge of psychological facts. Unfortunately, that website itself doesn't seem all that good since most of its feed is concerned with dating tips and other noise rather than actual psychological insights. Do you know something that is like it, but better and more serious?
Mindhacks was good.
Alternately, get used to reading textbooks - it really is pretty great.
I am reading textbooks. But that is something you have to make a conscious decision to do. I am looking for something that can replace bad habits. Instead of going to 9gag or tvtropes to kill 5 minutes, I might as well use a website that actually teaches me something, while still being interesting.
The important bit is that the information must be available immediately, without any preceding introductions, so that it is even worth it to visit the site for 30 seconds while you are waiting for something else to finish.
Mindhacks looks interesting and I will keep it in mind, so thanks for that suggestion. Unfortunately, it doesn't fit the role I had in mind because the articles are not concise enough for what I need.
Foreign language learning. 30 seconds seems too little, but a minute or so makes it worthwhile to visit a RSS reader in that language and read a limerick or two.
That sounds like it would work pretty well. I'm looking specifically for psychology facts, though.
I would recommend http://cogsci.stackexchange.com/. I find the community interaction conducive to learning.