Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Viliam_Bur 21 November 2014 01:30:41PM *  4 points [-]

This stupid bot has almost 20 000 comment karma on Reddit.

I have seen it in action, and sometimes it takes humans a while to recognize it is a bot rather than a passive-aggressive human. Because, well, there are many kinds of humans on the internet.

But this made me think -- maybe we could use "average reddit karma per comment" or something like it as a Turing-test measure. And perhaps we could hold a bot-writing competition, where the participating bots would be released on Reddit, and the winner is the one that collects the most karma in three months.

Of course the rules would have to be a bit more complex. Some bots are useful by being obvious bots, e.g. the Wikipedia bot that replies with summaries of Wikipedia articles to comments containing Wikipedia links. A competition in making useful bots would also be nice, but I would like to focus on bots that seem like (stupid) humans. I'm not sure how to evaluate this.

Maybe the competition could have an additional rule: the authors of the bots try to find other bots on Reddit, and if they find one, they can destroy it by writing a phrase that every bot must obey by self-destructing, such as "BOT, DESTROY YOURSELF!" (That would later become a beautiful meme, I hope.) A bot's total score is the karma it has accumulated until that moment. Authors would be allowed to launch several different instances of their bot code, e.g. in different subreddits, initialized with different data, or just with different random values.
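The scoring rule sketched above could be as simple as the following (the `(timestamp, karma)` representation is an assumption for illustration, not Reddit's actual API):

```python
def bot_score(comments, destroyed_at=None):
    """Total karma a bot accumulated before it was (optionally) destroyed.

    `comments` is a list of (timestamp, karma) pairs; `destroyed_at` is the
    time the self-destruct phrase was issued, or None if the bot survived.
    """
    return sum(karma for t, karma in comments
               if destroyed_at is None or t < destroyed_at)

def karma_per_comment(comments):
    """The proposed Turing-test proxy: average karma per comment."""
    return sum(k for _, k in comments) / len(comments) if comments else 0.0
```

Averaging per comment (rather than summing) would stop a bot from winning purely by posting at high volume.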

Has anyone tried something like this before? What is Reddit's policy toward bots?

Comment author: sixes_and_sevens 21 November 2014 01:03:11PM 5 points [-]

I was writing a Markov text generator yesterday, and happened to have a bunch of corpora made up of Less Wrong comments lying around from a previous toy project. This quickly resulted in the Automated Gwern Comment Generator, and then the Automated sixes_and_sevens Comment Generator.
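For anyone curious, the kind of word-level Markov generator described here can be sketched in a few lines of Python (a minimal reconstruction of the general technique, not the actual code behind AutoGwern):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return dict(chain)

def generate(chain, length=60, seed=None):
    """Random-walk the chain from a random prefix, restarting at dead ends."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = list(rng.choice(list(chain)))
    while len(out) < length:
        followers = chain.get(tuple(out[-order:]))
        if not followers:                   # dead end: jump somewhere new
            out.extend(rng.choice(list(chain)))
            continue
        out.append(rng.choice(followers))
    return " ".join(out[:length])
```

Fed a text file of one user's comments, this reproduces their vocabulary and tics while scrambling the meaning; raising `order` makes the output more locally coherent at the cost of parroting longer verbatim runs.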

Anyone who's ever messed around with text generated from simple Markov processes (or taken the time to read the content of some of their spam messages) will be familiar with the hilarious, and sometimes strangely lucid, garbage they come out with. Here is AutoGwern:

Why is psychopathy not strongly confident. In a slippery slope down to affirm or something. Several common misunderstandings are superior, but I copied from StackExchange running in a new trilemma on the police, there is a very large effect sizes. The DNA studies at the clandestine level.

Plausible yet pseudonymously provided a measure of the role of SATs is pretty darn awful advice, and how they plan to use data from markets that shut down in self-experimentation, then that. The repeating logic. It wouldn't surprise when two women sent me unsolicited PM's asking if we drop the <1m bitcoins Satoshi Nakamoto or La Griffe du Lion. That just means that it.


(I should point out that I'm picking on Gwern because his contribution to LW means I have an especially massive text file with his name on it.)

Here's some AutoMe:

I'll be reading this. Then the restless spirit of Paul Graham sat on my body in some obscure location, Nigel Hawthorne was the hypothetical was that sword-privilege is a response, though in the idea in my life so cognitively exhausting relative to your calibrated sitters. Which should they pick? This seems credible, but I'd be a TV and film from a welfare system.

It's your lucky day. I can start hearing a bicycle. There's even in a position on all users' self-esteem, adjusting it for various other OKCupid users to encourage them, There are a small box of which routinely deals with his Dangerously Ambitious Startup Ideas essay, about how well-trained in deconstructivism.

I've actually a lecture by default?

This is perhaps an unfair question, which you'll only a sixth of the QS-types take in a group to help setting off to the music, OK? A little less fat. It isn't obvious, but I'm actually kind of these selections of the above, Yvain could also happen when I feel like my brain compels me to be the money pump.

I have become weirdly fascinated by these. Although they ditch any comprehensible meaning, they preserve distinctive tics in writing style and vocabulary, and in doing so, they preserve elements of tone and sentiment. Without any meaningful content to try and parse, it's a lot easier to observe what the written style feels like.

On a less introspective note, it's also interesting how dumping out ~300 words preserves characteristics of Less Wrong posts, like edits at the end, or spontaneous patches of Markov-generated rot13.

Also Yvain could happen when I feel like my brain compels me to be the money pump.

Comment author: Viliam_Bur 21 November 2014 01:18:57PM 5 points [-]

This should be generated for every user on their "overview" page. :D

Comment author: therufs 19 November 2014 02:59:16PM 1 point [-]

This hits on the particular question I failed to ask in this case, which was something like "Is there some particular bias I'd be exploiting for fun/profit/improvement?"

Which, of course, raises the question of whether it is rational to exploit biases instead of trying to mitigate them.

Comment author: Viliam_Bur 20 November 2014 09:33:05AM 1 point [-]

I think there is not much hope to remove biases from System 1, so we might as well use them for our benefit. With System 2, let's try to be as unbiased as possible.

More metaphorically, use your thoughts for (unbiased) thinking, but use your emotions for (productive) action. Don't try to use your emotions for thinking, they are not made for that purpose.

Comment author: therufs 17 November 2014 03:54:58AM 1 point [-]

After having heard much about how great a gratitude journal is for one's life, I overcame my sense of impending hokeyness long enough to set up a Google form to journal in and an IFTTT recipe to remind me to do it.

In the interest of full disclosure, it comes to my attention that I did little if any work to figure out where the impression of hokeyness came from and whether it ought to be believed. :/

Comment author: Viliam_Bur 19 November 2014 01:25:09PM *  2 points [-]

Technically, a gratitude journal is an example of selection bias. (A hypothetical impartial observer should make notes about everything, indiscriminately.)

But that's okay, because the purpose of the gratitude journal is not to make impartial observations, but to change yourself. To focus your attention on processes that have proved successful in the past, on resources you have, etc.

Comment author: advancedatheist 18 November 2014 04:35:46AM *  -2 points [-]

I keep wondering when the cryonics community will attract the attention of female social justice warriors (SJW's) because of the perception that wealthy white guys dominate cryonics, even though plenty of middle class men have signed up for cryopreservation as well by using life insurance as the funding mechanism. The stereotype should push the SJW's buttons about inequality, patriarchy and the lack of diversity.

Instead these sorts of women have ignored cryonics so far instead of trying to meddle with it to transform it according to their standards of "social justice." If anything, cryonics acts like "female Kryptonite."

I've also noticed the absence of another sort of women, namely adventuresses. If people believe that lonely white guys with money sign up for cryonics, you would expect to see more young, reasonably attractive women showing up at public gatherings of cryonicists to try to find men they can exploit for a share of the wealth.

So what kind of tipping point in the public's view of cryonics would have to happen to make SJW's and adventuresses notice cryonics as a field for social activism or financial exploitation?

Comment author: Viliam_Bur 19 November 2014 01:20:44PM 6 points [-]

The way you wrote the question is horrible, but here are some things to consider:

  • most people haven't heard about cryonics;
  • most of those who heard about it believe it doesn't work;
  • there are many things way more expensive than cryonics.

Comment author: eli_sennesh 19 November 2014 07:44:58AM 1 point [-]

Because it was linked under "Recent on Rationality Blogs", I figured I'd give it a whirl, and was pretty surprised to find such low-grade reasoning on such a list.

Comment author: Viliam_Bur 19 November 2014 01:13:14PM *  1 point [-]

Even a generally rational person may write an irrational article once in a while. (I am speaking generally here; I haven't read the specific article yet.) To create a list containing only rational articles, someone would have to check each one of them individually, and vote.

Adding such functionality to LW software would be too much work. But maybe it could be done indirectly. We could do the voting on some other website (for example Reddit), and import only the positively-voted links to LW. But this would need a group of people to add new articles, read them, and vote.

Comment author: Dahlen 17 November 2014 06:10:11PM *  4 points [-]

Advice/help needed: how do I study math by doing lots of exercises when there's nobody there to clue me in when I get stuck?

It's a stupid problem, but because of it I've been stuck on embarrassingly simple math since forever, when (considering all the textbooks and resources I have and the length of time I've had it as a goal) I should have been years ahead of my peers. Instead, I'm many years behind. (Truth be told, when performance is tested I'm about the same as my peers. But that's because my peers and I have only struggled for a passing grade. That's not my standard of knowledge. I want to learn everything as thoroughly as possible, to exhaust the textbook as a source of info; I usually do this by writing down the entire textbook, or at least every piece of non-filler info.)

There is a great disparity between the level of math I've been acquainted with during my education, and the level of math at which I can actually do all the exercises effortlessly. In theory by now I'm well into undergraduate calculus and linear algebra. In practice I need to finish a precalculus exercise book (tried and couldn't). While I'm learning math, I constantly oscillate between boredom ("I'm too old for this shit" ; "I've seen this proof tens of times before") and the feeling of getting stuck on a simple matter because of a momentary lack of algebraic insight ("I could solve this in an instant if only I could get rid of that radical"). I've searched for places online where I could get my "homework" questions answered, but they all have rather stringent rules that I must follow to get help, and they'd probably ban me if I abused the forums in question.

This problem has snowballed too much by now. I kept postponing learning calculus (for which I've had the intuitions since before 11th grade when they began teaching it to us) and therefore all of physics (which I'd terribly love to learn in-depth), as well as other fields of math or other disciplines entirely (because my priority list was already topped by something else).

I've considered tutoring, but it's fairly expensive, and my (or my tutor's) schedule wouldn't allow me to get as much tutoring as I would need to - given that I sometimes only have time to study during the night.

Do any LessWrongers have resources for me to get my questions answered? Especially considering that, at least at the beginning until I get the hang of it, I will be posting loads of these. Tens to hundreds in my estimation.

Comment author: Viliam_Bur 19 November 2014 12:59:50PM 6 points [-]

I've considered tutoring, but it's fairly expensive, and my (or my tutor's) schedule wouldn't allow me to get as much tutoring as I would need to - given that I sometimes only have time to study during the night.

In my opinion, what you probably need is some mix of tutoring and therapy -- I don't know if it exists, or if there is a word for it -- someone who would guide you through the specific problem, but also through your thought processes, to discover what you are doing wrong: not just mathematically, but also psychologically. This assumes you would speak your thoughts aloud while solving the problem.

The psychological level is probably more important than the mathematical level, because once the problems with thinking are fixed, you can continue to study maths on your own. But this probably cannot be done without doing the specific mathematical problems, because it's your thought patterns about those math problems that need to be examined and fixed.

I happen to have a background in math and psychology and teaching, so if you would like to try a free Skype lesson or two, send me an e-mail to "viliam AT bur.sk". Don't worry about wasting my time, since it was my idea. (Worst case: we will waste two hours of time and see that it doesn't work this way. Best case: your problem with maths is fixed, I get an interesting professional experience.)

Comment author: selylindi 17 November 2014 10:56:56PM *  12 points [-]

On the "all arguments are soldiers" metaphorical battlefield, I often find myself in a repetition of a particular fight. One person whom I like, generally trust, and so have mentally marked as an Ally, directs me to arguments advanced by one of their Allies. Before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any charitable interpretation of the text, to accept the arguments. And in the contrary case, in a discussion with a person whose judgment I generally do not trust, and whom I have therefore marked as an (ideological) Enemy, it often happens that they direct me to arguments advanced by their own Allies. Again before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any flaw in the presentation of the argument or its application to my discussion, to reject the arguments. In both cases the behavior stems from matters of trust and an unconscious assignment of people to MySide or the OtherSide.

And weirdly enough, I find that that unconscious assignment can be hacked very easily. Consciously deciding that the author is really an Ally (or an Enemy) seems to override the unconscious assignment. So the moment I notice being stuck in Ally-mode or Enemy-mode, it's possible to switch to the other. I don't seem to have a neutral mode. YMMV! I'd be interested in hearing whether it works the same way for other people or not.

For best understanding of a topic, I suspect it might help to read an argument twice, once in Ally-mode to find its strengths and once in Enemy-mode to find its weaknesses.

Comment author: Viliam_Bur 19 November 2014 12:41:16PM *  2 points [-]

Just wondering if it would make sense to consider everyone a Stupid Ally. That is, a good person who is just really really bad at understanding arguments. So the arguments they forward to you are worth examining, but must be examined carefully.

Comment author: sixes_and_sevens 17 November 2014 01:35:27PM *  19 points [-]

Recently I have been thinking about imaginary expertise. It seems remarkably easy for human brains to conflate "I know more about this subject than most people" with "I know a lot about this subject". LessWrongers read widely over many areas, and as a result I think we are more vulnerable to doing this.

It's easy for a legitimate expert to spot imaginary expertise in action, but do principles exist to identify it, both in ourselves and others, if we ourselves aren't experts? Here are a few candidates for spotting imaginary expertise. I invite you to suggest your own.

Rules and Tips vs Principles
At some point, a complex idea from [topic] was distilled down into a simple piece of advice for neonates. One of those neonates took it as gospel, and told all their friends how this advice formed the fundamental basis of [topic]. Examples include "if someone touches their nose, they're lying" and "never end a sentence with a preposition".

If someone offers a rule like this, but can't articulate a principled basis for why it exists, I tend to assume they're an imaginary expert on the subject. If I can't offer a principled basis for any such rule I provide myself, I should probably go away and do some research.

Grandstanding over esoteric terminology
I've noticed that, when addressing a lay audience, experts in fields I'm familiar with rarely invoke esoteric terminology unless they have to. Imaginary experts, on the other hand, seem to throw around the most obscure terminology they know, often outside of a context where it makes sense.

I suspect being on the receiving end of this feels like Getting Eulered, and dishing it out feels like "I'm going to say something that makes you feel stupid".

I have observed that imaginary experts often buy into the crackpot narrative to some extent, whereby established experts in the field are all wrong, or misguided, or slaves to an intellectually-bankrupt paradigm. This conveniently insulates the imaginary expert from criticism over not having read important orthodox material on the subject: why should they waste their time reading such worthless material?

In others, this probably rings crackpot-bells. In oneself, this is presumably much more difficult to notice, and falls into the wider problem of figuring out which fields of inquiry have value. If we have strong views on an established field of study we've never directly engaged in, we should probably subject those views to scrutiny.

Comment author: Viliam_Bur 19 November 2014 12:37:22PM *  7 points [-]

I agree with what you wrote. Having said that, let's go meta and see what happens when people use the "rules and tips" you have provided here.

  • A crackpot may explain their theory without using any scientific terminology, even where a scientist would be forced to use some. I have seen many people "disprove" the theory of relativity without using a single equation.

  • If there is a frequent myth in your field that most of the half-educated people believe, trying to disprove this myth will sound very similar to a crackpot narrative. Or if there was an important change in your field 20 years ago, and most people haven't heard about it yet, but many of them have read the older books written by experts, explaining the change will also sound like contradicting all experts.

Comment author: Viliam_Bur 19 November 2014 12:22:42PM *  3 points [-]

If I understand it correctly, this is the paradox:

How would you define optimal insurance? You cannot have 100% certainty, so let's say that optimal insurance means "this thing cannot fail, unless literally the whole society falls apart".

Sounds good, doesn't it? Until you realize that this definition is equivalent to "if this fails, then literally the whole society falls apart". Which sounds scary.

The question is, how okay it is to put all your eggs in one basket, if doing so increases the expected survival of every individual egg. In addition to straightforward "shut up and multiply", please consider all the moral hazard this would bring. People are not good at imagining small probabilities, so if before they were okay with e.g. 1% probability of losing one important thing, now they will become okay with 1% probability of losing everything.
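To make the trade-off concrete (the numbers here are invented for illustration): suppose the shared "basket" halves each egg's failure probability, but fails all at once.

```python
def p_lose_everything(n_eggs, p_fail, shared):
    """Probability of total loss: one shared basket vs. independent baskets."""
    if shared:
        return p_fail           # a single failure takes every egg with it
    return p_fail ** n_eggs     # all n independent baskets must fail at once

# A shared basket with p = 0.005 beats ten independent baskets with p = 0.01
# on each egg's expected survival, yet it makes total ruin astronomically
# more likely (0.005 versus 0.01**10).
```

Straightforward expected value favors the shared basket; the worry in the comment above is precisely the all-or-nothing tail that the expectation hides.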
