I recently remarked that the phrase "that doesn't seem obvious to me" is good at getting people to reassess their stated beliefs without antagonising them into a defensive position, and as such it was on my list of "magic phrases". More recently I've been using "can you give a specific example?" for the same purpose.
What expressions or turns of phrase do you find particularly useful in encouraging others, or yourself, to think to a higher standard?
Depersonalizing the argument is something I've had great success with. Steelmanning someone's argument directly can come across as insulting, but steelmanning it by noting that it resembles the position of high-status person X, who is opposed by the viewpoint of high-status person Y, allows you to discuss otherwise inflammatory ideas dispassionately.
Comment author:satt
04 July 2013 09:38:38PM
3 points
[-]
I've experimented with repersonalizing arguments: instead of challenging someone else for holding a belief, I direct the challenge at myself by putting their argument in my own mouth and saying what contrary evidence prevents me from believing it.
Someone else: You know that global warming business is a load of rubbish, right? Isn't real.
Me: That's not obvious to me. There are records of global average surface temperatures going back 150 years or so.
Someone else: Well, they can't know what the temperature was like before then.
Me: I'm sometimes inclined to think so, but then I'd have to contend with the variety of records based on tree rings, ice cores, and boreholes which go back centuries or millennia.
Comment author:shminux
02 July 2013 08:15:27AM
12 points
[-]
This is not quite what you want, but if you are a grad student giving a talk and a senior person prefaces their question to you with "I am confused about...", you are likely talking nonsense and they are too polite to tell you straight up.
Which reminds me of my born-again Christian mother - evangelicals bend over backwards to avoid dissing each other, so if you call someone "interesting" in a certain tone of voice it means "dangerous lunatic" and people take due warning. (May vary, this is in Perth, Australia.)
I like this, and also "I don't quite understand why [X]", which puts them in the pleasant position of explaining to me from a position of superiority--or sometimes realizing that they can't.
Comment author:Viliam_Bur
06 July 2013 10:55:58AM
1 point
[-]
I guess this only works on people who feel friendly. Making them also feel superior... now they owe you a decent explanation.
A hostile person could find another way to feel superior, without giving an explanation. For example, they could say: "Just use Google to educate yourself, dummy!"
Comment author:letter7
01 July 2013 08:54:06PM
14 points
[-]
There's something that happens to me with alarming frequency, something I almost never see referenced (and thus I don't know the proper name for). I'm talking about the effect where I'm reading a text (any kind of text: textbook, blog, forum post) and suddenly discover that two minutes have passed and I've advanced six lines, but I have no idea what I read. It's like a time black hole, and now I have to re-read it.
Sometimes it happens in a less alarming but still bad way: for instance, when I'm reading something that is deliberately teaching me an important piece of knowledge (as in, I already know that whatever is in this text IS important), I go through it without questioning anything, just "accepting" it, and a few moments later it suddenly hits me: "Wait... did he just say two pages ago that thermal radiation does NOT need matter to propagate?" and I have to go back and check that I was not crazy.
While I don't know the name of this effect, I have asked some acquaintances of mine about it; some agreed that they have it, others didn't. I would very much like to eliminate this flaw. Does anybody know what I could do to train myself out of it, or at least the correct name so I can research it further?
Comment author:moreati
01 July 2013 10:34:45PM
4 points
[-]
If it's material you want to (or are required to) learn from, try taking notes as you read, to force yourself to recall it in your own terms.
If it's just recreational/online reading, try increasing the font size/spacing or decreasing the browser width, or using a browser extension like Readability. Don't scroll with the scroll bar or the mouse wheel; use Pg Up/Pg Dn to make it easier to keep your position.
Comment author:tim
02 July 2013 07:38:41PM
1 point
[-]
In the same vein, I get easily distracted when reading text, and the ability to click around, selecting and deselecting text as I read, helps me stay engaged.
Writing that out, it sounds like it would be super distracting, but it's not (for me). Possibly related to the phenomenon where some people work better with noise in the background rather than in silence. Clicking around might help maintain a minimum level of stimulation while reading.
I'm talking about the effect where I'm reading a text (any kind of text: textbook, blog, forum post) and suddenly discover that two minutes have passed and I've advanced six lines, but I have no idea what I read. It's like a time black hole, and now I have to re-read it.
I do this all the time. I have seen it referred to in literature (a character reading a page three times before realising he can't take it in, as a way to show that he's extremely distracted), but that's not quite the same as just zoning out.
Comment author:shminux
01 July 2013 10:14:27PM
2 points
[-]
Probably automaticity is what you are looking for. I am not sure how to force one's mind to attend to a repetitive task. One trick for avoiding reading automaticity is to paraphrase and check for potential BS every paragraph or so.
Comment author:letter7
01 July 2013 11:02:39PM
*
3 points
[-]
Indeed, it's something along those lines; however, the article presents it in a positive light, where for
a skilled reader, multiple tasks are being performed at the same time such as decoding the words, comprehending the information, relating the information to prior knowledge of the subject matter, making inferences, and evaluating the information's usefulness to a report he or she is writing
My problem is that, somehow, I do that but without comprehending anything. The article linked to an interesting program in Australia, though: QuickSmart. It's aimed at middle-school students, but I think I could perhaps benefit from it.
Comment author:aelephant
01 July 2013 11:08:16PM
1 point
[-]
I can't remember where I read it, but I remember hearing that in order to really understand an argument, you have to take a leap of faith & accept all of the propositions & conclusions in that argument. If you don't, you will be automatically & subconsciously strawmanning it. After you've exposed yourself to the whole idea, you can go back & look at it critically. I have no idea if this is BS & wish I could track down where I came across it. I'd appreciate any help.
Comment author:bentarm
01 July 2013 06:46:21PM
*
14 points
[-]
So, everyone agrees that commuting is terrible for the happiness of the commuter. One thing I've struggled to find much evidence about is how much the method of commute matters. If I get to commute to work in a chauffeur driven limo, is that better than driving myself? What if I live a 10 minute drive/45 minute walk from work, am I better off walking? How does public transport compare to driving?
I suspect the majority of these studies are done in US cities, so mostly cover people who drive to work (with maybe a minority who use transit). I've come across a couple of articles which suggest cycling > driving here and conflicting views on whether driving > public transit here but they're just individual studies - I was wondering if there's much more known about this, and figured that if there is, someone here probably knows it. If no one does, I might get round to a more thorough perusal of the literature myself now I've publicly announced that the subject interests me.
I think it entirely depends on what you do during your commute.
A lot of rush-hour drivers feel stress because they get annoyed at the behavior of other drivers. That's terrible for the happiness of the commuter.
Traveling via public transport also gives you plenty of opportunities to get upset over other people, plus the additional opportunity to get upset when the bus comes a bit late.
If you travel via public transport you can do tasks like reading a book that you can't do while driving a car or cycling.
Does anyone else experience the phenomenon of perceiving a commute as shorter when the distance is shorter? For example, it feels like it takes less time, or is more enjoyable, to walk 3/4 mile in 15 minutes than to travel a few miles by subway in 15 minutes. I think it's because being close in proximity makes me feel like "Hey, I'm basically there already," whereas traveling a few miles makes me think "I'm not even in the same neighborhood yet," even though both take me the same amount of time.
Comment author:Viliam_Bur
02 July 2013 07:00:43AM
8 points
[-]
For me an important aspect is the feeling of control. 15 minutes of walking is more pleasant than 10 minutes of waiting for a bus plus 5 minutes of travelling by bus.
Comment author:Kaj_Sotala
02 July 2013 07:04:39AM
*
9 points
[-]
Every now and then, I decide that I don't have the patience to wait 10 minutes for a bus that would take me to where I'm going in 10 minutes. So I walk, which takes me an hour.
I had the opposite effect recently - I thought that I'd save time by waiting for the bus, but it turns out that walking gets me to work from the train about 12 minutes sooner. Coming back, I don't have a ridiculous wait, so I still take the bus.
I could do even better if I got some wheels of some sort involved. Maybe it's time to take up skateboarding. Scooter? Bike seems like it would be too cumbersome, even if I can get one that folds up.
Comment author:spqr0a1
13 July 2013 04:02:40PM
0 points
[-]
If the commute is mostly flat, consider Freeline skates. They take up much less space than any of the mentioned wheels; the technique is different from skateboarding but the learning curve isn't any worse.
Comment author:tut
02 July 2013 08:29:19AM
*
7 points
[-]
Not in general, but I recognize your example. Walking is pleasant and active and allows me to think sustained thoughts, so it makes time 'pass' quickly. Whereas riding the subway is passive and stressful and makes me think many scattered thoughts in short time, so it makes time 'pass' slowly, making the ride seem longer. Also, if you walk somewhere in 15 minutes that probably takes about 15 minutes, but if you ride the subway for 15 minutes that probably takes more like half an hour from when you leave home to when you get to your goal.
Comment author:[deleted]
05 July 2013 10:42:33AM
*
4 points
[-]
More generally, I've noticed I tend to underestimate how much time passes when I'm directly controlling how fast I'm going (climbing stairs, driving on an open road, reading) and overestimate it when I'm not (using an elevator, driving in congested traffic, watching a video).
Short-distance public transport is an exception: once I'm on the bus, it feels like it takes 5 minutes to get from home to the university, but it actually takes 20.
Comment author:Camaragon
08 July 2013 07:47:15AM
1 point
[-]
I download loads of music, audiobooks, and books (though it's more bothersome to read while moving) and listen to them on my commute to work. The commute takes around 45 minutes by train, and the same to get back home. Doing this, I totally don't mind the commute; I even look forward to it, since it's the only time I get to read or listen to anything.
Comment author:ESRogs
02 July 2013 11:16:15PM
13 points
[-]
I've just noticed that the Future of Humanity Institute stopped receiving direct funding from the Oxford Martin School in 2012, while "new donors continue to support its work."
http://www.oxfordmartin.ox.ac.uk/institutes/future_humanity
Comment author:gjm
01 July 2013 10:28:09PM
12 points
[-]
Hey komponisto (and others interested in music) -- if you haven't already seen Vi Hart's latest offering, Twelve Tones, you might want to take a look. Even though it's 30 minutes long.
(I don't expect komponisto, or others at his level, will learn anything from it. But it's a lot of fun.)
I second the recommendation. I found it interesting that I enjoyed it so much despite learning almost nothing at all. Everything in the video was stuff I'd heard or thought about before, but seeing it presented in a unified, artistic, humorous fashion was very entertaining.
I had no idea that the purpose of twelve-tone music was to teach people how to decontextualize musical sounds. Is listening to such music more valuable than meditation?
Comment author:elharo
05 July 2013 01:00:40PM
10 points
[-]
A Big +1 to whoever modified the code to put pink borders around comments that are new since the last time I logged in and looked at an article. Thanks!
You may not have noticed when you posted this, but the formatting of your post didn't show up like I think you may have wanted, with the result that it's hard to read. (If you're wondering, it takes 2 carriage returns to get a line break out.)
If you intended the comment to look like it does, I apologize for bothering you.
Comment author:Ratcourse
02 July 2013 02:14:57PM
6 points
[-]
How do you correct your mistakes?
For example, I recently found out I did something wrong at a conference. In my bio, under "areas of expertise" I should have written what I can teach, and under "areas of interest" what I want to be taught. This seems to maximize value for me.
How do I keep that mistake from happening in the future? I don't know when the next conference will be. Do I write it in Anki and memorize it as a failure mode?
More generally, when you recognize a failure mode in yourself how do you constrain your future self so that it doesn't repeat this failure mode? How do you proceduralize and install the solution?
For a while I was in the habit of putting my little life lessons in the form of Anki cards and memorizing them. I would also memorize things like conflict resolution protocols and checklists for depressive thinking. Unfortunately it didn't really work, in the sense that my brain consistently failed to recall the appropriate knowledge in the appropriate context.
I tried using an iOS app called Lift, but I found it difficult to use and not motivating.
I also tried using an iOS app called Alarmed to ping me throughout the day with little reminders like "Posture" and "Smile" and "Notice" to improve my posture, attitude, and level of mindfulness, respectively. This worked better but I eventually got tired of my phone buzzing so often with distracting, non-critical information and turned off the reminders.
My very first post on LessWrong was about proceduralizing rationality lessons; I think it's one of the biggest blank spots in the curriculum.
Comment author:maia
03 July 2013 01:54:08AM
3 points
[-]
I'm not sure this applies to your particular situation, but a general solution for proceduralizing behaviors that was discussed at minicamp (and which I'd actually done before) is: trigger yourself on a particular physical sensation, by visualizing it and thinking very hard about the thing you want yourself to remember. For example, if you want to make sure you do the things on your to-do list as soon as you get home, spend a few minutes visualizing, in as much detail as you can, what the front door of your house looks like, recalling what it feels like to step through it, and thinking "To-do list time!" at the same time. (Or, if you have access to your front door at the time, actually stepping through it repeatedly while thinking about this might help too.)
And if there's some way to automate it, then of course that's ideal, though you said you don't know when the next conference will happen so that's more difficult.
Or another kind of automation: maybe you could save the bio you wrote in a Word document, and write a reminder in it to add the edits you want... or just do them now, and save the bio for future use. Then all you have to remember is that you wrote your bio already. Which is another problem, but conceivably a smaller one: I don't know about your hindbrain, but upon being told it had to write a bio, mine would probably be grasping at ways to avoid doing work, and having it done already is an easy out.
For a problem like this (remembering something rare in the indefinite future), the important thing is to remember, at that time, that you know something. At that point, if you've put it in a reasonable place, you can find it. It seems to me that the key problem is the jump from "have to write a bio" to "how to write a bio," that is, making sure you pause and think about what you know or have written down somewhere. Some people claim success with Anki here, but it doesn't make sense to me.
What most people do with bios is that they reuse them, or at least look at the old one whenever needing a new one. As Maia says, if you write an improved bio now, you can find it next time, when you look for the most recent version. But that doesn't necessarily help remember why it was an improvement. If you have a standard place for bios, you can store lots of variants (lengths, types of conferences, resume, etc), along with instructions on what distinguishes them. But I think what most people do is search their email for the last one they submitted. If you can't learn to look in a more organized place, you could send yourself an email with all the bios and the instructions, so that it comes up when you search email.
Comment author:gwern
13 July 2013 11:31:47PM
3 points
[-]
For some strange reason, it rather resembled the Hogwarts School of Witchcraft and Wizardry. So whoever did this knew about Hermione Granger and the Burden of Responsibility. That wasn’t much comfort...Someone was mocking him, or at least mocking his self-insert as Godric Gryffindor. The alicorn pony he had become sighed.
Comment author:Username
02 July 2013 06:50:13PM
*
13 points
[-]
Posting here rather than the 'What are you working on' thread.
3 weeks ago I got two magnets implanted in my fingers. For those who haven't heard of this before: moving electromagnetic fields (read: everything AC) cause the magnets in your fingertips to vibrate. Over time, as nerves in the area heal, your brain learns to interpret these vibrations as varying field strengths. Essentially, you gain a sixth sense for magnetic fields and, by extension, electricity. It's a $350 superpower.
The guy who put them in my finger told me it will take about six months before I get full sensitivity. So, what I'm doing at the moment is research into this and quantifying my sensitivity as it develops over time. The methodology I'm using is wrapping a loop of copper wire around my fingers and hooking it up to a headphone jack, which I will then plug into my computer and send randomized voltage levels through. By writing a program so I can do this blind, I should be able to get a fairly accurate picture of where my sensitivity cutoff level is.
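The blind protocol described above could be sketched roughly like this (a minimal illustration only; the level values, trial counts, and 75% detection criterion are my own assumptions, not the commenter's actual program):

```python
import random
from collections import defaultdict

def run_blind_trials(levels, trials_per_level=10, seed=0):
    """Build a shuffled schedule of amplitude levels, so the subject
    can't predict which level comes next (the 'blind' part)."""
    rng = random.Random(seed)
    schedule = [lvl for lvl in levels for _ in range(trials_per_level)]
    rng.shuffle(schedule)
    return schedule

def estimate_threshold(results, criterion=0.75):
    """results: list of (amplitude, detected_bool) pairs.
    Return the lowest amplitude detected on at least `criterion`
    of its trials, or None if nothing clears the bar."""
    hits, totals = defaultdict(int), defaultdict(int)
    for amp, detected in results:
        totals[amp] += 1
        hits[amp] += detected
    detectable = [a for a in totals if hits[a] / totals[a] >= criterion]
    return min(detectable) if detectable else None
```

A run would then play each scheduled level through the headphone-jack coil, record whether the subject reported a sensation, and feed the (level, response) pairs to `estimate_threshold`.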
One thing I'm stuck on is how to calculate the field strength acting on my magnets. Getting the B field for a solenoid is trivial, but with a magnetic core I'm sure it throws everything out of whack. If anyone has any links to the physics of how to approach that, I'd be much obliged.
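For reference, the "trivial" air-core case mentioned above is just B = μ₀ · (N/L) · I for a long solenoid. A core (or, here, the implanted magnet inside the coil) multiplies this by an effective relative permeability that depends on material and geometry, which is exactly the hard part; the sketch below only parameterizes that factor, it doesn't solve it:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, in T*m/A

def solenoid_b_field(turns, length_m, current_a, mu_rel=1.0):
    """Axial B field of a long solenoid: B = mu0 * mu_rel * (N/L) * I.
    mu_rel=1.0 is the air-core case; an effective mu_rel for a real
    core must come from its material and demagnetizing geometry."""
    return MU0 * mu_rel * (turns / length_m) * current_a
```

For example, 100 turns over 10 cm carrying 1 A gives about 1.26 mT with an air core.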
And if you're curious about what it's like so far to have magnets in your fingers, feel free to ask.
Comment author:drethelin
02 July 2013 07:29:57PM
*
10 points
[-]
"superpower" is overstating it. Picking up paperclips is neat and being able to feel metal detectors as you walk through them or tell if things are ferrous is also fun but it's more of just a "power" than a superpower. It also has the downside of you needing to be careful around hard-drives and other strong magnets. On net I'm happy I got them but it's not amazing.
Comment author:gwillen
02 July 2013 11:54:10PM
3 points
[-]
FYI, there's no need to be careful around hard drives (except for your own safety, since they're large chunks of metal your magnet will stick to.) The platters of a modern hard drive are too high-coercivity and too well-shielded for even a substantial neodymium magnet (bigger than you can fit in a fingertip) to affect them.
Comment author:wedrifid
09 July 2013 10:59:46AM
6 points
[-]
Credit cards, on the other hand.
Great thinking! Once you have fully developed and trained your superpower sensitivity you can read the cards by merely brushing your hands past someone's wallet!
Comment author:Username
02 July 2013 07:38:06PM
2 points
[-]
I'd mostly agree with that. After I finish my current project though I have some more in mind about using them as input methods, so for me they're as much toys I can experiment with as anything else.
Comment author:wadavis
08 July 2013 08:22:07PM
2 points
[-]
Do you notice the accumulation of ferrous dust fragments, for lack of a better word?
The magnets I have for misc. projects at home quickly pick up a collection of small fragments, but maybe my world is just too closely tied to steel fabrication shops.
Comment author:Username
08 July 2013 09:30:53PM
*
1 point
[-]
Not yet, though I haven't done any metalwork since I got the magnets.
This was one of the questions I asked the guy who put them in, since I'll be running into this eventually. He said that this was one of his concerns going into getting his own, as he does a lot of work in a shop, but that he has found that iron and steel filings haven't been a problem.
Comment author:Username
03 July 2013 04:39:09PM
4 points
[-]
Besides telling if a device is live or not, not that I know of. The one major issue is that you can't have an MRI, although if I'm in a situation where I can't tell a doctor that I have them, magnets being ripped out of my fingers is the least of my worries. If need be, I could have a doctor make a small incision and take them out. And I do have to be careful not to hold on to powerful magnets for too long, or it will crush the skin in between the two magnets. Other than that though, there's no real downside. They're off to the side so it doesn't affect my grip, and once my skin finishes healing they'll be unnoticeable.
The upside for me is the qualia of sensing EMFs and having them as toys to play with. I treated the decision like getting a tattoo, where my personal rule is that I have to love a design for a continuous year before getting it. I haven't settled on a tattoo design for that long, but I had planned on getting magnets for about a year and a half, so I went ahead and did it.
Comment author:Fhyve
03 July 2013 06:13:52AM
*
4 points
[-]
How do you upgrade people into rationalists? In particular, I want to upgrade some younger math-inclined people into rationalists (peers at university). My current strategy is:
incidentally name-drop my local rationalist meetup group (i.e., "I am going to a rationalists' meetup on Sunday")
link to lesswrong articles whenever relevant (rarely)
be awesome and claim that I am awesome because I am a rationalist (which neglects a bunch of other factors for why I am so awesome)
when asked, motivate rationality by indicating a whole bunch of cognitive biases, and how we don't naturally have principles of correct reasoning, we just do what intuitively seems right
This is quite passive (other than the name-dropping and article-linking) and mostly requires them to ask me about it first. I want something more proactive that is not straight-up linking to LessWrong, because the first thing they go to is The Simple Truth, and they immediately get turned off by it. (The Simple Truth shouldn't be the first post in the first sequence recommended to new readers on LessWrong.) This has happened a number of times.
This sounds like you think of them as mooks you want to show the light of enlightenment to. The sort of clever mathy people you want probably don't like to think of themselves as mooks who need to be shown the light of enlightenment. (This also might be sort of how I feel about the whole rationalism-as-a-thing thing that's going on around here.)
That said, actually being awesome for your target audience's values of awesome is always a good idea to make them more receptive to looking into whatever you are doing. If you can use your rationalism powers to achieve stuff mathy university people appreciate, like top test scores or academic publications while you're still an undergraduate, your soapbox might be a lot bigger all of a sudden.
Then again, it might be that rationalism powers don't actually help enough in achieving this, and you'll just give yourself a mental breakdown while going for them. The math-inclined folk, who would like publication writing superpowers, probably also see this as the expected result, so why should they buy into rationality without some evidence that it seems to be making people win more?
Comment author:Fhyve
03 July 2013 10:06:45AM
-2 points
[-]
To be honest, unless they have exceptional mathematical ability or are already rationalists, I will consider them to be mooks. Of course, I won't make that apparent; it is rather hard to make friends that way. Announcing that you are smart is a very negative signal, so I try to be humble, which can be awkward in situations like when only two out of 13 people pass a math course you are in, and you got an A- and the other guy got a C-.
Comment author:elharo
05 July 2013 01:08:11PM
*
2 points
[-]
Taboo "rationalist". That is, don't make it sound like this is a group or ideology anyone is joining (because, done right, it isn't.)
Discuss, as appropriate, cognitive biases and specific techniques. E.g. planning fallacy, "I notice I am confused", "what do you think you know and why do you think you know it?", confirmation bias, etc.
Tell friends about cool books you've read like HPMoR, Thinking Fast and Slow, Predictably Irrational, Getting Things Done, and so forth. If possible read these books in paper (not ebooks) where your friends can see what you're reading and ask you about them.
Comment author:Viliam_Bur
03 July 2013 12:39:44PM
2 points
[-]
The problem with rationality is that unless you are already at some level, you don't feel like you need to become more rational. And I think most people are not there, even the smart ones. It seems to me that smart people often realize they lack some specific knowledge, but they don't go meta and realize they lack knowledge-gathering and knowledge-filtering skills. (And that's the smart people. The stupid ones only realize they lack money or food or something.) How do you sell something to a person who is not interested in buying?
Perhaps we could make a selection of LW articles that can be interesting even for people not interested in rationality. Less meta, less math. The ones that feel like "this website could help me make more money and become more popular". Then people become interested, and perhaps then they become interested more meta -- about a culture that creates this kind of articles.
(I guess that even for math-inclined people the less mathy articles would be better. They can find math in a thousand different places; why should they care specifically about LW?)
How about bringing up specific bits of rationality when you talk with them? If they talk about plans, ask them how much they know about how long that sort of project is likely to take. If they seem to be floundering with keeping track of what they're thinking, encourage them to write the bits and pieces down.
If any of this sort of thing seems to register, start talking about biases and/or further sources of information.
This is a hypothetical procedure-- thanks for mentioning that The Simple Truth isn't working well as an introduction.
Comment author:Viliam_Bur
01 July 2013 07:48:54PM
*
10 points
[-]
I noticed a strategy that many people seem to use; for lack of a better name, I will call it "updating the applause lights". This is how it works:
You have something that you like and it is part of your identity. Let's say that you are a Green. You are proud that Greens are everything good, noble, and true; unlike those stupid evil Blues.
Gradually you discover that the sky is blue. First you deny it, but at some moment you can't resist the overwhelming evidence. But at that moment of history, there are many Green beliefs, and the belief that the sky is green is only one of them, although historically the central one. So you downplay it and say: "All Green beliefs are true, but some of them are meant metaphorically, not literally, such as the belief that the sky is green. This means that we are right, and the Blues are wrong; just as we always said."
Someone asks: "But didn't Greens say the sky is green? Because that seems false to me." And you say: "No, that's a strawman! You obviously don't understand Greens; you are full of prejudice. You should be ashamed of yourself." That someone gives an example of a Green who literally believed the sky is green. You say: "Okay, but this person is not a real Green. It's a very extreme person." Or if you can't deny it, you say: "Yes, even within the Green movement, some people may be confused and misunderstand our beliefs, and our beliefs have evolved over time, but trust me that being Green is not about believing that the sky is literally green." And in some sense, you are right. (And the Blues are wrong. As it has always been.)
To be specific, I have several examples in my mind; religion is just one of them; probably any political or philosophical opinion that had to be updated significantly and needs to deny its original version.
My strategy is to avoid conversations of this form entirely by default. Most Greens do not need to be shown that the belief system they claim to have is flawed, and neither do most Blues. Pay attention to what people do, not what they say. Are they good people? Are they important enough that bad epistemology on their part directly has large negative effects on the world? If the answers to these questions are "yes" and "no" respectively, then who cares what belief system they claim to have?
If the answers to these questions are "yes" and "no" respectively, then who cares what belief system they claim to have?
I'm really going to try and remind myself of this more often. Most of the time the answers are "yes" and "no" and points are rarely won for pointing out bad epistemology.
Comment author:TimS
01 July 2013 08:24:07PM
6 points
[-]
Yes, like moving-the-goalposts, this is an annoying and dishonest rhetorical move.
Yes, even within the Green movement, some people may be confused and misunderstand our beliefs, and our beliefs have evolved over time, but trust me that being Green is not about believing that the sky is literally green.
Suppose some Green says:
Yes, intellectual precursors to the current Green movement stated that the sky was literally green. And they were less wrong, on the whole, than people who believed that the sky was blue. But the modern intellectual Green rejects that wave of Green-ish thought, and in part identifies the mistake as that wave of Greens being Blue-ish in a way. In short, the Green movement of a previous generation made a mistake that the current wave of Greens rejects. Current Greens think we are less wrong than the previous wave of Greens.
Problematic, or reasonable non-mindkiller statement (attacking one's potential allies edition)?
How much of that intuition is driven by the belief that Bluism is correct? If we change the labels to Purple (some Blue) and Orange (no Blue), does the intuition change?
Comment author:DSherron
01 July 2013 08:51:00PM
3 points
[-]
If, after realizing an old mistake, you find a way to say "but I was at least sort of right, under my new set of beliefs," then you are selecting your beliefs badly. Don't identify as a person who was right, or as one who is right; identify as a person who will be right. Discovering a mistake has to be a victory, not a setback. Until you get to this point, there is no point in trying to engage in normal rational debate; instead, engage them on their own grounds until they reach that basic level of rationality.
For people having an otherwise rational debate, they need to at this point drop the Green and Blue labels (any rationalist should be happy to do so, since they're just a shorthand for the full belief system) and start specifying their actual beliefs. The fact that one identifies as a Green or a Blue is a red flag of glaring irrationality, confirmed if they refuse to drop the label to talk about individual beliefs, in which case do the above. Sticking with the labels is a way to make your beliefs feel stronger, via something like a halo effect where every good thing about Green or Greens gets attributed to every one of your beliefs.
Comment author:JoshuaZ
01 July 2013 11:26:35PM
*
2 points
[-]
There's a further complicating factor: often when this happens, both modern Blues and Greens won't exactly correspond to historical Blues and Greens even though both are using the same terms. Worse, when the entire region of acceptable social policy has changed, sometimes an extreme Green or Blue today might be what was seen as someone of the other type decades ago.
Comment author:TimS
02 July 2013 01:00:57AM
*
1 point
[-]
Yes, the first wave of a movement may have many divergent descendants, which end up on different sides of a current political dispute. And the most direct descendant might be on the opposite side of the political divide from what we would predict the first-wave proponents would adopt. But for that to happen, there needs to be significant passage of time.
By contrast, if the third wave of a movement cannot point to an immediately prior second wave that actually believed the position criticized (and which the third wave has already rejected), then Villiam_Bur's moving-the-goalposts criticism has serious bite, to the point that an outsider probably should not accept the third wave as genuinely interested in rational discussion or true beliefs.
Comment author:JoshuaZ
02 July 2013 01:29:35AM
3 points
[-]
And here we were having a very nice discussion without pointing out any potentially controversial/mindkilling examples. Using the phrasing of second and third wave doesn't make the reference any subtler or any less potentially mindkilling.
In the specific case which you are not so obliquely referencing, there's a pretty strong argument that much of third-wave feminism has strands from the first and second waves, while also agreeing on the most basic premises.
It is also worth noting in this context that movements (wherever they fall politically) aren't in general aimed at rational discussion or true beliefs but at accomplishing specific goal sets. You will find in any diverse movement some strains that are more or less interested in rational discussion, but criticizing a movement for its failure to embody rationality is not by itself a very useful criticism.
Comment author:JoshuaZ
02 July 2013 06:04:01PM
4 points
[-]
I'm pretty sure that's what TimS was talking about given his use of the phrases "second wave" and "third wave". It is especially clear because if one was going to be talking about a generic example and using the term wave, one would in the same context have likely discussed the first wave v. the second wave. The off-by-one only makes sense in that specific historical context.
Conversely, the second and third waves immediately screamed 'feminism' to me, but I couldn't assemble the rest of the analogy. The third wave has plenty of legitimate differences and similarities with both the first and second waves. I'm still not sure what TimS was getting at.
It is the big obvious current example where the ideological battle is between "second wave" and "third wave" and the first wave is barely mentioned. I encounter it in relation to the UK social justice Twittersphere, which is tangential to the more Kankri Vantas stretches of Tumblr. (Or, more accurately, the Porrim Maryam stretches.)
Edit: Can anyone think of another field described as having numbered waves where the battle was between second and third?
Comment author:Jack
03 July 2013 03:52:43PM
4 points
[-]
Ideologies and theo-philosophical schools are rarely if ever defined precisely enough to exclude true facts about the world or justifications for genuinely good ideas. They're more collections of rules of thumb, methods, technical terms and logics. If mathematically formulated scientific theories are under-determined, then ideologies are so ten-fold. The problem of inferential distance when it comes to worldviews isn't really about sheer decibels of information that need to be communicated. It's that the interlocutors are playing different games and speaking different languages. And I suspect most deconversions are more like picking up a new language and forgetting your old one than they are the product of repeated updates based on the predictive failures of the old ideology/religion. It's a pseudo-rational process, which is why it doesn't reliably occur in just one direction.
Back to your point: since people have egos, memetic complexes usually have self-perpetuating features, and applause lights don't constrain future experience, it makes sense that if anything is held constant it will be Greens being really sure they are right. That's non-optimal and definitely irksome to people like all of us. It's inefficient because we're spending resources on constructing post-hoc justifications for how the real Green answer is the true one, and the corrections to our model may be nothing more than curve-fitting. That is, whatever beliefs and assumptions led the Greens to be wrong in the first place may still be in place. Plus, it is kind of creepy in a "we've always been at war with Eurasia" kind of way.
But on the other hand it is sort of okay, right? At least they're updating! You can think of academic departments of philosophy, religion, law and humanities as just the cost of doing business to mollify our egos as we change our minds. And changing people's minds this way is almost certainly much easier than making them convert to the doctrine of the hated enemy and engage in extended self-flagellation. It's a line of retreat.
Making the modern Green cop to the literal beliefs of her intellectual ancestor seems like an exercise in scoring points, not genuine persuasion. Who needs credit? The curve fitting is still an issue but you might be better off trying to make room for better beliefs and assumptions within the context of Green thought. Especially since it isn't obvious the opposing movement did anything other than get lucky.
A few out-there scholars think Descartes was an atheist. He almost certainly wasn't. But there is a reason they suspect him even though much of the Meditations is an extended argument for the existence of God. The thing is that the practical upshot of his non-empirical argument for God is that we should completely abandon the Christian-Aristotelian-Scholastic tradition and use our senses to discover what the world is really like. "The sky is certainly Green and this proves that the ideal method for discovering the color of things is visual examination and use of a spectrometer."
Ideological multi-lingualism is a crucial skill; I'd like to hear ideas for cultivating it.
Comment author:TimS
01 July 2013 08:26:35PM
*
6 points
[-]
Let's avoid object level examples until we resolve how to distinguish this dishonest rhetorical move from honest updates on the low validity of prior arguments now abandoned. Otherwise, we get bogged down in mindkiller without any general insight into how to be more rational.
Comment author:mstevens
02 July 2013 10:32:59AM
1 point
[-]
I think there's a related rhetorical trick that's something like redefining the applause lights, or brand extension.
Greens believe the sky is green. I want them to believe the entire world is green. I will use their commitment to sky greenness and just persuade them it means something slightly different.
Clouds are kind of like the sky so should really be considered green if you're being fair about things. And rain is in the sky, who are you to say it's not green? Rain falls on the ground, which is therefore also part of the sky.
After a while, you can persuade people that, since the sky is green, obviously rocks are green.
This explanation isn't great but more practical examples are somewhat mindkilling.
Comment author:[deleted]
01 July 2013 08:38:23PM
*
1 point
[-]
Some selection effects: I wonder if the perceived solidarity of most identity-heavy groups is due to vague language that easily facilitates mind projection within the group. Surviving communities will have either reduced their exposure to fracturing forces, or drifted towards more underspecified beliefs as a result of such exposure. I think religious strains fall very nicely into these two groups, but I'm not so sure about political groups.
Comment author:Viliam_Bur
02 July 2013 07:31:48AM
*
2 points
[-]
Being specific is a good rationalist tool and a bad strategy for social relations. The more specific one is, the fewer people agree with them. The best social strategy is to have a few fuzzy applause lights and gather agreement about them.
I'll try to find a less sensitive political example. Some people near me are fans of "direct democracy"; they propose it as a cure for all political problems. I try being more specific and ask whether they imagine that people in the whole country will vote together, or that each region will vote separately on their local policies... but they refuse to discuss this, because they see that it would split their nicely agreeing group into disagreeing subgroups. But to me this distinction seems very important in predicting the consequences of such a system.
Say there are two artificial intelligences... When these machines want to talk to each other, my guess is they'll get right next to each other so they can have very wide-band communication. You might recognize them as Sam and George, and you'll walk up and knock on Sam and say, "Hi, Sam. What are you talking about?" What Sam will undoubtedly answer is, "Things in general," because there'll be no way for him to tell you. From the first knock until you finish the "t" in about, Sam probably will have said to George more utterances than have been uttered by all the people who have ever lived in all of their lives. I suspect there will be very little communication between machines and humans, because unless the machines condescend to talk to us about something that interests us, we'll have no communication.
On whether advanced AIs will share our goals:
Eventually, no matter what we do there'll be artificial intelligences with independent goals... There may be a way to postpone it. There may even be a way to avoid it, I don't know. But it's very hard to have a machine that's a million times smarter than you as your slave.
On basement AI:
Today I can buy a machine for five dollars that's better than one costing five million dollars twenty years ago... [One day] a paper boy with his route money will be able to save up in a month and buy such a machine. Thus anybody will have the necessary hardware to do AI pretty soon; it will be like a free commodity.
Now, under those circumstances, it's possible that some mad genius, some Newton-like person, even a kid working by himself, could make tremendous progress. He could develop AI all by himself, relying on what others do, but building it in private rather than at a big institution like MIT. And the application of such a machine would be irresistible. How could you avoid this? You can't license computers; that never was practical... If you made the use of electricity in any way a capital offense, worldwide and suddenly, and you did it immediately... then perhaps you could prevent this from happening. But anything short of that isn't going to do it, because you won't need a laboratory with big government funding very soon - that's only a temporary phase we're passing through. So what Joe Weizenbaum would like to do is impossible - it's bringing time to a halt, and it can't be done. What we can do is make the future more secure for human beings by being reasonable about how you bring AI about, and the only reasonable course is to work on this problem in a way that promises to be best for all of society, and not just for some singular mad genius.
On the risk of bad guys getting AI first:
What's equally frightening is that the world has developed means for destroying itself in a lot of different ways, global ways. There could be a thermonuclear war or a new kind of biological hazard or what-have-you. That we'll come through all this is possible but not probable unless a lot of people are consciously trying to avoid the disaster. McCarthy's solution of asking an artificial intelligence what we should do presumes the good guys have it first. But the good guys might not. And pulling the plug is no way out. A machine that smart could act in ways that would guarantee that the plug doesn't get pulled under any circumstances, regardless of its real motives... I mean, it could toss us a few tidbits, like the cure for this and that.
I think there are ways to minimize all this, but the one thing we can't do is to say well, let's not work on it. Because someone, somewhere, will. The Russians certainly will - they're working on it like crazy, and it's not that they're evil, it's just that they also see that the guy who first develops a machine that can influence the world in a big way may be some mad scientist living in the mountains of Ecuador. And the only way we'd find out about some mad scientist doing artificial intelligence in the mountains of Ecuador is through another artificial intelligence doing the detection. Society as a whole must have the means to protect itself against such problems, and the means are the very same things we're protecting ourselves against.
On trying to raise awareness of AI risks:
I can't persuade anyone else in the field to worry this way... They get annoyed when I mention these things. They have lots of attitudes, of course, but one of them is, "Well yes, you're right, but it would be a great disservice to the world to mention all this."...
...my colleagues only tell me to wait, not to make my pitch until it's more obvious that we'll have artificial intelligences. I think by then it'll be too late. Once artificial intelligences start getting smart, they're going to be very smart very fast. What's taken humans and their society tens of thousands of years is going to be a matter of hours with artificial intelligences. If that happens at Stanford, say, the Stanford AI lab may have immense power all of a sudden. It's not that the United States might take over the world, it's that the Stanford AI Lab might.
Comment author:lukeprog
13 July 2013 05:19:43AM
*
0 points
[-]
Later in that chapter, McCorduck quotes Marvin Minsky as saying:
...we have people who say we've got to solve problems of poverty and famine and so forth, and we shouldn't be working on things like artificial intelligence... [But I think] we should have a certain number of people worrying about... whether artificial intelligence will be a huge disaster some day or be one of the best events in the universe...
...You might be the only one who can help with the disaster that's going to happen [decades from now], and if you don't prepare yourself, and instead just go off into some social welfare project right now, who will do it then? ...Yes, I feel that there's a great enterprise going on which is making the world of the future all right.
...which sounds eerily like a pitch for MIRI.
Unfortunately, Minsky did not then rush to create the MIT AI Safety Lab.
Mallet, a notorious swindler, picks 10 stocks and generates all 2^10 = 1024 possible combinations of "stock will go up" vs. "stock will go down" predictions. He then gives one prediction sheet to each of 1024 different investors. One of the investors receives a perfect, 10-out-of-10 prediction sheet and is (Mallet hopes) convinced that Mallet is a stock-picking genius.
Since it's related to the Texas sharpshooter fallacy, I'm tempted to call this the Texas stock-picking scam, but I was wondering if anyone knew a "proper" name for it, and/or any analysis of the scam.
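The arithmetic behind the scam can be sketched in a few lines (the "market outcome" below is an arbitrary made-up one, just for illustration):

```python
from itertools import product

# Mallet generates every possible up/down prediction sheet for 10 stocks:
# 2**10 = 1024 distinct sheets, one mailed to each investor.
STOCKS = 10
sheets = list(product(["up", "down"], repeat=STOCKS))
print(len(sheets))  # 1024

# Whatever the market actually does, exactly one sheet matches it perfectly,
# so exactly one investor sees a flawless 10-for-10 track record.
actual = tuple("up" if i % 3 else "down" for i in range(STOCKS))  # arbitrary outcome
perfect = [s for s in sheets if s == actual]
print(len(perfect))  # 1
```

The point is that the perfect record carries no information about Mallet's skill: one investor was guaranteed to receive it, regardless of what the market did.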
Related to "magic phrases", what expressions or turns of phrase work for you, but don't work well for a typical real-world audience?
I tend to use "it's not magic" as shorthand for "it's not some inscrutable black-boxed phenomenon that defies analysis and reasoning". Moreover, I seem to have internalised this as a reaction whenever I hear someone describing something as if it were such a phenomenon. Using the phrase generally doesn't go down well, though.
If you read any amount of history, you will discover that people of various times and places have matter-of-factly believed things that today we find incredible (in the original sense of “not credible”). I have found, however, that one of the most interesting questions one can ask is “What if it really was like that?”
... What I’m encouraging is a variant of the exercise I’ve previously called “killing the Buddha”. Sometimes the consequences of supposing that our ancestors reported their experience of the world faithfully, and that their customs were rational adaptations to that experience, lead us to conclusions we find preposterous or uncomfortable. I think that the more uncomfortable we get, the more important it becomes to ask ourselves “What if it really was like that?”
In my experience, moral panics are almost never about what they claim to be about. I am just (barely) old enough to remember the tail end of the period (around 1965) when conservative panic about drugs and rock music was actually rooted in a not very-thinly-veiled fear of the corrupting influence of non-whites on pure American children. In retrospect it’s easy to understand as a reaction against the gradual breakdown of both legally enforced and de-facto racial segregation in the U.S.
Comment author:Alsadius
03 July 2013 06:20:38PM
5 points
[-]
It seems fairly believable that an oppressed underclass that is intentionally deprived of education and opportunity will, on average, be cruder, less intellectually inclined, have less wealth and status, and more prone to failing at life in various ways due to the lack of a support structure. This is true of any group, whatever their intrinsic nature, simply due to the act of discrimination.
I remember once reading an essay about Jews in (IIRC) Rudyard Kipling's works, where they're portrayed in pretty appalling ways, while all sorts of other groups are portrayed positively. The author came to the conclusion that acting in cowardly and profiteering fashion was a survival tactic created by anti-semitic laws, and that Kipling was probably just conveying the reality of the time. (I'm not enough of an expert to judge the truth of this, but it seemed reasonable.)
Sure. Also see the recent follow-ups to the Stanford marshmallow experiment. It sure looks like some of what was once considered to be innate lack of self-restraint may rather be acquired by living in an environment where others are unreliable, promises are broken, etc.
Comment author:Desrtopa
08 October 2013 11:36:24PM
1 point
[-]
Possibly, but the followup only tells us that, at least in the short term, kids will be less likely to delay gratification from specific individuals who have proven to be untrustworthy (and the protocol of that experiment kind of went for overkill on the "demonstrating untrustworthiness" angle.)
It might be that children become less able to delay gratification if raised in environments where they cannot trust promises from their guidance figures, but the same effect could very easily be caused by rational discounting of the value of promises from individuals who have proven unlikely to deliver on them.
Comment author:Viliam_Bur
06 July 2013 05:02:26PM
2 points
[-]
Your argument sounds perfectly reasonable to me. Yet I would advise against reversing stupidity. Just because there is a systematic influence that makes things worse for the oppressed people, it does not automatically mean that without that influence all the differences would disappear. Although it is worth testing experimentally.
Comment author:gjm
09 July 2013 09:38:35AM
*
7 points
[-]
{EDITED to clarify, as kinda suggested by wedrifid, some highly relevant context.}
This comment by JoshuaZ was, when I saw it, voted down to -3, despite the fact that it
addresses the question it's responding to
gives good reasons for making the guess JoshuaZ said he made
seems like it's at a pretty well calibrated level of confidence
is polite, on topic, and coherent.
A number of JoshuaZ's other recent comments there have received similar treatment. It seems a reasonable conclusion (though maybe there are other explanations?) that multiple LW accounts have, within a short period of time, been downvoting perfectly decent comments by JoshuaZ. As per other discussions in that thread [EDITED to add: see next paragraph for more specifics], this seems to have been provoked by his making some "pro-feminist" remarks in the discussions of that topic brought up by recent events in HPMOR.
{EDITED to add...} Highly relevant context: Elsewhere in the thread JoshuaZ reports that, apparently in response to his comments in that discussion, he has had a large number of comments on other topics downvoted in rapid succession. This, to my mind, greatly raises the probability that what's going on is "playing the man, not the ball": that the cause of the downvotes isn't simply that many LW participants disagree strongly with me about the merits of the individual comments.
It seems to me that this is a kind of abuse that needs to be stopped. To be clear, I don't mean abuse of JoshuaZ, who I bet is perfectly capable of handling it. I mean abuse of LW. Specifically, it appears to be a concerted attempt to shape discussions here not by rational argument, nor even by appeal to emotion, but by intimidation.
(I suppose I should mention an amusing contrary hypothesis to which I attach very low probability. Perhaps the downvotes are from friends of JoshuaZ, who hope to attract sympathy upvotes and will change their own downvotes to upvotes in a week or two when no one's watching any more.)
I would address this to the LW admins by PM if I knew who they are, but the only person I know to be an LW admin is Eliezer and I believe he's very busy at the moment.
{EDITED to add ...} One other remark, just in case of suspicions. I am not JoshuaZ, nor do I have any idea who he is outside LW, nor (so far as I know) have I had any interaction with him outside LW, nor have I had enough in-LW interaction with him to regard him as an ally or a friend or anything of the kind. There is no personal element to any of what I have said.
{Totally irrelevant remark: The squiggly brackets are because [this sort] which I'd normally use for noting what I've edited interacts badly with the Markdown hyperlink syntax.}
Comment author:wedrifid
09 July 2013 03:13:24PM
2 points
[-]
{Totally irrelevant remark: The squiggly brackets are because [this sort] which I'd normally use for noting what I've edited interacts badly with the Markdown hyperlink syntax.}
The escape character, which solves this and various other potential problems, is "\".
Comment author:wedrifid
09 July 2013 10:27:04AM
*
2 points
[-]
This comment by JoshuaZ was, when I saw it, voted down to -3, despite the fact that it
addresses the question it's responding to
gives good reasons for making the guess JoshuaZ said he made
seems like it's at a pretty well calibrated level of confidence
is polite, on topic, and coherent.
I downvoted that comment and encourage others to feel free to downvote any comment of a type they would prefer not to see on less wrong. Most (but not quite all) cases of people using social politics to force their preferences onto others are things I wish to see less of. This includes but is not limited to sex politics. Being 'on topic' is no virtue when the topic itself is toxic.
At your prompting I have downvoted the parent of the comment in question. To whatever extent a comment is justified by being an answer to a question, the asking of said question must assume responsibility. For the same reason I have no objection if others choose to downvote my own contributions to that thread for what could be considered "feeding". Kawoomba has a good point.
Note: This is a different issue to the systematic downvoting of a user on all subjects (sometimes referred to as 'karma assassination'). That is universally considered an abuse of the system. However the example you give only demonstrates your subjective disagreement with the evaluations of some others regarding the desirability of a particular comment.
You have defined your campaign by references to "this sort of abuse" where 'this' refers to comments like the example comment being downvoted. As such I cannot support it. People are allowed to not like stuff and vote it down. If you had instead made your campaign to be against karma-assassination then I would support it. I myself have lost several thousand karma in bursts like that. I suggest revision.
Comment author:gjm
09 July 2013 12:04:06PM
2 points
[-]
This is a different issue to [...] 'karma assassination'
It should be mentioned explicitly here -- as it has been in the discussion in the other thread, and as I know you have seen since you replied to it -- that JoshuaZ reports precisely the sort of "karma assassination" behaviour you describe, in connection with the same topic. It's because of that context that I think it likely that the highly negative score of his comment is at least partly the result of punitive downvoting aimed at him rather than at his comment specifically.
I shall amend my comment upthread to mention this context, which I agree is relevant.
Yes, it may be reasonable to decide that a particular topic is toxic and try to discourage all posting on that topic. (Though I think occasional comments arguing for dropping the subject would be a far better way of doing that than flinging downvotes around.) But that is plainly not what was happening, because only comments on one side of the issue were sitting at gratuitously low values relative to, for want of a better term, their topic-agnostic merit. (Your own description of your own actions is some evidence for this: you had downvoted that comment but not its parent.)
I am not sure why you think the word "campaign" is appropriate, though I can see why you might find it rhetorically convenient. I see what looks to me like a systematic attempt to stop LW participants expressing certain sorts of opinion, through intimidation rather than argument, I think that's bad, and I have said so a couple of times and expressed willingness to help technically if others agree with me. That's a "campaign"?
Comment author:gjm
09 July 2013 09:42:08AM
1 point
[-]
If the information needed to take action against this sort of abuse is difficult to do anything with because it requires grovelling through whatever database underlies LW, I hereby volunteer (if told it would be useful by someone with power to use it) to make whatever software enhancements to the LW code are required to make it easy.
(I have no experience with the LW codebase but am an experienced software person. Getting-started pointers would be welcome if anyone takes me up on that offer.)
This was something I was meaning to post about in some of the gender discussions, but I wasn't sure that a significant proportion of men were still put off by women who were direct about wanting sex with them-- but apparently, it's still pretty common.
Comment author:Viliam_Bur
14 July 2013 09:58:33AM
*
0 points
[-]
There is a difference between "I want sex with you specifically (because you attract me)", and "I want sex with anyone (and you are the nearest one)". For me, the former would feel nice, but the latter would feel... creepy.
This may be another situation of not being specific: when women report that "men were put off by them being direct about wanting sex with them", I don't know which of these situations it was. Also, it depends on context; there is a difference between getting a sex offer from a friendly person in a romantic situation and getting a sex offer from an unknown, heavily drunk woman at a disco (this happened to me, and yes, I was put off). These details may change the situation, and are usually not reported, because of course the goal of the report is to make the other people seem horrible.
Another possibility: if a woman makes a courteous and straightforward statement of interest, there's no guarantee that the man is likewise interested, and she might interpret his rejection as punishment for being straightforward, rather than as a sign that he was never going to reciprocate anyway.
From the comments, and I admit there was less than I thought there was going to be:
I remember when I first started dating my fiancé (four years ago now!), he told me that if a woman tells a man she likes him before he makes a move, he'll stop liking her. I thought this was supremely stupid, so I told my dad about it in a can-you-believe-he-said-that-roll-eyes fashion, but my dad actually told me he thinks that's true! I definitely want to break that train of thought in the next generation. It's really stupid, like teaching women that speaking up makes them unattractive.
And seriously, if I hadn't been direct about my feelings when I started dating my fiancé, it would've taken forever for our relationship to become more serious, because who wants to risk rejection by asking out someone who has given no indication that they like you?
**
It's fascinating to me because I'm recently divorced and back in the dating game, and I am failing miserably because I refuse to do this. I'm an honest and open person. I don't believe in playing games, and the games confuse me anyway. If I like someone, I tell them. And apparently this is now perceived as "coming on too strong" and is INTIMIDATING? My friends are convinced I push people away by being too forward. My gosh, I'm not saying, "Hello, nice to meet you, let's screw." I'm saying things like, "We've been talking for awhile and I think you're interesting. Let's get drinks sometimes." How is this intimidating? I think you hit the nail on the head.
**
One of the reasons women may not want to say yes immediately is the continuing social stigma attached to confident, self-assured women. I have always rejected the hard-to-get routine, and been clear in my requests for friendship or sex. Some men have been frightened by my assertiveness, but many have been relieved by it.
**
" I felt conflicted about being as blunt and up-front as I am about dating because it was contradictory to all the advice I'd always been given, but to do otherwise feels dishonest. However, everything paid off when I found my boyfriend. He's totally clueless when it comes to "playing the dating game" so he didn't push me, or try to read into what I said. He took my honesty at face value, and every "no" as a "no."
This one might be evidence-- it depends on what she meant by "everything paid off when I found my boyfriend". I'm inclined to think that her honesty didn't work a few times.
Comment author:Viliam_Bur
14 July 2013 04:59:16PM
*
1 point
[-]
Being the one who approaches has many advantages, but it comes with a cost -- one must learn to deal with rejection. There is a difference between knowing, generally, "my attractiveness is probably average", and being specifically rejected by one specific sympathetic person who seemed to be interested just a while ago, but probably just wanted to talk.
Interpreting rejection as "these men are afraid of a honest / courageous woman" can help protect the ego. It could also be why the men said it -- to avoid an offense, a confrontation. (Women also say various things that don't make much sense, when they mean: "I don't consider you attractive.") I mean, if an extremely attractive woman would approach those men, a lot of them yould probably say yes and consider themselves lucky. (This is an experimentally testable prediction!)
I've been thinking about tacit knowledge recently.
A very concrete example of tacit knowledge that I rub up against on a regular basis is a basic understanding of file types. In the past I have needed to explain to educated and ostensibly computer-literate professionals under the age of 40 that a jpeg is an image, and a PDF is a document, and they're different kinds of entities that aren't freely interchangeable. It's difficult for me to imagine how someone could not know this. I don't recall ever having to learn it. It seems intuitively obvious. (Uh-oh!)
So I wonder if there aren't some massive gains to be had from understanding tacit knowledge more than I do. Some applications:
Being aware of the things I know which are tacit knowledge, but not common knowledge
Building environments that impart tacit knowledge (e.g. through well-designed interfaces and clear conceptual models)
Structuring my own environment so I can more readily take on knowledge without apparent effort
Imparting useful memes implicitly to the people around me without them noticing
What do you think or know about tacit knowledge, LessWrong? Tell me. It might not be obvious.
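As a footnote to the file-type example: the distinction isn't just convention -- it's machine-checkable, because most formats announce themselves in their first few bytes. A minimal sketch (the `sniff_format` helper and its short format list are my own, purely illustrative and far from exhaustive):

```python
def sniff_format(data: bytes) -> str:
    """Guess a file's format from its leading 'magic' bytes.

    JPEG files begin with FF D8 FF, PDFs with the ASCII text '%PDF',
    and PNGs with an 8-byte signature. Illustrative only.
    """
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if data.startswith(b"%PDF"):
        return "pdf"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    return "unknown"

print(sniff_format(b"%PDF-1.4 ..."))         # pdf
print(sniff_format(b"\xff\xd8\xff\xe0 ..."))  # jpeg
```

So the "jpeg vs. PDF" fact arguably sits one level up from the bytes: the hard part isn't listing signatures, it's knowing that the distinction matters at all.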
That isn't the standard use of "tacit knowledge." At least it doesn't match the definition. Tacit knowledge is supposed to be about things that are hard to communicate. The standard examples are physical activities.
Maybe knowing when to pay attention to file extensions is tacit knowledge, but the list of what they mean is easy to write down, even if it is a very long list. Knowing that it is valuable to know about them is probably the key these people were missing, or perhaps they failed to accurately assess the detail and correctness of their beliefs about file types.
The PDF has additional structure which can support such functionality as copying text, hyperlinks, etc, but the primary function of a PDF is to represent a specific image (particularly, the same image whether displayed on screen or on paper).
Certainly a PDF is more "document"-ish than a JPEG, but there are also "document" qualities a PDF is notably lacking, such as being able to edit it and have the text reflow appropriately (which comes from having a structure of higher-level objects like "this text is in two columns with margins like so" and "this is a figure with caption" and an algorithm to do the layout). To say that there is a sharp line and that PDF belongs on the "document" side is, in my opinion, a poor use of words.
I'm not sure I want to get into an ontological debate on whether a PDF is a document or not, but I believe the fact that it's got the word "document" in its name and is primarily used for representing de facto documents makes my original statement accurate to several orders of approximation.
Comment author:elharo
05 July 2013 01:38:39PM
*
0 points
[-]
Uh-oh indeed. Like most statements involving the word "is", this is probably one of those questions that should be dissolved. Thus I will ask:
What do you mean when you say document? I.e. what are the characteristics that a document has which a JPEG file does not, and which a PDF does have? Why is it wrong for something that is an image to also be a document?
This seems to be actively running away from the point. Also, see the other response re: my lack of interest in this particular ontological discussion.
In my example, there's also a concrete reason to distinguish between images and documents. The image is going to be embedded on a webpage, where people will simply look at it. Meanwhile, the document is going to be printed off as an actual physical document. Their respective formats are generally optimised for these different purposes.
Comment author:Viliam_Bur
06 July 2013 11:27:41AM
*
1 point
[-]
I'll try: You don't need OCR to get the words out of the document. An image is just dots and/or geometric shapes. (Which would make a copy-protected PDF not a document.)
I've been talking to some friends who have some rather odd spiritual (in the sense of disorganised religion) beliefs. Odd because it's a combination of modern philosophy LW would be familiar with (acausal communication between worlds of Tegmark's level IV multiverse), ancient religion, and general weirdness. I have trouble putting my finger on exactly what is wrong with this reasoning, although I'm fairly sure there is a flaw, in the same way I'm quite sure I'm not a Boltzmann brain, but it can be hard articulating why.
So, if anyone is interested, here is the reasoning:
1) Dualism is wrong, due to major philosophical problems as well as Occam's razor
2) I think therefore I am, so I know that the 'mental world' exists.
3) Therefore Idealism is true, the mental world exists but the physical is just an illusion
4) In response to 'so why can't you fly?' the answer is a lack of mental discipline: after all, it's hard to control your thoughts
5) If two different people existed in the same universe, there is no reason why they would perceive the same illusions.
6) Therefore, each universe consists of one conscious observer and their illusory reality
7) But Tegmark's level IV multiverse is true, so we can acausally communicate between worlds; in fact all conversations are actually acausal communication between worlds.
8) This also implies there is reincarnation, of a sort - there is no body to die, so you just construct a new illusory reality.
From here on it gets into more standard 'spiritual' realms, although I did find it amusing when my friend told me that there are at least aleph-2 gods.
I should state that these beliefs are largely pointless, in that it's not obvious that they actually influence any decisions the believers make, and that they do seem to make people happy without any major downsides.
I should also make it clear that I don't believe this, because I wouldn't want to lose status as a rationalist by believing in something unpopular!
TL;DR
To a large extent, this boils down to: how do I distinguish between the hypothesis that the universe is lawful, and the hypothesis that the universe is determined by my beliefs and I believe it to be lawful?
Comment author:Viliam_Bur
06 July 2013 04:53:17PM
6 points
[-]
In response to 'so why can't you fly?' the answer is a lack of mental discipline: after all, it's hard to control your thoughts
How do you know what you claim to know? (Okay, not you, but whoever said this.) Do you have any reproducible experimental proof of any violation of physical laws achieved through mental discipline?
Isn't it suspicious that undisciplined thoughts are enough to create an illusion of physical reality perfectly obeying the physical laws, but are unable to violate the laws? That sounds to me like speaking about an archer who always perfectly hits the middle of the target, but is unable to shoot the arrow outside of the target, supposedly because he is too clumsy. I mean, isn't hitting the center of the target more difficult than missing the target? Wouldn't creating a reality perfectly obeying the laws of physics all the time require more mental discipline than having things happen randomly?
I am sure there can be a dozen ad-hoc explanations; I just wanted to show how it doesn't make sense.
This also implies there is reincarnation, of a sort - there is no body to die, so you just construct a new illusory reality.
So, if you get killed, your mental discipline will improve enough to let you create new reality you can't create now? Interesting...
Comment author:Alejandro1
29 July 2013 12:32:53AM
2 points
[-]
It looks like you and your friends have rediscovered Leibniz's monadology. Leibniz believed that only minds were real, matter as distinct from minds is an illusion, and minds do not interact causally, but they seem to share a same "reality" by virtue of a "pre-established harmony" between their non-causally related experiences. This last part can perhaps be reexpressed in modern terms as acausal communication.
Comment author:metatroll
09 July 2013 11:45:26AM
2 points
[-]
I guess the fact that I lack mental discipline is also the reason that I lack mental discipline, and the reason that lacking mental discipline causes me to lack mental discipline, too.
Sorry, to clarify, are you saying that the reasoning is circular and thus faulty?
The thing about mental health is that it is circular, in that there are vicious cycles. If I have mental discipline, I can discipline myself to practice discipline more.
Where do your friends get this stuff? Did they read the Sequences on LSD or something? Do they do anything differently in everyday life on account of it (besides talking about it)?
To a large extent, this boils down to: how do I distinguish between the hypothesis that the universe is lawful, and the hypothesis that the universe is determined by my beliefs and I believe it to be lawful?
I am surprised no one else has brought up the LW party line: consequentialism.
What is the alternative?
What is the consequence of your decision?
Probably the alternative is that someone else donates sperm. Either way, they raise a child that is not the husband's. If creating such a life is terrible (which I don't believe), is it worse that it is your child than someone else's? Consequentialism rejects the idea that you are complicit in one circumstance and not the other.
There are other options, like trying to convince them not to have children, or to get a donation from the husband's relatives, but they are unlikely to work.
If the choice is between your sperm or another's, then, as Qiaochu says, the main difference to the child is genes of the donor. Also, your decision might affect your relationship with the couple.
Assuming you don't have any particular reason to expect that this couple will be abusive, it's more ethical the better your genes are. If you have high IQ or other desirable heritable traits, great. (It seems plausible to anticipate that high IQ will become even more closely correlated with success in the future than it is now.) If you have mutations that might cause horrible genetic disorders, less great.
The child is wanted, so as long as they don't actually neglect it, it'll grow up fine.
Note that if you donate sperm without going through the appropriate regulatory hoops as a sperm donor (which vary per country), you will be liable for child support.
It creates a child who will not be raised by their biological father.
since you might be legally on the hook for child support.
Unlikely in this context, since they are much wealthier than I. I doubt they would want to share custody with me in exchange for my pittance of a salary.
They might die, and the child would still have rights against you.
Comment author:falenas108
02 July 2013 05:20:51PM
4 points
[-]
Questions about the validity of the Cinderella effect aside, the OP knows the couple and can probably make a more informed judgement about this.
Of course, you can't tell this perfectly. But if the OP is anything more than casual acquaintances with the couple, I would say specific evidence probably overpowers the general case.
Comment author:Torello
01 July 2013 06:20:52PM
1 point
[-]
Seeking Educational Advice...
I imagine some LW users have these questions, or can answer them. Sorry if this isn't the right place (but please point me to the right place!).
I’m thinking of returning to university to study evolution/biology, the mind, IT, science-type stuff.
Is there any legitimate way (I mean actually achievable: you have first-hand experience, can point to concrete resources) to attend an adequate university for no or low cost?
How can I measure my aptitude for various fields (for cheap/free)? (I did an undergrad degree in education which was so easy I don't know if I could make the grades in a demanding field).
My first undergrad degree (education) was non-science, so should I go back for another undergrad degree, or try to fill gaps on my own and do a post-grad in something with science?
I've started investigating free online education (lesswrong, edx, coursera, etc) but I have concerns: don't I need credentials? Don't I need classmates/colleagues/collaborators to help teach me, motivate me, and supply me with equipment? How do I know if I really understand the material? How do I address these concerns?
p.s. – I’m all for “munchkin” style answers/solutions to these problems, so long as they are actually feasible
Do you care about the piece of paper? If not, you can likely attend courses in the literal sense - just show up for the lectures - without paying anything at all. Old textbooks are cheap, if you want problem sets, and you almost certainly do - I strongly opine that you cannot learn anything even remotely math-oriented without doing problems. But no rule says you have to do the same problems the others in the class are doing.
Clearly, this is not the method for you if you need a lot of feedback and guidance, nor if you want the credential in addition to the knowledge.
Comment author:Kawoomba
01 July 2013 06:28:22PM
3 points
[-]
Mostly depends on what languages you speak fluently, what countries you can obtain visas for, your willingness to relocate to said countries and your plans on what you'll do with the "science-type stuff". If you want advice, edit your post accordingly. Most of the answers will come out to public colleges in your home state, or Europe. Or plumbing.
How can I measure my aptitude for various fields (for cheap/free)? (I did an undergrad degree in education which was so easy I don't know if I could make the grades in a demanding field).
Get a textbook of the appropriate level on the subject that has exercises and the correct answers to them, read the book, then do the exercises and see what you come up with? If it's math or physics, you should be able to tell by yourself whether your solutions resemble the example solutions in the text, seem to make sense and come up with the correct answers.
I don't know how well this will work with evolutionary biology or cognitive science. If you want to include philosophy in the "mind" part, it's my understanding that you need to be a trained academic philosopher to reliably tell fancy garbage and acceptable academic philosophy apart, so the approach probably won't work there.
After reading a couple of introductory textbooks, try to find grad students in the field in online chats and ask them about the stuff to gauge how well you've understood it. You can probably find plenty of math and computer science literate people on Lesswrong to bounce stuff off of.
Also, do you actually know you need to attend lectures to learn things, or are you just planning to do this because attending lectures is what people who get educated are supposed to do in the standard narrative? I'm pretty much incapable of following spoken academic lectures myself, and basically learn most everything by reading. If I wanted to get an education, I'd just go for a big stack of textbooks and a good note-taking system and ignore live lectures entirely at least on the undergrad level.
Comment author:ChristianKl
02 July 2013 01:15:45PM
*
1 point
[-]
Are there any legitimate way (I mean actually achievable, you have first-hand experience, can point to concrete resources) to attend an adequate university for no or low-cost?
Of course, I just go to any university in my city and they don't cost anything.
I've started investigating free online education (lesswrong, edx, coursera, etc) but I have concerns: don't I need credentials?
Whether or not you need credentials depends on your goals. Yudkowsky started SI/MIRI without any credentials.
Don't I need classmates/colleagues/collaborators to help teach me,
When it comes to programming questions that I face as part of my university studies I go to StackOverflow.
motivate me,
Depends on your ability to self motivate.
and supply me with equipment?
Depends on whether you want to do something that needs equipment.
How do I know if I really understand the material?
If you can remember the Anki cards about a topic it's likely that you understand the topic. But more importantly, what's your goal?
What do you want to be able to do with your "understanding of the material"?
Comment author:pragmatist
02 July 2013 04:41:20AM
*
1 point
[-]
I went to a college in the United States where admissions are need-blind (they don't consider how much financial aid you'll need in their decision to admit you) and that offers full-need aid (once admitted, they will meet any financial need you demonstrate). I was an international student, so the aid was not in the form of a loan, but a straight-up grant. I basically ended up paying nothing to go to a college that normally charges $60k+ a year. So if you're not American, this is a possibility. If you are American, I understand that most (all?) of the financial aid is in the form of federal loans, which you may or may not want to incur.
Wikipedia says there are only seven US universities that offer full need-blind aid to international students. There are many more that are need-blind and full-need for US students, although this will probably involve loans. That Wikipedia page also lists four non-US universities that offer need-blind and full-need aid to all applicants. If you are American, applying to one of those may be a better bet, because you might get a grant instead of a loan. I've heard good things about the National University of Singapore.
Comment author:ciphergoth
11 July 2013 07:59:10PM
1 point
[-]
Is there a way of making precise, and proving, something like this?
For any noisy dynamic system describable with differential equations, observed through a noisy digitised channel, there exists a program which produces an output stream indistinguishable from the system.
It would be good to add some notion of input too.
There are several issues with making this precise and avoiding certain problems, but I suspect all of this is already solved so it's probably not worth me going into detail here. In the unlikely event this isn't already a solved problem, I could have a go at precisely stating and proving this.
I don't completely understand what you mean (in particular, I would really like you to be more specific about what you mean by "noisy" and "indistinguishable"), but this looks like it shouldn't be true on cardinality grounds. There should be uncountably many possible distinguishable noisy behaviors of a dynamical system.
Comment author:ciphergoth
12 July 2013 07:31:17AM
1 point
[-]
By "indistinguishable" I mean some sort of bound on the advantage of any algorithm trying to tell the two apart. I think if I try to answer on "noisy" without knowing more about what you need specified I won't answer your question - I'm thinking of some sort of continuous equivalent to the role that noise plays in a Kalman filter.
The cardinality thing is a big problem -- if the "system" is a single uncomputable real number that doesn't change, from which we take multiple noisy readings, then for any program that tries to emulate it, there is a distinguishing program whose advantage approaches 1 as the number of readings goes up.
It still feels like there must be something like this that we can prove!
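Not a proof, but a toy illustration of the setup (entirely my own construction): a discretized noisy linear system observed through additive observation noise and a quantizing channel, next to a "program" that simulates the same equations. Since both outputs are draws from the same distribution, no distinguisher gets an advantage; a crude mean-based distinguisher illustrates this:

```python
import random

def quantize(x, step=0.5):
    """Noisy digitised channel: snap the reading to a fixed grid."""
    return round(x / step) * step

def stream(seed, n=10000, a=0.9, q=0.1, r=0.2):
    """Readings of the system x' = a*x + process noise, observed through
    additive noise and quantization. The 'real system' and the 'program'
    are both just this function: same equations, same noise law, hence
    identically distributed output streams."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0, q)           # system dynamics + process noise
        out.append(quantize(x + rng.gauss(0, r)))  # noisy, digitised reading
    return out

system = stream(seed=1)   # "the physical system"
program = stream(seed=2)  # "the emulating program"

# A distinguisher comparing empirical means gets essentially no advantage:
adv = abs(sum(system) / len(system) - sum(program) / len(program))
print(adv)  # tiny, by the law of large numbers
```

The hard part, as the cardinality objection shows, is ruling out cleverer distinguishers than this one; the sketch only shows what "advantage" means concretely.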
That said, I suspect that this is not a major aspect of the Filter. If the cost goes up, the main impact would be on consumer goods which would become more expensive. That's unpleasant but not a Filter event. It also isn't relevant from the standpoint of resources necessary to bootstrap us back up to the current tech level in event of a major disaster since there will be all sorts of nearly pure copper that could be scavenged from the remains of civilization.
This may however be a strong argument for either finding new copper replacements (possibly novel alloys), or for the development of asteroid mining which will help out with a lot of different metals.
Comment author:BerryPick6
07 July 2013 10:28:43PM
1 point
[-]
I need help finding a particular thread on LW, it was a discussion of either utility or ethics, and it utilized the symbols Q and Q* extensively, as well as talking about Lost Purposes. My inability to locate it is causing me brain hurt.
Comment author:gwern
08 July 2013 07:54:28PM
2 points
[-]
That's hard, because search engines have been dumbed down to the point where you can't google for a literal 'Q*'... A local search turned up http://lesswrong.com/lw/1zv/the_shabbos_goy/ as having one use of 'Q*' and bringing up 'lost purposes'.
Comment author:BerryPick6
09 July 2013 12:09:39AM
2 points
[-]
Probably made even more difficult because I misremembered the letter. It was G*, and the article was The Importance of Goodhart's Law. It suddenly came back to me in a flash after seeing your reply, so thanks!
It's particularly ironic because in that very post, he mentions:
I can’t find the link for this, but negatively phrased information can sometimes reinforce the positive version of that information.
Which seems to be what I am falling for. He outright says:
I think some of the arguments below will be completely correct, others correct only in certain senses and situations, and still others intriguing but wrong. I think that modern pop social psychology probably contains the same three categories in about the same breakdown, so I don’t feel too bad about this.
So to sum up, here is my experience:
1: Yvain: "Here are some arguments. I don't fully believe most of them."
2: I start reading.
3: Michaelos: "Huh. All of these seem to be somewhat well reasoned arguments, there are links, and I can follow the logic on most of them."
4: At some point, I forget the "Yvain doesn't believe this" tag.
5: I then read his summary which points out that these also have entirely opposite summaries which are also justified.
6: I find myself flabbergasted that I've made the same mistake about Yvain's writing again.
Based on this, I get the feeling I should be doing something differently when I read Yvain's articles, but I'm not even sure what that something is.
You should probably update towards "being convincing to me is not sufficient evidence of truth." Everything got easier once I stopped believing I was competent to judge claims about X by people who investigate X professionally. I find it better to investigate their epistemic hygiene rather than their claims. If their epistemic hygiene seems good (can be domain specific) I update towards their conclusions on X.
Comment author:Alsadius
03 July 2013 05:03:27PM
1 point
[-]
I'm looking for good, free, online resources on SQL and/or VBA programming, to aid in the hunt for my next job. Does anyone have any useful links? As well, any suggestions on a good program to use for SQL?
Comment author:luminosity
03 July 2013 11:37:59PM
2 points
[-]
What do you mean by "a good program to use for SQL?" A database engine to run queries in? A command line or GUI client for connecting to such a database? Something else entirely?
For what it's worth if you're looking for a database engine, my recommendation is Postgres. Free, open source, and a lot stricter than MySQL, even if you make MySQL as strict as you possibly can.
As for learning, I don't know which tutorials are still around nowadays. I do recommend, if you're learning it, actually building something where you need to use queries.
Toy example: Build a weblog.
Start by creating tables for posts and comments.
Write an admin interface for creating new posts. Write a form for saving comments on a post.
Then simple queries to pull out the latest few posts for display, and comments for display on a post's page.
Add a simple tagging and other meta data facility.
Write some reports using data available to you (eg, find top ten most commented posts, find most used tag, if you add viewer ratings, or unique view tracking, then grabbing most viewed posts, and counts of times viewed).
This should take you through exercises from very basic and easy statements through to some more advanced topics (grouping etc), and I find using a skill incredibly valuable to learning and internalising that skill. My first computer program was a blog, and while it was a disaster and a mess in many ways, I learned (or internalised) a lot about programming, and a lot about SQL in the process.
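For what it's worth, the first few steps of the exercise above can be done entirely inside Python's bundled sqlite3 module, with no database server to install. A minimal sketch (the table and column names are my own invention for illustration):

```python
import sqlite3

# In-memory database: create the posts and comments tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        created TEXT NOT NULL
    );
    CREATE TABLE comments (
        id INTEGER PRIMARY KEY,
        post_id INTEGER NOT NULL REFERENCES posts(id),
        body TEXT NOT NULL
    );
""")
conn.executemany("INSERT INTO posts (title, created) VALUES (?, ?)",
                 [("First post", "2013-07-01"),
                  ("Second post", "2013-07-02"),
                  ("Third post", "2013-07-03")])
conn.executemany("INSERT INTO comments (post_id, body) VALUES (?, ?)",
                 [(1, "hi"), (1, "hello"), (3, "nice")])

# Simple query: the latest two posts for the front page.
latest = conn.execute(
    "SELECT title FROM posts ORDER BY created DESC LIMIT 2").fetchall()

# Report query: the most-commented post (a GROUP BY, one of the
# 'more advanced topics' mentioned above).
top = conn.execute("""
    SELECT p.title, COUNT(c.id) AS n
    FROM posts p LEFT JOIN comments c ON c.post_id = p.id
    GROUP BY p.id ORDER BY n DESC LIMIT 1
""").fetchone()

print(latest)  # [('Third post',), ('Second post',)]
print(top)     # ('First post', 2)
```

SQLite is looser than Postgres about types and constraints, so it's a sandbox for learning the query language rather than a substitute for the stricter engine recommended above.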
Comment author:Emile
02 July 2013 09:47:36PM
3 points
[-]
I answered as close as I can remember, but I think touch typing is more of something I kind of picked up as time went by, rather than something I specifically learned at one point in time. I remember pushing myself to practice touch typing at various points, but the general recollection I have is that I didn't really practice in a focused, systematic way, and yet now I can type this without needing to look at my keyboard (and in fact, when I look at my keyboard I'll be likely to make more mistakes).
So I probably picked it up in my early twenties with a lot of typing of homework and essays and posts on forums.
Comment author:kpreid
03 July 2013 03:16:31AM
2 points
[-]
By “touch-typing” do you mean typing without looking at the keyboard, that and typing using all ten fingers, or that and using the formal start-with-your-fingers-on-the-home-row techniques?
Comment author:AlexSchell
05 July 2013 09:59:47PM
1 point
[-]
For consistency with previous results, answer using your best guess as to what I mean. (Jura V nfxrq, V zrnag gur jubyr ubzrebj ohfvarff ohg V qvqa'g ernyvmr gung gbhpu-glcvat vf glcvpnyyl qrsvarq va gur oebnqrfg frafr lbh zragvba.)
Comment author:Rukifellth
13 July 2013 11:52:56PM
*
-2 points
[-]
I personally regard this entire subject as a memetic hazard, and will rot13 accordingly.
Jung qbrf rirelbar guvax bs Bcra Vaqvivqhnyvfz, rkcynvarq ol Rqjneq Zvyyre nf gur pbaprcg juvpu cbfvgf:
... gung gurer vf bayl bar crefba va gur havirefr, lbh, naq rirelbar lbh frr nebhaq lbh vf ernyyl whfg lbh.
Gur pbaprcg vf rkcynvarq nf n pbagenfg sebz gur pbairagvbany ivrj bs Pybfrq Vaqvivqhnyvfz, va juvpu gurer ner znal crefbaf naq gur Ohqquvfg-yvxr ivrj bs Rzcgl Vaqvivqhnyvfz, va juvpu gurer ner ab crefbaf.
V nfxrq vs gurer jrer nal nethzragf sbe Bcra Vaqvivqhnyvfz, be whfg nethzragf ntnvafg Pybfrq naq Rzcgl Vaqvivqhnyvfz gung yrnir BV nf gur bayl nygreangvir. Vpbcb Irggbev rkcynvarq vg yvxr guvf:
PV pnaabg znantr fngvfsnpgbevyl gur "pbagvahvgl ceboyrz" (jung znxrf lbh gb pbagvahr gb erznva lbh va gvzr). Guvf vf jul va "Ernfba naq Crefbaf", Qrerx Cnesvg cebcbfrq RV nf n fbyhgvba. Va "V Nz Lbh", Qnavry Xbynx cebcbfrq BV, fubjvat gung grpuavpnyyl gurl ner rdhvinyrag. Fb pubbfvat orgjrra RV naq BV frrzf gb or n znggre bs crefbany gnfgr. Znlor gurve qvssreraprf zvtug or erqhprq gb n grezvabybtl ceboyrz. Bgurejvfr, V pbafvqre BV zber fgebat orpnhfr vg pna rkcynva jung V pnyyrq "gur vaqvivqhny rkvfgragvny ceboyrz" [Jung jr zrna jura jr nfx bhefryirf "Pbhyq V unir arire rkvfgrq?"]
Gur rrevrfg cneg nobhg gur Snprobbx tebhc "V Nz Lbh: Qvfphffvbaf va Bcra Vaqvivqhnyvfz" vf gung gur crbcyr va gung tebhc gerng gur pbaprcg bs gurer orvat bayl bar crefba gur fnzr jnl gung Puevfgvnaf gerng gur pbaprcg bs n Tbq gung jvyy qnza gurve ybirq barf gb Uryy sbe abg oryvrivat va Uvz. Vg'f nf vs ab bar va gur tebhc ernyvmrf gur frpbaq yriry vzcyvpngvbaf bs gurer abg orvat nalbar ryfr, be znlor gurl qba'g rira pner.
Comment author:Thomas
16 July 2013 04:03:44PM
1 point
[-]
I think it is true. Self-awareness is not hardware- (wetware-, whatever-ware-) dependent. Just upload yourself and everything will be just fine. You'll be in two places at the same time, but with no communication between your instances, the old one and the new one.
The same situation holds here, only you have more than one natural-born upload. Many billion, in fact.
Naturalism leads to this (frightening) conclusion.
Comment author:Thomas
16 July 2013 04:58:36PM
*
0 points
[-]
I am not sure what you mean by this blackboxing.
But to think that the process of consciousness will work inside a computer, but will not work inside some other human skull, is naive.
It should work either in both places or nowhere.
People respond to this with "My memories are crucial, they are my unique identifier!" Well, you can forget pretty much everything and you will feel the same way. Besides, at every moment that you are self-aware, you are remembering different little pieces of everything; it doesn't matter what exactly. It might be a memory of a total solar eclipse -- millions have almost the same short movie in their heads. Nothing unique here.
Consciousness is a funny algorithm, running everywhere. This is why you should care about the future and behave accordingly at the present time.
Comment author:Rukifellth
16 July 2013 05:12:19PM
1 point
[-]
Black boxing is when a complicated process is skipped over in reasoning. You supposed that mind uploading was possible for the sake of argument, to support a conclusion outside of the argument.
Comment author:Thomas
16 July 2013 05:24:29PM
1 point
[-]
I see no reason why uploading would be impossible, just as I see no reason why interstellar travel would be impossible.
I have no idea how to actually do either, but that's another matter.
If the naturalistic view is valid, it is difficult to see a reason why those two would be impossible. But if the Universe is a magic place, then of course. It's possible that they are both impossible due to some spell of a witch, or something.
Still, I do assign a small probability to the possibility that consciousness is something not entirely computable and therefore not executable on some model of a Turing machine. But then again, I consider that probability quite negligible.
Comment author:Thomas
16 July 2013 06:47:53PM
*
0 points
[-]
Of course. If some of us are right, consciousness is an algorithm running on a substrate able to compute it.
Then transplantation to another substrate is surely possible. How difficult this copying actually is, I wonder.
That's all assuming no magic is involved here. No spirituality, no soul, and no other holy crap.
But when we embrace the algorithmic nature of consciousness, intelligence, memories and so on, we lose the unique identifier so dear to most otherwise rational people. Their mantra goes "You only live once!" or "Everyone is a unique and unrepeatable person!" Yes, sure. So when I was born, a signal traveled across the Universe to change it from a place where I could be born to a place where this possibility has now expired for good? May I ask, is this signal faster than light? If it isn't... well, it isn't good enough.
I am just an algorithm, being computed here and there, before and now.
Comment author:Rukifellth
17 July 2013 10:44:20PM
*
0 points
[-]
I forgot to mention this, but I also tried my hand at writing an essay about this sort of thing: finding the physical manifestation of consciousness. If I could vouch for its rigor, I'd have posted it to the Facebook group already, but alas, I can't, though it may be of some use here.
Identifying the physical manifestation of consciousness.
Identifying the final place where physical cause and mental effect meet has been one of neuroscience's top questions, and as many of us know, is known as the "Hard Problem". I'd like to try my hand at making a set of rules for the development of a procedure that would pry out the location of that "final destination". The process is by elimination, ruling out as many intermediaries between consciousness and cause as possible until no intermediary remains. At such a point, it must be concluded that the cause in question is consciousness itself. The principles outlined identify the characteristics of an intermediary, so that they may be cut out. A cause is only an intermediary if it violates any one of these principles:
Instantaneous Change: A change to this physical thing must create an immediate change in mental state. For example, if the heart is our soul, shooting a person in the heart shouldn't leave even a millisecond of perception, and people with heart disease should develop psychiatric symptoms, not attributable to stress, over the course of their illness.
Predictable Change: If a small change in physical state produces a small change in mental state, then increasing the magnitude of that same change should increase the corresponding mental state without producing any surprises. If increasing that physical change begins to produce the effects of a smaller but different physical change, then there's still an intermediary between physical and mental. For example, SSRIs lift certain kinds of depression, but continued usage can "burn out" serotonin receptors, which means that chemicals like SSRIs cannot possibly be considered "units of consciousness".
Unique Change/Repeatability: A change in the state of this physical thing must create a mental state that is unique to that physical change. In graphing terms, value x cannot map to more than one value of y. If there's more than one possible y value, or multiple x's can create the same y, then there's still an intermediary between physical and mental. For example, and continuing from above, one could start to wonder if "receptors" are the "units of consciousness" and work from there by asking if it's possible to reproduce a mental state using something other than neurotransmitter receptors. If this is possible, then the "unique change" clause is violated by having multiple x's mapping onto the same y, which implies that there's an intermediary between neurotransmitter receptors and mental states.
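In mapping terms, the Unique Change clause asks that observed (physical state, mental state) pairs form a one-to-one function. A toy check over such observations might look like this (entirely my own sketch, not part of the original procedure):

```python
def is_one_to_one(pairs):
    """Check that (physical_state, mental_state) observations describe a
    one-to-one mapping: no x maps to two different y's (the mapping is a
    function), and no two distinct x's produce the same y (no intermediary
    merging distinct causes into one effect)."""
    forward, backward = {}, {}
    for x, y in pairs:
        if forward.setdefault(x, y) != y:
            return False  # same physical state, different mental states
        if backward.setdefault(y, x) != x:
            return False  # distinct physical states, same mental state
    return True
```

Either kind of failure, by the clause above, signals an intermediary still sitting between the physical cause and the mental effect.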
Suppose an LED and its switch are the same thing. To demonstrate this, we put the system through the three principles to see how it behaves. Failing any one of these tests indicates that we need to go deeper.
For the Instantaneous Change principle, we can just grab a hypothetical Planck-time high-speed camera. If the state changes of the light and the switch are perfectly in sync even at Planck-time resolution, then they are the same object. This is not the case, as even the femtosecond camera demonstrated at TED could show.
The Predictable Change principle is inapplicable, because there are only two possible states, on/off for the switch, and their two directly correlated states, on/off for the light, so we move on. We can't very well add a third state for the switch and expect any kind of change.
Unique Change can be tested by looking at the switch. It appears to be between a power source and the LED light. The method of Alexander the Great would have us cut the switch out of the circuit and see what happens when we pull the wires together. Do the wires, which have the two states, connected/unconnected, correlate directly with the LED's states of on/off? If so, then the switch was not the LED, for the states of the LED are not permanently changed.
Comment author:Rukifellth
15 July 2013 11:55:53AM
0 points
Honestly? I'd start taking antidepressants, and then embark on a life-long quest to destroy the Universe via high energy particle experiments, or perhaps an unfriendly AI.
I endorse this theory and it all adds up to normality: in the end, the theories that you offer as alternatives are all true. (I have not read anything other than your comment.)
Comment author:Rukifellth
16 July 2013 01:24:14AM
0 points
How can they, if they're mutually exclusive?
Whew, Karma. Also, why did this get downvoted so much? I'd appreciate the skepticism a lot more in the form of an argument. (No, seriously, I'd appreciate skeptical argument way more than any abstract philosophical argument should be appreciated)
Comment author:drethelin
15 July 2013 01:42:47AM
0 points
If there's only one person and everyone else is simulated in their minds then that simulation is powerful and uncontrollable enough that for all practical purposes they can act like there are other people.
Comment author:Rukifellth
15 July 2013 02:21:04AM
0 points
This concept is unlike your example, because it is still possible for this one person carrying the simulation to create an offspring or clone, and it would in time become two separate people. Open Individualism states that if the one person carrying the simulation were to somehow reproduce themselves, there would still only be one person.
Comment author:Martin-2
15 July 2013 06:12:25PM
0 points
Steven Landsburg at TBQ has posted a seemingly elementary probability puzzle that has us all scratching our heads! I'll be ignominiously giving Eliezer's explanation of Bayes' Theorem another read, and in the mean time I invite all you Bayes-warriors to come and leave your comments.
Comment author:beoShaffer
13 July 2013 11:51:54PM
0 points
Meta question. Is it better to correct typos and minor, verifiable factual errors (e.g. a date being a year off) in a post in the post's comment thread or a PM to the author?
Comment author:gwern
14 July 2013 12:08:24AM
2 points
I prefer PMs and do them often for both comments & posts. A minor correction is of no enduring interest and it's better if it didn't take up space publicly. (Can you imagine if every Wikipedia article could only be read as a sequence of diffs? That's what doing minor corrections in public is like.)
Alternatively, "I may have misunderstood, but surely..." is a good way to couch an objection.
I frequently use "Hmm, it's not entirely clear to me that [X]...", which seems very directly analogous to yours.
I like this, and also "I don't quite understand why [X]", which puts them in the pleasant position of explaining to me from a position of superiority--or sometimes realizing that they can't.
I guess this only works on people who feel friendly. Making them also feel superior... now they owe you a decent explanation.
A hostile person could find other ways to feel superior, without giving an explanation. For example, they could say: "Just use google to educate yourself, dummy!"
There's something that happens to me with alarming frequency, something that I almost never see referenced (or don't remember seeing, and thus I don't know the proper name). I'm talking about that effect where I'm reading a text (any kind: a textbook, a blog, a forum post) and suddenly discover that two minutes have passed and I've advanced six lines, but I have no idea what I just read. It's like a time black hole, and now I have to re-read it.
Sometimes it also happens in a less alarming but still bad way: for instance, when I'm reading something that is deliberately teaching me an important piece of knowledge (as in, I already know whatever is in this text IS important), I go through it without questioning anything, just "accepting" it, and a few moments later it suddenly hits me: "Wait... did he just say two pages ago that thermal radiation does NOT need matter to propagate?" and I have to go back and check that I wasn't crazy.
While I don't know the name of this effect, I have asked some acquaintances of mine about it; some agreed that they have it, others didn't. I would very much like to eliminate this flaw. Does anybody know how I could train myself not to do it, or at least the correct name, so I can research it further?
I give you credit for noticing you're running on automatic in as little as five minutes.
This is a guess, but meditation might help since it's a way of training the ability to focus.
Are you sleep deprived? This kind of attention lapse sounds like the calling card of a microsleep.
If it's material you want to/are required to learn from try taking notes as you read the material, to force yourself to recall it in your own terms/language.
If it's just recreational/online reading try increasing the font size/spacing or decreasing the browser width, or using a browser extension like readability. Don't scroll with the scroll bar or the mouse wheel - use pg up/pg down to make it easier to keep your position.
I don't know if I deliberately developed a habit of highlighting the current paragraph when reading long articles, but it has become extremely useful.
In the same vein, I get easily distracted when reading text and the ability to click around, select and deselect text that I'm reading helps me to stay engaged.
Writing that out, it sounds like it would be super distracting, but it's not (for me). Possibly related to the phenomenon where some people work better with noise in the background rather than in silence. Clicking around might help maintain a minimum level of stimulation while reading.
I do this all the time. I have seen it referred to in literature (a character reading a page three times before realising he can't take it in, as a way to show that he's extremely distracted), but that's not quite the same as just zoning out.
Probably automaticity is what you are looking for. I am not sure how to force one's mind to attend to a repetitive task. One trick for avoiding reading automaticity is to paraphrase and check for potential BS every paragraph or so.
Indeed it's something along those lines; however, in the article it's represented in a positive light, whereas my problem is that I somehow do that without comprehending anything. The article linked to an interesting program in Australia, though: QuickSmart. It's aimed at middle-school students, but I think I could perhaps benefit from it.
I can't remember where I read it, but I remember hearing that in order to really understand an argument, you have to take a leap of faith & accept all of the propositions & conclusions in that argument. If you don't, you will be automatically & subconsciously strawmanning it. After you've exposed yourself to the whole idea, you can go back & look at it critically. I have no idea if this is BS & wish I could track down where I came across it. Cheers to any help.
I have this happen sometimes - usually it's because I let my mind wander to something unrelated but I kept my eyes moving out of habit.
So, everyone agrees that commuting is terrible for the happiness of the commuter. One thing I've struggled to find much evidence about is how much the method of commute matters. If I get to commute to work in a chauffeur driven limo, is that better than driving myself? What if I live a 10 minute drive/45 minute walk from work, am I better off walking? How does public transport compare to driving?
I suspect the majority of these studies are done in US cities, so mostly cover people who drive to work (with maybe a minority who use transit). I've come across a couple of articles which suggest cycling > driving here and conflicting views on whether driving > public transit here but they're just individual studies - I was wondering if there's much more known about this, and figured that if there is, someone here probably knows it. If no one does, I might get round to a more thorough perusal of the literature myself now I've publicly announced that the subject interests me.
I think it entirely depends on what you do during your commute.
A lot of drivers who drive during rush hour feel stress because they get annoyed at the behavior of other drivers. That's terrible for the happiness of the commuter.
Traveling via public transport also gives you plenty of opportunities to get upset over other people, and it adds the chance to get upset when the bus comes a bit late.
If you travel via public transport you can do tasks like reading a book that you can't do while driving a car or cycling.
Does anyone else experience the phenomenon of perceiving the duration of a commute to be shorter when the distance is shorter? For example, it feels like it takes less time, or is more enjoyable, to walk 3/4 mile in 15 minutes than to travel a few miles by subway in 15 minutes. I think it's because being close in proximity makes me feel like "Hey, I'm basically there already", whereas traveling a few miles makes me think "I'm not even in the same neighborhood yet", even though both take me the same amount of time.
For me an important aspect is the feeling of control. 15 minutes of walking is more pleasant than 10 minutes of waiting for a bus plus 5 minutes of travelling by bus.
Every now and then, I decide that I don't have the patience to wait 10 minutes for a bus that would take me to where I'm going in 10 minutes. So I walk, which takes me an hour.
I had the opposite effect recently - I thought that I'd save time by waiting for the bus, but it turns out that walking gets me to work from the train about 12 minutes sooner. Coming back, I don't have a ridiculous wait, so I still take the bus.
I could do even better if I got some wheels of some sort involved. Maybe it's time to take up skateboarding. Scooter? Bike seems like it would be too cumbersome, even if I can get one that folds up.
If the commute is mostly flat, consider Freeline skates. They take up much less space than any of the mentioned wheels; the technique is different from skateboarding but the learning curve isn't any worse.
I have discovered that I am so terrible at skateboarding and rollerblading that self-preservation requires me to stop trying.
Not in general, but I recognize your example. Walking is pleasant and active and allows me to think sustained thoughts, so it makes time 'pass' quickly. Whereas riding the subway is passive and stressful and makes me think many scattered thoughts in short time, so it makes time 'pass' slowly, making the ride seem longer. Also, if you walk somewhere in 15 minutes that probably takes about 15 minutes, but if you ride the subway for 15 minutes that probably takes more like half an hour from when you leave home to when you get to your goal.
More generally, I've noticed I tend to underestimate how much time passes when I'm directly controlling how fast I'm going (climbing stairs, driving on an open road, reading) and overestimate it when I'm not (using an elevator, driving in congested traffic, watching a video).
Short-distance public transport is an exception: once I'm on the bus, it feels like it takes 5 minutes to get from home to the university, but it actually takes 20.
I download loads of music, audiobooks, and books (though it's more bothersome to read while moving) and listen to them on my commute to work; it takes around 45 minutes to get to work by train and the same to get back home. Doing this, I totally don't mind the commute. I even look forward to it, since it's the only time I get to read or listen to anything.
I've just noticed that the Future of Humanity Institute stopped receiving direct funding from the Oxford Martin School in 2012, while "new donors continue to support its work." http://www.oxfordmartin.ox.ac.uk/institutes/future_humanity
Does that mean it's receiving no funding at all from Oxford University anymore? I'm surprised that there was no mention of that in November here: http://lesswrong.com/lw/faa/room_for_more_funding_at_the_future_of_humanity/. Is the FHI significantly worse off funding wise than it was in previous years?
Heya. I'm organizing a meetup, but to announce it here I seem to need some karma. Thanks.
Hey komponisto (and others interested in music) -- if you haven't already seen Vi Hart's latest offering, Twelve Tones, you might want to take a look. Even though it's 30 minutes long.
(I don't expect komponisto, or others at his level, will learn anything from it. But it's a lot of fun.)
I second the recommendation. I found it interesting that I enjoyed it so much despite learning almost nothing at all. Everything in the video was stuff I'd heard or thought about before, but seeing it presented in a unified, artistic, humorous fashion was very entertaining.
On the flip-side, I know almost nothing about music, was unable to understand a lot of the video, and still enjoyed it quite a bit.
I had no idea that the purpose of twelve tone was to teach people how to decontextualize musical sounds. Is listening to such music more valuable than meditation?
A Big +1 to whoever modified the code to put pink borders around comments that are new since the last time I logged in and looked at an article. Thanks!
In response to this post: http://www.overcomingbias.com/2013/02/which-biases-matter-most-lets-prioritise-the-worst.html
Robert Wiblin got the following data (treated by a dear friend of mine):
89 Confirmation bias
54 Bandwagon effect
50 Fundamental attribution error
44 Status quo bias
39 Availability heuristic
38 Neglect of probability
37 Bias blind spot
36 Planning fallacy
36 Ingroup bias
35 Hyperbolic discounting
29 Hindsight bias
29 Halo effect
28 Zero-risk bias
28 Illusion of control
28 Clustering illusion
26 Omission bias
25 Outcome bias
25 Neglect of prior base rates effect
25 Just-world phenomenon
25 Anchoring
24 System justification
24 Dunning-Kruger effect
23 Projection bias
23 Mere exposure effect
23 Loss aversion
22 Overconfidence effect
19 Optimism bias
19 Actor-observer bias
18 Self-serving bias
17 Texas sharpshooter fallacy
17 Recency effect
17 Outgroup homogeneity bias
17 Gambler's fallacy
17 Extreme aversion
16 Irrational escalation
15 Illusory correlation
15 Congruence bias
14 Self-fulfilling prophecy
13 Lake Wobegon effect
13 Selective perception
13 Impact bias
13 Choice-supportive bias
13 Attentional bias
12 Observer-expectancy effect
12 False consensus effect
12 Endowment effect
11 Rosy retrospection
11 Information bias
11 Conjunction fallacy
11 Anthropic bias
10 Focusing effect
10 Déformation professionnelle
08 Positive outcome bias
08 Ludic fallacy
08 Egocentric bias
07 Pseudocertainty effect
07 Primacy effect
07 Illusion of transparency
06 Trait ascription bias
06 Hostile media effect
06 Ambiguity effect
04 Unit bias
04 Post-purchase rationalization
04 Notational bias
04 Effect)
04 Contrast effect
03 Subadditivity effect
03 Von Restorff effect
02 Illusion of asymmetric insight
01 Reminiscence bump
One could try ranking biases by the size of the correlation between susceptibility-to-the-bias and damaging behavior, for example, using the correlations in http://lesswrong.com/lw/ahz/cashing_out_cognitive_biases_as_behavior/
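That ranking could be sketched as a simple sort by correlation magnitude; the bias names and correlation values below are made up purely for illustration:

```python
def rank_biases(correlations):
    """Rank biases by the absolute size of the correlation between
    susceptibility to the bias and damaging behavior."""
    return sorted(correlations.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical susceptibility/behavior correlations, for illustration only:
example = {"planning fallacy": 0.40, "hindsight bias": -0.15, "anchoring": 0.25}
```

Using the absolute value keeps protective (negative) correlations from sinking to the bottom of the list.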
This is totally worth a discussion post.
You may not have noticed when you posted this, but the formatting of your post didn't show up like I think you may have wanted, with the result that it's hard to read. (If you're wondering, it takes 2 carriage returns to get a line break out.)
If you intended the comment to look like it does, I apologize for bothering you.
Corrected, thank you
How do you correct your mistakes?
For example, I recently found out I did something wrong at a conference. In my bio, in areas of expertise I should have written what I can teach about, and in areas of interest what I want to be taught about. This seems to maximize value for me. How do I keep that mistake from happening in the future? I don't know when the next conference will happen. Do I put it in Anki and memorize it as a failure mode?
More generally, when you recognize a failure mode in yourself how do you constrain your future self so that it doesn't repeat this failure mode? How do you proceduralize and install the solution?
For a while I was in the habit of putting my little life lessons in the form of Anki cards and memorizing them. I would also memorize things like conflict resolution protocols and checklists for depressive thinking. Unfortunately it didn't really work, in the sense that my brain consistently failed to recall the appropriate knowledge in the appropriate context.
I tried using an iOS app called Lift but I found it difficult to use and not motivating.
I also tried using an iOS app called Alarmed to ping me throughout the day with little reminders like "Posture" and "Smile" and "Notice" to improve my posture, attitude, and level of mindfulness, respectively. This worked better but I eventually got tired of my phone buzzing so often with distracting, non-critical information and turned off the reminders.
My very first post on LessWrong was about proceduralizing rationality lessons; I think it's one of the biggest blank spots in the curriculum.
Yes, a blank spot and one that makes everything else near-useless. This needs to be figured out.
I'm not sure this applies to your particular situation, but a general solution for proceduralizing behaviors that was discussed at minicamp (and which I'd actually done before) is: Trigger yourself on a particular physical sensation, by visualizing it and thinking very hard about the thing you want yourself to remember. So an example would be if you want to make sure you do the things on your to-do list as soon as you get home, spend a few minutes trying to visualize with as much detail as you can what the front door of your house looks like, and recall what it feels like to be stepping through it, and think about "To do list time!" at the same time. (Or if you have access to your front door at the time you're trying to do this, actually stepping through your front door repeatedly while thinking about this might help too.)
And if there's some way to automate it, then of course that's ideal, though you said you don't know when the next conference will happen so that's more difficult.
Or another kind of automation: maybe you could save the bio you wrote in a Word document, and write a reminder in it to add the edits you want... or just do them now, and save the bio for future use. Then all you have to remember is that you wrote your bio already. Which is another problem, but conceivably a smaller one: I don't know about your hindbrain, but upon being told it had to write a bio, mine would probably be grasping at ways to avoid doing work, and having it done already is an easy out.
That automation makes sense, thank you. Trying to think of how to generalize it, and how to merge it with the first suggestion.
For a problem like this, where you need to remember something rare at an indefinite future time, the important thing is to remember at that time that you know something. At that point, if you've put it in a reasonable place, you can find it. It seems to me that the key problem is the jump from "have to write a bio" to "how to write a bio", that is, making sure you pause and think about what you know or have written down somewhere. Some people claim success with Anki here, but it doesn't make sense to me.
What most people do with bios is that they reuse them, or at least look at the old one whenever needing a new one. As Maia says, if you write an improved bio now, you can find it next time, when you look for the most recent version. But that doesn't necessarily help remember why it was an improvement. If you have a standard place for bios, you can store lots of variants (lengths, types of conferences, resume, etc), along with instructions on what distinguishes them. But I think what most people do is search their email for the last one they submitted. If you can't learn to look in a more organized place, you could send yourself an email with all the bios and the instructions, so that it comes up when you search email.
There is now fanfic about Eliezer in the Optimalverse. I'm not entirely sure what to make of it.
:)
Posting here rather than the 'What are you working on' thread.
Three weeks ago I got two magnets implanted in my fingers. For those who haven't heard of this before, what happens is that changing electromagnetic fields (read: anything AC) cause the magnets in your fingertips to vibrate. Over time, as the nerves in the area heal, your brain learns to interpret these vibrations as varying field strengths. Essentially, you gain a sixth sense: the ability to detect magnetic fields and, by extension, electricity. It's a $350 superpower.
The guy who put them in my finger told me it will take about six months before I get full sensitivity. So, what I'm doing at the moment is research into this and quantifying my sensitivity as it develops over time. The methodology I'm using is wrapping a loop of copper wire around my fingers and hooking it up to a headphone jack, which I will then plug into my computer and send randomized voltage levels through. By writing a program so I can do this blind, I should be able to get a fairly accurate picture of where my sensitivity cutoff level is.
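A minimal sketch of that blind protocol (my own illustration; `present` and `respond` are hypothetical hooks standing in for the actual coil driver and the subject's yes/no report):

```python
import random

def run_blind_trials(levels, trials_per_level, present, respond):
    """Present coil amplitudes in shuffled order, with zero-amplitude
    catch trials mixed in, and tally the detection rate per amplitude."""
    all_levels = levels + [0.0]  # 0.0 acts as a catch trial
    schedule = [lvl for lvl in all_levels for _ in range(trials_per_level)]
    random.shuffle(schedule)  # neither subject nor experimenter knows the order
    hits = {lvl: 0 for lvl in all_levels}
    for lvl in schedule:
        present(lvl)      # drive the coil at this amplitude
        if respond():     # subject reports whether they felt anything
            hits[lvl] += 1
    return {lvl: hits[lvl] / trials_per_level for lvl in all_levels}
```

The sensitivity cutoff is then roughly the lowest amplitude whose detection rate clearly beats the catch-trial (false alarm) rate.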
One thing I'm stuck on is how to calculate the field strength acting on my magnets. Getting the B field for a solenoid is trivial, but with a magnetic core I'm sure it throws everything out of whack. If anyone has any links to the physics of how to approach that, I'd be much obliged.
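Not a full answer, but a possible starting point, under my own assumptions: treat the wrapped wire as a flat circular coil in air, since the implanted magnet is the object being acted on rather than a flux-amplifying core, so to first order the applied field at its location is still the air-core value:

```latex
% Field at the center of a flat circular coil of N turns, radius R, current I:
B \approx \frac{\mu_0 N I}{2R}, \qquad \mu_0 = 4\pi \times 10^{-7}\ \mathrm{T\,m/A}
% with the current set by the output voltage and the coil's impedance:
I \approx \frac{V}{Z_{\mathrm{coil}}}
```

Off-center and off-axis the field falls off, so this is an upper-bound estimate; the magnet's own field adds to the total field but doesn't change the externally applied field acting on it.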
And if you're curious about what it's like so far to have magnets in your fingers, feel free to ask.
It's all fun and games until you need to get MRI and your fingers burst into flames.
Then it's just fun.
"superpower" is overstating it. Picking up paperclips is neat and being able to feel metal detectors as you walk through them or tell if things are ferrous is also fun but it's more of just a "power" than a superpower. It also has the downside of you needing to be careful around hard-drives and other strong magnets. On net I'm happy I got them but it's not amazing.
FYI, there's no need to be careful around hard drives (except for your own safety, since they're large chunks of metal your magnet will stick to.) The platters of a modern hard drive are too high-coercivity and too well-shielded for even a substantial neodymium magnet (bigger than you can fit in a fingertip) to affect them.
Credit cards, on the other hand.
Great thinking! Once you have fully developed and trained your superpower sensitivity you can read the cards by merely brushing your hands past someone's wallet!
::deliberately failing to get the joke::
I think the issue is that the magnets will destroy the data on the credit card stripe...
Also, aren't MRIs going to be a problem?
It's not the being careful about ruining them, it's the giant magnet IN them that can fuck you up.
I'd mostly agree with that. After I finish my current project though I have some more in mind about using them as input methods, so for me they're as much toys I can experiment with as anything else.
Do you notice the accumulation of ferrous (for lack of a better word) dust fragments?
The magnets I have for misc. projects at home quickly pick up a collection of small fragments, but maybe my world is just too closely tied to steel fabrication shops.
Not yet, though I haven't done any metalwork since I got the magnets.
This was one of the questions I asked the guy who put them in, since I'll be running into this eventually. He said that this was one of his concerns going into getting his own, as he does a lot of work in a shop, but that he has found that iron and steel filings haven't been a problem.
Is there any practical use for having magnets in your fingers? It seems like a bizarrely bad idea to me.
Besides telling if a device is live or not, not that I know of. The one major issue is that you can't have an MRI, although if I'm in a situation where I can't tell a doctor that I have them, magnets being ripped out of my fingers is the least of my worries. If need be, I could have a doctor make a small incision and take them out. And I do have to be careful not to hold on to powerful magnets for too long, or it will crush the skin in between the two magnets. Other than that though, there's no real downside. They're off to the side so it doesn't affect my grip, and once my skin finishes healing they'll be unnoticeable.
The upside for me is the qualia of sensing EMFs and having them as toys to play with. I treated the decision like getting a tattoo, where my personal rule is that I have to love a design for a continuous year before getting it. I haven't settled on a design long enough to get a tattoo, but I had planned on getting the magnets for about a year and a half, so I went ahead and did it.
I just noticed the Recent Karma Awards link in the sidebar. Has it been there for long?
At least a few months.
How do you upgrade people into rationalists? In particular, I want to upgrade some younger math-inclined people into rationalists (peers at university). My current strategy is:
incidentally name drop my local rationalist meetup group (e.g., "I am going to a rationalist meetup on Sunday")
link to lesswrong articles whenever relevant (rarely)
be awesome and claim that I am awesome because I am a rationalist (which neglects a bunch of other factors for why I am so awesome)
when asked, motivate rationality by indicating a whole bunch of cognitive biases, and how we don't naturally have principles of correct reasoning, we just do what intuitively seems right
This is quite passive (other than the name dropping and article linking) and mostly requires them to ask me about it first. I want something more proactive that is not straight-up linking to LessWrong, because the first thing they go to is The Simple Truth, and they immediately get turned off by it (The Simple Truth shouldn't be the first post in the first sequence that you are recommended to read on LessWrong). This has happened a number of times.
This sounds like you think of them as mooks you want to show the light of enlightenment to. The sort of clever mathy people you want probably don't like to think of themselves as mooks who need to be shown the light of enlightenment. (This also might be sort of how I feel about the whole rationalism as a thing thing that's going on around here.)
That said, actually being awesome for your target audience's values of awesome is always a good idea to make them more receptive to looking into whatever you are doing. If you can use your rationalism powers to achieve stuff mathy university people appreciate, like top test scores or academic publications while you're still an undergraduate, your soapbox might be a lot bigger all of a sudden.
Then again, it might be that rationalism powers don't actually help enough in achieving this, and you'll just give yourself a mental breakdown while going for them. The math-inclined folk, who would like publication writing superpowers, probably also see this as the expected result, so why should they buy into rationality without some evidence that it seems to be making people win more?
To be honest, unless they have exceptional mathematical ability or are already rationalists, I will consider them to be mooks. Of course, I won't make that apparent; it is rather hard to make friends that way. Acknowledging that you are smart is a very negative signal, so I try to be humble, which can be awkward in situations like when only two out of 13 people pass a math course that you are in, and you got an A- and the other guy got a C-.
And by the way, rationality, not rationalism.
Incidentally, what exactly makes a person already be a rationalist in this case?
Taboo "rationalist". That is, don't make it sound like this is a group or ideology anyone is joining (because, done right, it isn't.)
Discuss, as appropriate, cognitive biases and specific techniques. E.g. planning fallacy, "I notice I am confused", "what do you think you know and why do you think you know it?", confirmation bias, etc.
Tell friends about cool books you've read like HPMoR, Thinking Fast and Slow, Predictably Irrational, Getting Things Done, and so forth. If possible read these books in paper (not ebooks) where your friends can see what you're reading and ask you about them.
The problem with rationality is that unless you are at some level, you don't feel like you need to become more rational. And I think most people are not there, even the smart ones. Seems to me that smart people often realize they lack some specific knowledge, but they don't go meta and realize that they lack knowledge-gathering and -filtering skills. (And that's the smart people. The stupid ones only realize they lack money or food or something.) How do you sell something to a person who is not interested in buying?
Perhaps we could make a selection of LW articles that can be interesting even for people not interested in rationality. Less meta, less math. The ones that feel like "this website could help me make more money and become more popular". Then people become interested, and perhaps then they become interested more meta -- about a culture that creates this kind of articles.
(I guess that even for math-inclined people the less mathy articles would be better. Because they can find math in a thousand different places; why should they care specifically about LW?)
As a first approximation: The Science of Winning at Life and Living Luminously.
How about bringing up specific bits of rationality when you talk with them? If they talk about plans, ask them how much they know about how long that sort of project is likely to take. If they seem to be floundering with keeping track of what they're thinking, encourage them to write the bits and pieces down.
If any of this sort of thing seems to register, start talking about biases and/or further sources of information.
This is a hypothetical procedure-- thanks for mentioning that The Simple Truth isn't working well as an introduction.
I noticed a strategy that many people seem to use; for lack of a better name, I will call it "updating the applause lights". This is how it works:
You have something that you like and it is part of your identity. Let's say that you are a Green. You are proud that Greens are everything good, noble, and true; unlike those stupid evil Blues.
Gradually you discover that the sky is blue. First you deny it, but at some moment you can't resist the overwhelming evidence. But at that moment of history, there are many Green beliefs, and the belief that the sky is green is only one of them, although historically the central one. So you downplay it and say: "All Green beliefs are true, but some of them are meant metaphorically, not literally, such as the belief that the sky is green. This means that we are right, and the Blues are wrong; just as we always said."
Someone asks: "But didn't Greens say the sky is green? Because that seems false to me." And you say: "No, that's a strawman! You obviously don't understand Greens, you are full of prejudice. You should be ashamed of yourself." That someone gives an example of a Green who literally believed the sky is green. You say: "Okay, but this person is not a real Green. It's a very extreme person." Or if you can't deny it, you say: "Yes, even within the Green movement, some people may be confused and misunderstand our beliefs, and our beliefs have evolved over time, but trust me that being Green is not about believing that the sky is literally green." And in some sense, you are right. (And the Blues are wrong. As it has always been.)
To be specific, I have several examples in my mind; religion is just one of them; probably any political or philosophical opinion that had to be updated significantly and needs to deny its original version.
My strategy is to avoid conversations of this form entirely by default. Most Greens do not need to be shown that the belief system they claim to have is flawed, and neither do most Blues. Pay attention to what people do, not what they say. Are they good people? Are they important enough that bad epistemology on their part directly has large negative effects on the world? If the answers to these questions are "yes" and "no" respectively, then who cares what belief system they claim to have?
I'm really going to try and remind myself of this more often. Most of the time the answers are "yes" and "no" and points are rarely won for pointing out bad epistemology.
Yes, like moving-the-goalposts, this is an annoying and dishonest rhetorical move.
Suppose some Green says:
Yes, intellectual precursors to the current Green movement stated that the sky was literally Green. And they were less wrong, on the whole, than people who believed that the sky was blue. But the modern intellectual Green rejects that wave of Green-ish thought, and in part identifies the mistake as that wave of Greens being blue-ish in a way. In short, the Green movement of a previous generation made a mistake that the current wave of Greens rejects. Current Greens think we are less wrong than the previous wave of Greens.
Problematic, or reasonable non-mindkiller statement (attacking one's potential allies edition)?
How much of that intuition is driven by the belief that Bluism is correct? If we change the labels to Purple (some Blue) and Orange (no Blue), does the intuition change?
If, after realizing an old mistake, you find a way to say "but I was at least sort of right, under my new set of beliefs," then you are selecting your beliefs badly. Don't identify as a person who was right, or as one who is right; identify as a person who will be right. Discovering a mistake has to be a victory, not a setback. Until you get to this point, there is no point in trying to engage in normal rational debate; instead, engage them on their own grounds until they reach that basic level of rationality.
For people having an otherwise rational debate, they need to at this point drop the Green and Blue labels (any rationalist should be happy to do so, since they're just a shorthand for the full belief system) and start specifying their actual beliefs. The fact that one identifies as a Green or a Blue is a red flag of glaring irrationality, confirmed if they refuse to drop the label to talk about individual beliefs, in which case do the above. Sticking with the labels is a way to make your beliefs feel stronger, via something like a halo effect where every good thing about Green or Greens gets attributed to every one of your beliefs.
There's a further complicating factor: often when this happens, both modern Blues and Greens won't exactly correspond to historical Blues and Greens even though both are using the same terms. Worse, when the entire region of acceptable social policy has changed, sometimes an extreme Green or Blue today might be what was seen as someone of the other type decades ago.
Yes, the first wave of a movement may have many divergent descendants, which end up on different sides of a current political dispute. And the most direct descendant might be on the opposite side of the political divide from what we would predict the first-wave proponents would adopt. But for that to happen, there needs to be significant passage of time.
By contrast, if the third wave of a movement cannot point to an immediately prior second wave that actually believed the position criticized (and which the third wave has already rejected), then Villiam_Bur's moving-the-goalposts criticism has serious bite, to the point that an outsider probably should not accept the third wave as genuinely interested in rational discussion or true beliefs.
And here we were having a very nice discussion without pointing out any potentially controversial/mindkilling examples. Using the phrasing of second and third wave doesn't make it less subtle or less potentially mindkilling.
In the specific case which you are not so obliquely referencing, there's a pretty strong argument that much of third-wave feminism has strands from the first and second waves, while also agreeing on the most basic premises.
It is also worth noting in this context, that movements (wherever they are politically) aren't in general after rational discussion or true beliefs but at accomplishing specific goal sets. You will in any diverse movement find some strains that are more or less interested in rational discussion, but criticizing a movement for its failure to embody rationality is not by itself a very useful criticism.
Um, I had not linked the parent of your comment to any specific movement until you pointed out the possible existence of such a link ...
I'm pretty sure that's what TimS was talking about given his use of the phrases "second wave" and "third wave". It is especially clear because if one was going to be talking about a generic example and using the term wave, one would in the same context have likely discussed the first wave v. the second wave. The off-by-one only makes sense in that specific historical context.
Oppositely, the second and third waves immediately screamed 'feminism' to me, but I couldn't assemble the rest of the analogy. The third wave has plenty of legitimate differences and similarities with both the first and second waves. I'm still not sure what TimS was getting at.
It is the big obvious current example where the ideological battle is between "second wave" and "third wave" and the first wave is barely mentioned. I encounter it in relation to the UK social justice Twittersphere, which is tangential to the more Kankri Vantas stretches of Tumblr. (Or, more accurately, the Porrim Maryam stretches.)
Edit: Can anyone think of another field described as having numbered waves where the battle was between second and third?
Ideologies and theo-philosophical schools are rarely if ever defined precisely enough to exclude true facts about the world or justifications for genuinely good ideas. They're more collections of rules of thumb, methods, technical terms and logics. If mathematically formulated scientific theories are under-determined then ideologies are so, but ten-fold. The problem of inferential distance when it comes to worldviews isn't really about sheer decibels of information that need to be communicated. It's that the interlocutors are playing different games and speaking different languages. And I suspect most deconversions are more like picking up a new language and forgetting your old one, than they are the product of repeated updates based on the predictive failures of the old ideology/religion. It's a pseudo-rational process which is why it doesn't reliably occur in just one direction.
Back to your point: since people have egos, memetic complexes usually have self-perpetuating features, and applause lights don't constrain future experience, it makes sense that if anything is held constant it will be Greens being really sure they are right. That's non-optimal and definitely irksome to people like all of us. It's inefficient because we're spending resources on constructing post-hoc justifications for how the real Green answer is the true one, and the corrections to our model may be nothing more than curve-fitting. That is, whatever beliefs and assumptions led the Greens to be wrong in the first place may still be in place. Plus, it is kind of creepy in a "we've always been at war with Eurasia" kind of way.
But on the other hand it is sort of okay, right? At least they're updating! You can think of academic departments of philosophy, religion, law and humanities as just the cost of doing business to mollify our egos as we change our minds. And changing people's minds this way is almost certainly much easier than making them convert to the doctrine of the hated enemy and engage in extended self-flagellation. It's a line of retreat.
Making the modern Green cop to the literal beliefs of her intellectual ancestor seems like an exercise in scoring points, not genuine persuasion. Who needs credit? The curve fitting is still an issue but you might be better off trying to make room for better beliefs and assumptions within the context of Green thought. Especially since it isn't obvious the opposing movement did anything other than get lucky.
A few out-there scholars think Descartes was an atheist. He almost certainly wasn't. But there is a reason they suspect him even though much of the Meditations is an extended argument for the existence of God. The thing is that the practical upshot of his non-empirical argument for God is that we should completely abandon the Christian-Aristotelian-Scholastic tradition and use our senses to discover what the world is really like. "The sky is certainly Green and this proves that the ideal method for discovering the color of things is visual examination and use of a spectrometer."
Ideological multi-lingualism is a crucial skill; I'd like to hear ideas for cultivating it.
This process can be a stage in the process of leaving the Greens-- I've heard stories of deconversion which sounded a lot like that.
This sounds very much like religion - I'd be interested in hearing about a solid non-religious example.
Let's avoid object level examples until we resolve how to distinguish this dishonest rhetorical move from honest updates on the low validity of prior arguments now abandoned. Otherwise, we get bogged down in mindkiller without any general insight into how to be more rational.
But aren't we all agreed the specific examples are super-helpful for understanding a general phenomenon?
Karl Popper came up with the Falsifiability Principle as a direct response to watching Marxists, Freudians, and others do exactly this.
I think there's a related rhetorical trick that's something like redefining the applause lights, or brand extension.
Greens believe the sky is green. I want them to believe the entire world is green. I will use their commitment to sky greenness and just persuade them it means something slightly different.
Clouds are kind of like the sky so should really be considered green if you're being fair about things. And rain is in the sky, who are you to say it's not green? Rain falls on the ground, which is therefore also part of the sky.
After a while, you can persuade people that, since the sky is green, obviously rocks are green.
This explanation isn't great but more practical examples are somewhat mindkilling.
Some selection effects: I wonder if the perceived solidarity of most identity-heavy groups is due to vague language that easily facilitates mind projection within the group. Surviving communities will have either reduced their exposure to fracturing forces, or drifted towards more underspecified beliefs as a result of such exposure. I think religious strains fall very nicely into these two groups, but I'm not so sure about political groups.
Being specific is a good rationalist tool and a bad strategy for social relations. The more specific one is, the fewer people agree with them. The best social strategy is to have a few fuzzy applause lights and gather agreement about them.
I'll try to find a less sensitive political example. Some people near me are fans of "direct democracy"; they propose it as a cure for all the political problems. I try being more specific and ask whether they imagine that people in the whole country will vote together, or that each region will vote separately on their local policies... but they refuse to discuss this, because they see that it would split their nicely agreeing group into disagreeing subgroups. But for me this distinction seems very important in predicting the consequences of such a system.
Miles Brundage recently pointed me to these quotes from Ed Fredkin, recorded in McCorduck (1979).
On speed of thought:
On whether advanced AIs will share our goals:
On basement AI:
On the risk of bad guys getting AI first:
On trying to raise awareness of AI risks:
Later in that chapter, McCorduck quotes Marvin Minsky as saying:
...which sounds eerily like a pitch for MIRI.
Unfortunately, Minsky did not then rush to create the MIT AI Safety Lab.
There's a scam I've heard of;
Mallet, a notorious swindler, picks 10 stocks and generates all 1024 possible sheets of "stock will go up" vs. "stock will go down" predictions (2^10 = 1024). He then gives one sheet to each of 1024 different investors. One of the investors receives a perfect, 10-out-of-10 prediction sheet and is (Mallet hopes) convinced Mallet is a stock-picking genius.
Since it's related to the Texas sharpshooter fallacy, I'm tempted to call this the Texas stock-picking scam, but I was wondering if anyone knew a "proper" name for it, and/or any analysis of the scam.
Derren Brown demonstrated this scam on TV and called it The System. That might help you track down a name.
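The arithmetic behind the scam is easy to check with a short simulation (a sketch of my own; the function name and setup are illustrative, not from any published analysis of the scam). Whatever the 10 stocks actually do, exactly one of the 1024 sheets matches the outcome perfectly:

```python
import itertools
import random

def run_scam(n_stocks=10, seed=None):
    """Simulate Mallet's trick: one prediction sheet per possible outcome."""
    rng = random.Random(seed)
    # Every possible up/down sheet for n_stocks stocks: 2**10 = 1024 of them.
    sheets = list(itertools.product([True, False], repeat=n_stocks))
    # Whatever actually happens to the stocks...
    outcome = tuple(rng.random() < 0.5 for _ in range(n_stocks))
    # ...exactly one investor's sheet matches it perfectly.
    perfect = [sheet for sheet in sheets if sheet == outcome]
    return len(sheets), len(perfect)

print(run_scam(seed=42))  # (1024, 1): 1024 sheets, exactly 1 perfect record
```

The "perfect" record is guaranteed by enumeration, not skill, which is the Texas-sharpshooter connection: the target is drawn around wherever the shot landed.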
Related to "magic phrases", what expressions or turns of phrase work for you, but don't work well for a typical real-world audience?
I tend to use "it's not magic" as shorthand for "it's not some inscrutable black-boxed phenomenon that defies analysis and reasoning". Moreover, I seem to have internalised this as a reaction whenever I hear someone describing something as if it were such a phenomenon. Using the phrase generally doesn't go down well, though.
What if it really was like that?
The true meaning of moral panics
It seems fairly believable that an oppressed underclass that is intentionally deprived of education and opportunity will, on average, be cruder, less intellectually inclined, have less wealth and status, and more prone to failing at life in various ways due to the lack of a support structure. This is true of any group, whatever their intrinsic nature, simply due to the act of discrimination.
I remember once reading an essay about Jews in (IIRC) Rudyard Kipling's works, where they're portrayed in pretty appalling ways, while all sorts of other groups are portrayed positively. The author came to the conclusion that acting in a cowardly and profiteering fashion was a survival tactic created by anti-semitic laws, and that Kipling was probably just conveying the reality of the time. (I'm not enough of an expert to judge the truth of this, but it seemed reasonable.)
Sure. Also see the recent follow-ups to the Stanford marshmallow experiment. It sure looks like some of what was once considered to be innate lack of self-restraint may rather be acquired by living in an environment where others are unreliable, promises are broken, etc.
Possibly, but the followup only tells us that, at least in the short term, kids will be less likely to delay gratification from specific individuals who have proven to be untrustworthy (and the protocol of that experiment kind of went for overkill on the "demonstrating untrustworthiness" angle.)
It might be that children become less able to delay gratification if raised in environments where they cannot trust promises from their guidance figures, but the same effect could very easily be caused by rational discounting of the value of promises from individuals who have proven unlikely to deliver on them.
Your argument sounds perfectly reasonable to me. Yet I would advise against reversing stupidity. Just because there is a systematic influence that makes it worse for the oppressed people, it does not automatically mean that without that influence all the differences would disappear. Although it is worth trying experimentally.
{EDITED to clarify, as kinda suggested by wedrifid, some highly relevant context.}
This comment by JoshuaZ was, when I saw it, voted down to -3, despite the fact that it
A number of JoshuaZ's other recent comments there have received similar treatment. It seems a reasonable conclusion (though maybe there are other explanations?) that multiple LW accounts have, within a short period of time, been downvoting perfectly decent comments by JoshuaZ. As per other discussions in that thread [EDITED to add: see next paragraph for more specifics], this seems to have been provoked by his making some "pro-feminist" remarks in the discussions of that topic brought up by recent events in HPMOR.
{EDITED to add...} Highly relevant context: Elsewhere in the thread JoshuaZ reports that, apparently in response to his comments in that discussion, he has had a large number of comments on other topics downvoted in rapid succession. This, to my mind, greatly raises the probability that what's going on is "playing the man, not the ball": that the cause of the downvotes isn't simply that many LW participants disagree strongly with me about the merits of the individual comments.
It seems to me that this is a kind of abuse that needs to be stopped. To be clear, I don't mean abuse of JoshuaZ, who I bet is perfectly capable of handling it. I mean abuse of LW. Specifically, it appears to be a concerted attempt to shape discussions here not by rational argument, nor even by appeal to emotion, but by intimidation.
(I suppose I should mention an amusing contrary hypothesis to which I attach very low probability. Perhaps the downvotes are from friends of JoshuaZ, who hope to attract sympathy upvotes and will change their own downvotes to upvotes in a week or two when no one's watching any more.)
I would address this to the LW admins by PM if I knew who they are, but the only person I know to be an LW admin is Eliezer and I believe he's very busy at the moment.
{EDITED to add ...} One other remark, just in case of suspicions. I am not JoshuaZ, nor do I have any idea who he is outside LW, nor (so far as I know) have I had any interaction with him outside LW, nor have I had enough in-LW interaction with him to regard him as an ally or a friend or anything of the kind. There is no personal element to any of what I have said.
{Totally irrelevant remark: The squiggly brackets are because [this sort] which I'd normally use for noting what I've edited interacts badly with the Markdown hyperlink syntax.}
The escape character, which solves this and various other potential problems, is "\".
I downvoted that comment and encourage others to feel free to downvote any comment of a type they would prefer not to see on Less Wrong. Most (but not quite all) cases of people using social politics to force their preferences onto others are things I wish to see left off. This includes but is not limited to sex politics. Being 'on topic' is no virtue when the topic itself is toxic.
At your prompting I have downvoted the parent of the comment in question. To whatever extent a comment is justified by being an answer to a question the asking of said question must assume responsibility. For the same reason I have no objection if others choose to downvote my own contributions to that thread for what could be considered "Feeding". Kawoomba has a good point.
Note: This is a different issue to the systematic downvoting of a user on all subjects (sometimes referred to as 'karma assassination'). That is universally considered an abuse of the system. However the example you give only demonstrates your subjective disagreement with the evaluations of some others regarding the desirability of a particular comment.
You have defined your campaign by references to "this sort of abuse" where 'this' refers to comments like the example comment being downvoted. As such I cannot support it. People are allowed to not like stuff and vote it down. If you had instead made your campaign to be against karma-assassination then I would support it. I myself have lost several thousand karma in bursts like that. I suggest revision.
It should be mentioned explicitly here -- as it has been in the discussion in the other thread, and as I know you have seen since you replied to it -- that JoshuaZ reports precisely the sort of "karma assassination" behaviour you describe, in connection with the same topic. It's because of that context that I think it likely that the highly negative score of his comment is at least partly the result of punitive downvoting aimed at him rather than at his comment specifically.
I shall amend my comment upthread to mention this context, which I agree is relevant.
Yes, it may be reasonable to decide that a particular topic is toxic and try to discourage all posting on that topic. (Though I think occasional comments arguing for dropping the subject would be a far better way of doing that than flinging downvotes around.) But that is plainly not what was happening, because only comments on one side of the issue were sitting at gratuitously low values relative to, for want of a better term, their topic-agnostic merit. (Your own description of your own actions is some evidence for this: you had downvoted that comment but not its parent.)
I am not sure why you think the word "campaign" is appropriate, though I can see why you might find it rhetorically convenient. I see what looks to me like a systematic attempt to stop LW participants expressing certain sorts of opinion, through intimidation rather than argument, I think that's bad, and I have said so a couple of times and expressed willingness to help technically if others agree with me. That's a "campaign"?
If the information needed to take action against this sort of abuse is difficult to do anything with because it requires grovelling through whatever database underlies LW, I hereby volunteer (if told it would be useful by someone with power to use it) to make whatever software enhancements to the LW code are required to make it easy.
(I have no experience with the LW codebase but am an experienced software person. Getting-started pointers would be welcome if anyone takes me up on that offer.)
On why playing hard to get is a bad idea, and why a lot of women do it.
This was something I was meaning to post about in some of the gender discussions, but I wasn't sure that a significant proportion of men were still put off by women who were direct about wanting sex with them-- but apparently, it's still pretty common.
There is a difference between "I want sex with you specifically (because you attract me)", and "I want sex with anyone (and you are the nearest one)". For me, the former would feel nice, but the latter would feel... creepy.
This may be another case of not being specific: when women report that "men were put off by them being direct about wanting sex with them", I don't know which one of these situations it was. Also, it depends on context; there is a difference between getting a sex offer from a friendly person in a romantic situation, and getting a sex offer from an unknown heavily drunk woman at a disco (happened to me, and yes I was put off). These details may change the situation, and are usually not reported, because of course the goal of the report is to make the other people seem horrible.
Another possibility is that if a woman makes a courteous and straightforward statement of interest, there's no guarantee that the man is likewise interested, but she might interpret this as being wrong for being straightforward rather than that there was no way he was going to reciprocate.
From the comments, and I admit there was less than I thought there was going to be:
**
**
**
This one might be evidence-- it depends on what she meant by "everything paid off when I found my boyfriend". I'm inclined to think that her honesty didn't work a few times.
**
Being the one who approaches has many advantages, but it comes with a cost -- one must learn to deal with rejection. There is a difference between knowing, generally, "my attractiveness is probably average", and being specifically rejected by one specific sympathetic person who seemed to be interested just a while ago, but probably just wanted to talk.
Interpreting rejection as "these men are afraid of an honest / courageous woman" can help protect the ego. It could also be why the men said it -- to avoid an offense, a confrontation. (Women also say various things that don't make much sense, when they mean: "I don't consider you attractive.") I mean, if an extremely attractive woman approached those men, a lot of them would probably say yes and consider themselves lucky. (This is an experimentally testable prediction!)
Oh neato. The class notes for a recent class by Minsky link to Intelligence Explosion: Evidence and Import under "Suggested excellent reading."
Would be good to have a single central place for all CFAR workshop reviews, good and bad. Here's two:
How about the Wiki?
Mine, on LW.
Any LWers in Seattle fancy a coffee?
I'm at UW until the end of the month, so would prefer cafes within walking distance of the university.
I've been thinking about tacit knowledge recently.
A very concrete example of tacit knowledge that I rub up against on a regular basis is a basic understanding of file types. In the past I have needed to explain to educated and ostensibly computer-literate professionals under the age of 40 that a jpeg is an image, and a PDF is a document, and they're different kinds of entities that aren't freely interchangeable. It's difficult for me to imagine how someone could not know this. I don't recall ever having to learn it. It seems intuitively obvious. (Uh-oh!)
So I wonder if there aren't some massive gains to be had from understanding tacit knowledge more than I do. Some applications:
What do you think or know about tacit knowledge, LessWrong? Tell me. It might not be obvious.
That isn't the standard use of "tacit knowledge." At least it doesn't match the definition. Tacit knowledge is supposed to be about things that are hard to communicate. The standard examples are physical activities.
Maybe knowing when to pay attention to file extensions is tacit knowledge, but the list of what they mean is easy to write down, even if it is a very long list. Knowing that it is valuable to know about them is probably the key that these people were missing, or perhaps they failed to accurately assess the detail and correctness of their beliefs about file types.
Unfortunately, everything I know about tacit knowledge is tacit.
How do you know that?
Not everything I know about what I know about tacit knowledge is tacit!
This conversation just metacitasized.
It's okay, I'll show myself out.
Sir, you are wrong on the internet. A JPEG is a bitmap (formally, pixmap) image. A PDF is a vector image.
The PDF has additional structure which can support such functionality as copying text, hyperlinks, etc, but the primary function of a PDF is to represent a specific image (particularly, the same image whether displayed on screen or on paper).
Certainly a PDF is more "document"-ish than a JPEG, but there are also "document" qualities a PDF is notably lacking, such as being able to edit it and have the text reflow appropriately (which comes from having a structure of higher-level objects like "this text is in two columns with margins like so" and "this is a figure with caption" and an algorithm to do the layout). To say that there is a sharp line and that PDF belongs on the "document" side is, in my opinion, a poor use of words.
(Yes, this isn't the question you asked.)
I'm not sure I want to get into an ontological debate on whether a PDF is a document or not, but I believe the fact that it's got the word "document" in its name and is primarily used for representing de facto documents makes my original statement accurate to several orders of approximation.
Uh-oh indeed. Like most statements involving the word "is", this is probably one of those questions that should be dissolved. Thus I will ask:
What do you mean when you say document? I.e. what are the characteristics that a document has which a JPEG file does not, and which a PDF does have? Why is it wrong for something that is an image to also be a document?
This seems to be actively running away from the point. Also, see the other response re: my lack of interest in this particular ontological discussion.
In my example, there's also a concrete reason to distinguish between images and documents. The image is going to be embedded on a webpage, where people will simply look at it. Meanwhile, the document is going to be printed off as an actual physical document. Their respective formats are generally optimised for these different purposes.
I'll try: You don't need OCR to get the words out of the document. An image is just dots and/or geometric shapes. (Which would make a copy-protected PDF not a document.)
I've been talking to some friends who have some rather odd spiritual (in the sense of disorganised religion) beliefs. Odd because it's a combination of modern philosophy LW would be familiar with (acausal communication between worlds of Tegmark's level IV multiverse), ancient religion, and general weirdness. I have trouble putting my finger on exactly what is wrong with this reasoning, although I'm fairly sure there is a flaw, in the same way I'm quite sure I'm not a Boltzmann brain but find it hard to articulate why. So, if anyone is interested, here is the reasoning:
1) Dualism is wrong, due to major philosophical problems as well as Occam's razor
2) I think therefore I am, so I know that the 'mental world' exists.
3) Therefore Idealism is true, the mental world exists but the physical is just an illusion
4) In response to 'so why can't you fly?' the answer is a lack of mental discipline: after all, it's hard to control your thoughts
5) If two different people existed in the same universe, there is no reason why they would perceive the same illusions.
6) Therefore, each universe consists of one conscious observer and their illusory reality
7) But Tegmark's level IV multiverse is true, so we can acausally communicate between worlds; in fact, all conversations are actually acausal communication between worlds.
8) This also implies there is reincarnation, of a sort - there is no body to die, so you just construct a new illusory reality.
From here on it gets into more standard 'spiritual' realms, although I did find it amusing when my friend told me that there are at least aleph-2 gods.
I should state that these beliefs are largely pointless, in that it's not obvious that they actually influence any decisions the believers make, and that they do seem to make people happy without any major downsides.
I should also make it clear that I don't believe this, because I wouldn't want to lose status as a rationalist by believing in something unpopular!
TL;DR
To a large extent, this boils down to: how do I distinguish between the hypothesis that the universe is lawful, and the hypothesis that the universe is determined by my beliefs and I merely believe it to be lawful?
How do you know what you claim to know? (Okay, not you, but whoever said this.) Do you have any reproducible experimental proof of whatever violation of physical laws using mental discipline?
Isn't it suspicious that undisciplined thoughts are enough to create an illusion of physical reality perfectly obeying the physical laws, but are unable to violate the laws? That sounds to me like speaking about an archer who always perfectly hits the middle of the target, but is unable to shoot the arrow outside of the target, supposedly because he is too clumsy. I mean, isn't hitting the center of the target more difficult than missing the target? Wouldn't creating a reality perfectly obeying the laws of physics all the time require more mental discipline than having things happen randomly?
I am sure there can be a dozen ad-hoc explanations; I just wanted to show how it doesn't make sense.
So, if you get killed, your mental discipline will improve enough to let you create new reality you can't create now? Interesting...
It looks like you and your friends have rediscovered Leibniz's monadology. Leibniz believed that only minds were real, that matter as distinct from minds is an illusion, and that minds do not interact causally, but they seem to share the same "reality" by virtue of a "pre-established harmony" between their non-causally related experiences. This last part can perhaps be re-expressed in modern terms as acausal communication.
I guess the fact that I lack mental discipline is also the reason that I lack mental discipline, and the reason that lacking mental discipline causes me to lack mental discipline, too.
Sorry, to clarify, are you saying that the reasoning is circular and thus faulty?
The thing about mental discipline is that it is circular, in that there are self-reinforcing cycles: if I have mental discipline, I can discipline myself to practice discipline more.
areyoufuckingkiddingme.jpg
Ok, I know this topic is unimportant compared to many other things, such as FAI and HPMOR, but there's no need to be rude.
Kant you have a sense of Hume about it?
Where do your friends get this stuff? Did they read the Sequences on LSD or something? Do they do anything differently in everyday life on account of it (besides talking about it)?
How did you get the belief that it is lawful?
I am surprised no one else has brought up the LW party line: consequentialism.
What is the alternative?
What is the consequence of your decision?
Probably the alternative is that someone else donates sperm. Either way, they raise a child that is not the husband's. If creating such a life is terrible (which I don't believe), is it worse that it is your child than someone else's? Consequentialism rejects the idea that you are complicit in one circumstance and not the other.
There are other options, like trying to convince them not to have children, or to get a donation from the husband's relatives, but they are unlikely to work.
If the choice is between your sperm or another's, then, as Qiaochu says, the main difference to the child is genes of the donor. Also, your decision might affect your relationship with the couple.
Assuming you don't have any particular reason to expect that this couple will be abusive, it's more ethical the better your genes are. If you have high IQ or other desirable heritable traits, great. (It seems plausible to anticipate that high IQ will become even more closely correlated with success in the future than it is now.) If you have mutations that might cause horrible genetic disorders, less great.
The child is wanted, so if they don't actually neglect it it'll grow up fine.
Note that if you donate sperm without going through the appropriate regulatory hoops as a sperm donor (which vary per country), you will be liable for child support.
What can possibly be unethical about it? You are the only one who is vulnerable, since you might be legally on the hook for child support.
It creates a child who will not be raised by their biological father.
Unlikely in this context, since they are much wealthier than I. I doubt they would want to share custody with me in exchange for my pittance of a salary.
They might die, and the child would still have rights against you.
What's the specific problem this would cause?
http://en.wikipedia.org/wiki/Cinderella_effect
Questions about the validity of the Cinderella effect aside, the OP knows the couple and can probably make a more informed judgement about this.
Of course, you can't tell this perfectly. But if the OP is anything more than casual acquaintances with the couple, I would say specific evidence probably overpowers the general case.
Has this been demonstrated in adoptive parents, though? Having only adopted children seems as though it might bias things in a different direction.
Given that the child won't exist if you say no, it's hard to assert that they'd be worse off if you decline. Just make sure you don't get too clingy.
Seeking Educational Advice...
I imagine some LW user have these questions, or can answer them. Sorry if this isn’t the right place (but point me to the right place please!).
I’m thinking of returning to university to study evolution/biology, the mind, IT, science-type stuff.
Are there any legitimate ways (I mean actually achievable: you have first-hand experience, can point to concrete resources) to attend an adequate university for no or low cost?
How can I measure my aptitude for various fields (for cheap/free)? (I did an undergrad degree in education which was so easy I don't know if I could make the grades in a demanding field).
My first undergrad degree (education) was non-science, so should I go back for another undergrad degree, or try to fill gaps on my own and do a post-grad in something with science?
I've started investigating free online education (lesswrong, edx, coursera, etc) but I have concerns: don't I need credentials? Don't I need classmates/colleagues/collaborators to help teach me, motivate me, and supply me with equipment? How do I know if I really understand the material? How do I address these concerns?
p.s. – I’m all for “munchkin” style answers/solutions to these problems, so long as they are actually feasible
Do you care about the piece of paper? If not, you can likely attend courses in the literal sense - just show up for the lectures - without paying anything at all. Old textbooks are cheap, if you want problem sets, and you almost certainly do - I strongly opine that you cannot learn anything even remotely math-oriented without doing problems. But no rule says you have to do the same problems the others in the class are doing.
Clearly, this is not the method for you if you need a lot of feedback and guidance, nor if you want the credential in addition to the knowledge.
Mostly depends on what languages you speak fluently, what countries you can obtain visas for, your willingness to relocate to said countries and your plans on what you'll do with the "science-type stuff". If you want advice, edit your post accordingly. Most of the answers will come out to public colleges in your home state, or Europe. Or plumbing.
Are you wanting a degree, or are you wanting education?
If you're just wanting the information, the university in question may permit low-cost auditing.
Get a textbook of the appropriate level on the subject that has exercises and the correct answers to them, read the book, then do the exercises and see what you come up with? If it's math or physics, you should be able to tell by yourself whether your solutions resemble the example solutions in the text, seem to make sense and come up with the correct answers.
I don't know how well this will work with evolutionary biology or cognitive science. If you want to include philosophy in the "mind" part, it's my understanding that you need to be a trained academic philosopher to reliably tell fancy garbage and acceptable academic philosophy apart, so the approach probably won't work there.
After reading a couple of introductory textbooks, try to find grad students in the field in online chats and ask them about the stuff to gauge how well you've understood it. You can probably find plenty of math and computer science literate people on Lesswrong to bounce stuff off of.
Also, do you actually know you need to attend lectures to learn things, or are you just planning to do this because attending lectures is what people who get educated are supposed to do in the standard narrative? I'm pretty much incapable of following spoken academic lectures myself, and basically learn most everything by reading. If I wanted to get an education, I'd just go for a big stack of textbooks and a good note-taking system and ignore live lectures entirely at least on the undergrad level.
Of course, I just go to any university in my city and they don't cost anything.
Whether or not you need credentials depends on your goals. Yudkowsky started SI/MIRI without any credentials.
When it comes to programming questions that I face as part of my university studies I go to StackOverflow.
Depends on your ability to self motivate.
Depends on whether you want to do something that needs equipment.
If you can remember the Anki cards about a topic it's likely that you understand the topic. But more importantly, what's your goal? What do you want to be able to do with your "understanding of the material"?
I went to a college in the United States where admissions are need-blind (they don't consider how much financial aid you'll need in their decision to admit you) and that offers full-need aid (once admitted, they will meet any financial need you demonstrate). I was an international student, so the aid was not in the form of a loan, but a straight-up grant. I basically ended up paying nothing to go to a college that normally charges $60k+ a year. So if you're not American, this is a possibility. If you are American, I understand that most (all?) of the financial aid is in the form of federal loans, which you may or may not want to incur.
Wikipedia says there are only seven US universities that offer full need-blind aid to international students. There are many more that are need-blind and full-need for US students, although this will probably involve loans. That Wikipedia page also lists four non-US universities that offer need-blind and full-need aid to all applicants. If you are American, applying to one of those may be a better bet, because you might get a grant instead of a loan. I've heard good things about the National University of Singapore.
Is there a way of making precise, and proving, something like this?
For any noisy dynamic system describable with differential equations, observed through a noisy digitised channel, there exists a program which produces an output stream indistinguishable from the system.
It would be good to add some notion of input too.
There are several issues with making this precise and avoiding certain problems, but I suspect all of this is already solved so it's probably not worth me going into detail here. In the unlikely event this isn't already a solved problem, I could have a go at precisely stating and proving this.
I don't completely understand what you mean (in particular, I would really like you to be more specific about what you mean by "noisy" and "indistinguishable"), but this looks like it shouldn't be true on cardinality grounds. There should be uncountably many possible distinguishable noisy behaviors of a dynamical system.
By "indistinguishable" I mean some sort of bound on the advantage of any algorithm trying to tell the two apart. I think if I try to answer on "noisy" without knowing more about what you need specified I won't answer your question - I'm thinking of some sort of continuous equivalent to the role that noise plays in a Kalman filter.
The cardinality thing is a big problem - if the "system" is a single uncomputable real number that doesn't change, from which we take multiple noisy readings, then for any program that tries to emulate it, there is a distinguishing program whose advantage approaches 1 as the number of readings go up.
It still feels like there must be something like this that we can prove!
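As a concrete (if toy) illustration of the claim, here is a minimal sketch, with all details assumed rather than taken from the thread: a noisy system (an Ornstein-Uhlenbeck-style SDE discretised by Euler-Maruyama) observed through a noisy, digitised channel. The point is that the simulator is itself a program whose output stream has approximately the same distribution as the channel's output; making "indistinguishable" precise would then mean bounding a distinguisher's advantage as the step size shrinks. Function and parameter names here are made up for illustration.

```python
import random

def simulate_observations(n_steps, dt=0.01, theta=1.0, sigma=0.5,
                          obs_noise=0.1, levels=256, seed=0):
    """Euler-Maruyama simulation of the (hypothetical) SDE
    dx = -theta * x dt + sigma dW, observed through a noisy,
    quantized ("digitised") channel with `levels` output symbols."""
    rng = random.Random(seed)
    x = 0.0
    out = []
    for _ in range(n_steps):
        # Discrete-time approximation of the continuous dynamics.
        x += -theta * x * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        # Noisy reading, then digitisation: clamp to [-2, 2] and
        # map onto one of `levels` integer symbols.
        y = x + rng.gauss(0.0, obs_noise)
        y = max(-2.0, min(2.0, y))
        out.append(int((y + 2.0) / 4.0 * (levels - 1)))
    return out
```

This doesn't prove anything, of course: it only shows that for this class of systems the emulating program is easy to exhibit, and the hard part is the formal bound on a distinguisher's advantage.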
Article discussing how the cost of copper has gone up over time as we've used more and more of the easily accessible, high percentage ores. This is another example of a resource which may contribute to Great Filter considerations (along with fossil fuels). As pointed out in the article, unlike oil, copper doesn't have many good replacements for a lot of what it is used for.
That said, I suspect that this is not a major aspect of the Filter. If the cost goes up, the main impact would be on consumer goods which would become more expensive. That's unpleasant but not a Filter event. It also isn't relevant from the standpoint of resources necessary to bootstrap us back up to the current tech level in event of a major disaster since there will be all sorts of nearly pure copper that could be scavenged from the remains of civilization.
This may however be a strong argument for either finding new copper replacements (possibly novel alloys), or for the development of asteroid mining which will help out with a lot of different metals.
Thoughts? Does this analysis seem accurate?
I need help finding a particular thread on LW, it was a discussion of either utility or ethics, and it utilized the symbols Q and Q* extensively, as well as talking about Lost Purposes. My inability to locate it is causing me brain hurt.
That's hard, because search engines have been dumbed down to the point where you can't google for a literal 'Q*'... A local search turned up http://lesswrong.com/lw/1zv/the_shabbos_goy/ as having one use of 'Q*' and bringing up 'lost purposes'.
Probably made even more difficult because I misremembered the letter. It was G*, and the article was The Importance of Goodhart's Law. It suddenly came back to me in a flash after seeing your reply, so thanks!
I was reading http://slatestarcodex.com/ and I found myself surprised again, by Yvain persuasively steelmanning an argument that he doesn't himself believe in at http://slatestarcodex.com/2013/06/22/social-psychology-is-a-flamethrower/
It's particularly ironic because in that very post, he mentions:
Which seems to be what I am falling for. He outright says:
So to sum up, here is my experience:
1: Yvain: "Here are some arguments. I don't fully believe most of them."
2: I start reading.
3: Michaelos: "Huh. All of these seem to be somewhat well reasoned arguments, there are links, and I can follow the logic on most of them."
4: At some point, I forget the "Yvain doesn't believe this." Tag.
5: I then read his summary which points out that these also have entirely opposite summaries which are also justified.
6: I find myself flabbergasted that I've made the same mistake about Yvain's writing again.
Based on this, I get the feeling I should be doing something differently when I read Yvain's articles, but I'm not even sure what that something is.
You should probably update towards "being convincing to me is not sufficient evidence of truth." Everything got easier once I stopped believing I was competent to judge claims about X by people who investigate X professionally. I find it better to investigate their epistemic hygiene rather than their claims. If their epistemic hygiene seems good (it can be domain-specific), I update towards their conclusions on X.
I'm looking for good, free, online resources on SQL and/or VBA programming, to aid in the hunt for my next job. Does anyone have any useful links? As well, any suggestions on a good program to use for SQL?
What do you mean by "a good program to use for SQL?" A database engine to run queries in? A command line or GUI client for connecting to such a database? Something else entirely?
For what it's worth if you're looking for a database engine, my recommendation is Postgres. Free, open source, and a lot stricter than MySQL, even if you make MySQL as strict as you possibly can.
As for learning, I don't know any tutorials that are still around nowadays. I do recommend if you're learning it, to actually build something where you need to use queries.
Toy example: Build a weblog.
This should take you through exercises from very basic and easy statements through to some more advanced topics (grouping etc), and I find using a skill incredibly valuable to learning and internalising that skill. My first computer program was a blog, and while it was a disaster and a mess in many ways, I learned (or internalised) a lot about programming, and a lot about SQL in the process.
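If you want to start practising queries without installing a database server at all, one low-friction option (my suggestion, not something from the thread) is Python's built-in sqlite3 module. A minimal sketch, with a made-up "weblog" schema in the spirit of the toy example above:

```python
import sqlite3

# In-memory database: nothing to install or configure.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A toy weblog schema (illustrative names, not a real app).
cur.execute("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        author TEXT NOT NULL,
        title TEXT NOT NULL
    )
""")
cur.executemany(
    "INSERT INTO posts (author, title) VALUES (?, ?)",
    [("alice", "Hello world"), ("alice", "Second post"), ("bob", "Intro")],
)

# A grouping query: posts per author, busiest authors first.
cur.execute("""
    SELECT author, COUNT(*) AS n_posts
    FROM posts
    GROUP BY author
    ORDER BY n_posts DESC
""")
rows = cur.fetchall()
```

SQLite's SQL dialect is looser than Postgres's, so once you're comfortable it's worth re-running your queries against Postgres to catch what SQLite let slide.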
I answered as close as I can remember, but I think touch typing is more of something I kind of picked up as time went by, rather than something I specifically learned at one point in time. I remember pushing myself to practice touch typing at various points, but the general recollection I have is that I didn't really practice in a focused, systematic way, and yet now I can type this without needing to look at my keyboard (and in fact, when I look at my keyboard I'll be likely to make more mistakes).
So I probably picked it up in my early twenties with a lot of typing of homework and essays and posts on forums.
By “touch-typing” do you mean typing without looking at the keyboard, that and typing using all ten fingers, or that and using the formal start-with-your-fingers-on-the-home-row techniques?
For consistency with previous results, answer using your best guess as to what I mean. (Jura V nfxrq, V zrnag gur jubyr ubzrebj ohfvarff ohg V qvqa'g ernyvmr gung gbhpu-glcvat vf glcvpnyyl qrsvarq va gur oebnqrfg frafr lbh zragvba.)
I replied randomly
I personally regard this entire subject as a memetic hazard, and will rot13 accordingly.
Jung qbrf rirelbar guvax bs Bcra Vaqvivqhnyvfz, rkcynvarq ol Rqjneq Zvyyre nf gur pbaprcg juvpu cbfvgf:
Gur pbaprcg vf rkcynvarq nf n pbagenfg sebz gur pbairagvbany ivrj bs Pybfrq Vaqvivqhnyvfz, va juvpu gurer ner znal crefbaf naq gur Ohqquvfg-yvxr ivrj bs Rzcgl Vaqvivqhnyvfz, va juvpu gurer ner ab crefbaf.
V nfxrq vs gurer jrer nal nethzragf sbe Bcra Vaqvivqhnyvfz, be whfg nethzragf ntnvafg Pybfrq naq Rzcgl Vaqvivqhnyvfz gung yrnir BV nf gur bayl nygreangvir. Vpbcb Irggbev rkcynvarq vg yvxr guvf:
Gur rrevrfg cneg nobhg gur Snprobbx tebhc "V Nz Lbh: Qvfphffvbaf va Bcra Vaqvivqhnyvfz" vf gung gur crbcyr va gung tebhc gerng gur pbaprcg bs gurer orvat bayl bar crefba gur fnzr jnl gung Puevfgvnaf gerng gur pbaprcg bs n Tbq gung jvyy qnza gurve ybirq barf gb Uryy sbe abg oryvrivat va Uvz. Vg'f nf vs ab bar va gur tebhc ernyvmrf gur frpbaq yriry vzcyvpngvbaf bs gurer abg orvat nalbar ryfr, be znlor gurl qba'g rira pner.
I think it is true. Self awareness is not hardware (wetware, whatever-ware) dependent. Just upload yourself and everything would be just fine. You'll be on two places at the same time, but with no communications between your instances, the old and the new one.
The same situation here, only that you have more than one natural born upload. Many billion, in fact.
The naturalism leads to this (frightening) conclusion.
Doesn't that black box the process of uploading?
I am not sure what you mean by this blackboxing.
But to think that the process of consciousness will work inside a computer, but will not work inside some other human skull, is naive.
It should work either in both places or nowhere.
People respond to this with "My memories are crucial, they are my unique identifier!". Well, you can forget pretty much everything and you will feel the same way. Besides, at every moment that you are self-aware, you are remembering different little pieces of everything; it doesn't matter what exactly. It might be a memory of a total solar eclipse, and millions have almost the same short movie in their heads. Nothing unique here.
Consciousness is a funny algorithm, running everywhere. This is why you should care about the future and behave accordingly in the present.
Black boxing is when a complicated process is skipped over in reasoning. You supposed that mind uploading was possible for the sake of argument, to support a conclusion outside of the argument.
I see no reason why uploading would be impossible, just as I see no reason why interstellar travel would be impossible.
I have no idea how to actually do both, but that's another matter.
If the naturalistic view is valid, it is difficult to see a reason why those two would be impossible. But if the Universe is a magic place, then of course. It's possible that they are both impossible due to some spell of a witch, or something.
Still, I do assign a small probability to the possibility that consciousness is something not entirely computable and therefore not executable on some model of a Turing machine. But then again, I consider that probability quite negligible.
Does it matter what consciousness is made out of for mind uploading to be possible?
Of course. If some of us are right, the consciousness is an algorithm running on a substrate able to compute it.
Then transplantation to another substrate is surely possible. How difficult this copying actually is, I wonder.
That's all assuming no magic is involved here. No spirituality, no soul, and no other holy crap.
But when we embrace the algorithmic nature of consciousness, intelligence, memories and so on, we lose the unique identifier so dear to most otherwise rational people. Their mantra goes "You only live once!" or "Everyone is a unique and unrepeatable person!". Yes, sure. So when I was born, a signal traveled across the Universe to change it from a place where I could be born to a place where this possibility has now expired for good? May I ask, is this signal faster than light? If it isn't ... well, it isn't good enough.
I am just an algorithm, being computed here and there, before and now.
I forgot to mention this, but I also tried my hand at writing an essay about this sort of thing: finding the physical manifestation of consciousness. If I could vouch for the rigor of it, I'd have posted it to the Facebook group already, but alas, I can't, though it may be of some use here.
Wait, so that's where the whole 'YOLO' thing/meme comes from? I notice that I am confused...
How does this square with chaos theory, which models behaviour that diverges greatly due to infinitesimal changes at the start?
What has it got to do with chaos theory?
How does the identity of a single person square with it? Wouldn't a tiny change convert me into somebody else?
What would you do on the hypothesis that this was true that you wouldn't do on the hypothesis that it was false?
Honestly? I'd start taking antidepressants, and then embark on a life-long quest to destroy the Universe via high-energy particle experiments, or perhaps an unfriendly AI.
I endorse this theory and it all adds up to normality: in the end, the theories that you offer as alternatives are all true. (I have not read anything other than your comment.)
How can they, if they're mutually exclusive?
Whew, Karma. Also, why did this get downvoted so much? I'd appreciate the skepticism a lot more in the form of an argument. (No, seriously, I'd appreciate skeptical argument way more than any abstract philosophical argument should be appreciated)
The belief that they are mutually exclusive is confusion.
I don't understand.
If there's only one person and everyone else is simulated in their minds then that simulation is powerful and uncontrollable enough that for all practical purposes they can act like there are other people.
The concept is unlike traditional solipsism, if that's what you're referring to?
I haven't read past what you posted but it seems identical to me.
This concept is unlike your example, because it is still possible for this one person carrying the simulation to create an offspring or clone, and it would in time become two separate people. Open Individualism states that if the one person carrying the simulation were to somehow reproduce themselves, there would still only be one person.
Past what I posted? Where are you?
Steven Landsburg at TBQ has posted a seemingly elementary probability puzzle that has us all scratching our heads! I'll be ignominiously giving Eliezer's explanation of Bayes' Theorem another read, and in the meantime I invite all you Bayes-warriors to come and leave your comments.
Meta question. Is it better to correct typos and minor, verifiable factual errors (e.g. a date being a year off) in a post in the post's comment thread or a PM to the author?
I prefer PMs and do them often for both comments & posts. A minor correction is of no enduring interest, and it's better if it doesn't take up space publicly. (Can you imagine if every Wikipedia article could only be read as a sequence of diffs? That's what doing minor corrections in public is like.)