You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.
Comment author:lukeprog
30 September 2013 03:42:37AM
6 points
[-]
Much to my surprise, Richard Dawkins and Jon Stewart had a fairly reasonable conversation about existential risk on the Sept. 24, 2013 edition of The Daily Show. Here's how it went down:
STEWART: Here's my proposal... for the discussion tonight. Do you believe that the end of our civilization will be through religious strife or scientific advancement? What do you think in the long run will be more damaging to our prospects as a human race?
In reply, Dawkins says Martin Rees (of CSER) thinks humanity has a 50% chance of surviving the 21st century, and one cause for such worry is that powerful technologies could get into the hands of religious fanatics. Stewart replies:
STEWART: ...[But] isn't there a strong probability that we are not necessarily in control of the unintended consequences of our scientific advancement?... Don't you think it's even more likely that we will create something [for which] the unintended consequence... is worldwide catastrophe?
DAWKINS: That is possible. It's something we have to worry about... Science is the most powerful to do whatever you want to do. If you want to do good, it's the most powerful way to do good. If you want to do evil, it's the most powerful way to do evil.
STEWART: ...You have nuclear energy and you go this way and you can light the world, but you go this [other] way, and you can blow up the world. It seems like we always try [the blow up the world path] first.
DAWKINS: There is a suggestion that one of the reasons that we don't detect extraterrestrial civilizations is that when a civilization reaches the point where it could broadcast radio waves that we could pick up, there's only a brief window before it blows itself up... It takes many billions of years for evolution to reach the point where technology takes off, but once technology takes off, it's then an eye-blink — by the standards of geological time — before...
STEWART: ...It's very easy to look at the dark side of fundamentalism... [but] sometimes I think we have to look at the dark side of achievement... because I believe the final words that man utters on this Earth will be: "It worked!" It'll be an experiment that isn't misused, but will be a rolling catastrophe.
DAWKINS: It's a possibility, and I can't deny it. I'm more optimistic than that.
STEWART: ... [I think] curiosity killed the cat, and the cat never saw it coming... So how do we put the brakes on our ability to achieve, or our curiosity?
DAWKINS: I don't think you can ever really stop the march of science in the sense of saying "You're forbidden to exercise your natural curiosity in science." You can certainly put the brakes on certain applications. You could stop manufacturing certain weapons. You could have... international agreements not to manufacture certain types of weapons...
And then the conversation shifted back to religion. I wish Dawkins had mentioned CSER's existence.
And then later in the (extended, online-only) interview, Stewart seemed unsure as to whether consciousness persisted after one's brain rotted, and also unaware that 10^22 is a lot bigger than a billion. :(
I'm beginning to think that we shouldn't be surprised by reasonably intelligent atheists having reasonable thoughts about x-risk. Both of the two reasonably intelligent, non-LWer atheists I talked to in the past few weeks about LW issues agreed with everything I said on them and said that it all seemed sensible and non-surprising. Most LW users started out as reasonably intelligent atheists. Where did the "zomg everyone is so dumb and only LW can think" meme originate from, exactly? Is there any hard data on this topic?
Comment author:Rain
30 September 2013 11:13:00PM
*
2 points
[-]
Jon's what I call normal-smart. He spends most of his time watching TV, mainly US news programs, and they're quite destructive to rational thinking, even if the purpose is for comedic fodder and to discover hypocrisy. He's very tech averse, letting the guests he has on the show come in with information he might use, trusting (quite good) intuition to fit things into reality. As such, I like to use him as an example of what more normal people feel about tech / geek issues.
Every time he has one of these debates, I really want to sit down as moderator so I can translate each side, since they often talk past each other. Alas, it's a very time restricted format, and I've only seen him fact check on the fly once (Google, Wikipedia).
The number thing was at least partly a joke, along the lines of "bigger than 10 doesn't make much sense to me" - scope insensitivity humor. I've done similar before.
Comment author:CAE_Jones
29 September 2013 07:19:40PM
*
6 points
[-]
I'm seeing a lot of comments in which it is implicitly assumed that most everyone reading lives in a major city where transportation is trivial and there is plenty of memetic diversity. I'm wondering if this assumption is generally accurate and I'm just the odd one out, or if it's actually kinda fallacious.
Comment author:CellBioGuy
29 September 2013 11:31:37PM
1 point
[-]
A city of ~200,000 people if you include the outlying rural areas, in which you can go from the several block wide downtown to farmland in 4-5 miles in the proper directions. Fifteen minutes from another city of 60,000 which is very much a state college town. Forty minutes away from a city of nearly 500,000 people.
Granted the city of ~200,000 has a major university and a number of biotech companies.
Comment author:ChristianKl
29 September 2013 10:23:18PM
0 points
[-]
I think living in a big city is the standard that most people here consider normal. It's like living in the first world. We know that there are people from India who visit but we still see being from the first world as normal.
When you have the choice between living in a place with memetic diversity or not living in such a place the choice seems obvious.
Comment author:knb
29 September 2013 06:31:35AM
*
10 points
[-]
I've been working on a series of videos about prison reform. During my reading, I came across an interesting passage from wikipedia:
In colonial America, punishments were severe. The Massachusetts assembly in 1736 ordered that a thief, on first conviction, be fined or whipped. The second time he was to pay treble damages, sit for an hour upon the gallows platform with a noose around his neck and then be carted to the whipping post for thirty stripes. For the third offense he was to be hanged.[4] But the implementation was haphazard as there was no effective police system and judges wouldn't convict if they believed the punishment was excessive. The local jails mainly held men awaiting trial or punishment and those in debt.
What struck me was how preferable these punishments (except the hanging, but that was very rare) seem compared to the current system of massive scale long-term imprisonment. I would much rather pay damages and be whipped than serve months or years in jail. Oddly, most people seem to agree with Wikipedia that whipping is more "severe" than imprisonment of several months or years (and of course, many prisoners will be beaten or raped in prison). Yet I think if you gave people being convicted for theft a choice, most of them would choose the physical punishment instead of jail time.
Comment author:knb
15 October 2013 08:44:22PM
1 point
[-]
Isn't freedom important for human dignity? It seems that any kind of punishment infringes on human dignity to some extent. Also, remember that prisoners are often subject to beatings and rape by other prisoners or guards--something which is widely known.
Comment author:ChristianKl
15 October 2013 09:48:20PM
*
-2 points
[-]
Isn't freedom important for human dignity?
According to the standard moral doctrine it's not as central as bodily integrity. The state is allowed to take away freedom of movement but not bodily integrity or force people to work as slaves.
Also, remember that prisoners are often subject to beatings and rape by other prisoners or guards--something which is widely known.
That's a feature of the particular way a prison is run.
Comment author:Lumifer
01 October 2013 04:31:11PM
3 points
[-]
I would much rather...
Don't look at it from the perp point of view, look at it from an average-middle-class-dude or a suburban-soccer-mom point of view.
If there's a guy who, say, committed a robbery in your neighborhood, physical punishment may or may not deter him from future robberies. You don't know and in the meantime he's still around. But if that guy gets sent to prison, the state guarantees that he will not be around for a fairly long time.
That is the major advantage of prisons over fines and/or physical punishments.
Comment author:knb
02 October 2013 09:25:46AM
*
2 points
[-]
If there's a guy who, say, committed a robbery in your neighborhood, physical punishment may or may not deter him from future robberies. You don't know and in the meantime he's still around. But if that guy gets sent to prison, the state guarantees that he will not be around for a fairly long time.
This is totally obvious, I'm not sure why you felt you needed to point that out.
The point of my comment is that it is interesting that prison isn't viewed as cruel, even though it's obviously more harsh than alternatives. Obviously there are other reasons people prefer prison as a punishment for others.
Comment author:[deleted]
02 October 2013 01:59:56AM
2 points
[-]
That's only an advantage if the expected cost to society of keeping him in prison is less than the expected cost (broadly construed) to society of him keeping on robbing.
Comment author:Desrtopa
01 October 2013 04:58:52PM
*
3 points
[-]
On the other hand, making people spend long periods of time in a low-trust environment surrounded by criminals seems to be a rather effective way of elevating recidivism when they do get out, so the advantage as implemented in our system is on rather tenuous footing.
And of course, the prison system comes with the major disadvantage that imprisoning people is a highly expensive punishment to implement.
Comment author:Lumifer
01 October 2013 05:21:50PM
2 points
[-]
I am not arguing that prisons are the proper way to deal with crime. All I'm saying is that arguments in favor of imprisonment as the preferred method of punishing criminals exist.
Comment author:TheOtherDave
29 September 2013 04:21:07PM
7 points
[-]
I'm reminded of the perennial objections to Torture vs Dust Specks to the effect that torture is a sacred anti-value which simply cannot be evaluated on the same axis as non-torture punishments (such as jail time, presumably), regardless of the severities involved..
Comment author:roystgnr
30 September 2013 05:10:56PM
4 points
[-]
The key quote, "Incarceration destroys families and jobs, exactly what people need to have in order to stay away from crime." If we had wanted to create a permanent underclass, replacing corporal punishment with prison would have been an obvious step in the process.
Obviously that's not why people find imprisonment so preferable to torture, though; TheOtherDave's "sacred anti-value" explanation is correct there. It would be interesting to know exactly how a once-common punishment became seen as unambiguously evil, though, in the face of "tough on crime" posturing, lengthening prison sentences, etc.
Comment author:Moss_Piglet
01 October 2013 02:46:34PM
0 points
[-]
Because corporal punishment is an ancient display of power; the master holding the whip and the servant being punished for misbehavior. It's obviously effective, and undoubtedly more humane than incarceration, but it's also anathema to the morality of the "free society" where everyone is supposed to be equal and thus no-one can hold the whip.
(Heck, even disciplining a child is considered grounds to put the kid in foster care; if you want corporal punishment v incarceration, that's a hell of a dichotomy. And for every genuinely abused kid CPS saves, how many healthy families get broken up again?)
The idea is childish and unrealistic, but nonetheless popular because it plays on the fear and resentment people feel towards those above them. And in a democracy, popular sentiment is difficult to defeat.
Comment author:Viliam_Bur
01 October 2013 10:04:50AM
5 points
[-]
Maybe it's a part of human hypocrisy: we want to punish people, but in a way that doesn't make our mirror neurons feel their pain. We want people to be punished, without thinking about ourselves as the kind of people who want to harm others. We want to make it as impersonal as possible.
So we invent punishments that don't feel like we are doing something horrible, and yet are bad enough that we would want to avoid them. Being locked behind bars for 20 years is horrible, but there is no speficic moment that would make an external observer scream.
Comment author:TheOtherDave
30 September 2013 05:50:24PM
0 points
[-]
It is, incidentally, not obvious to everyone that the desire to create a stable underclass didn't drive our play a significant role in our changing attitudes towards prisons... in fact, it's not even obvious to me, though I agree that they didn't play a significant role in our changing attitudes towards torturing criminals.
Why is "downvoted" so frequently modified by "to oblivion"? Can we please come up with a new modifier here? This is already a dead phrase, a cliche which seems to get typed without any actual thought going into it. Wouldn't downvoting "to invisibility" or "below the threshold" or even just plain "downvoting", no modifier, make a nice change?
Comment author:drethelin
27 September 2013 09:10:04PM
5 points
[-]
Slang vocabulary tends to become more consistent and repetitive over time in my experience. New phrases will appear and then go to fixation until everyone uses them. The only answer is to try to be as creative as possible in your own word choices.
The Relationship Escalator-- an overview of assumptions about relationships, and exceptions to the assumptions. The part that surprised me was the bit about the possibility of dialing back a relationship without ending it.
I ate something I shouldn't have the other day and ended up having this surreal dream where Mencius Moldbug had gotten tired of the state of the software industry and the Internet and had made his personal solution to it all into an actual piece of working software that was some sort of bizarre synthesis of a peer-to-peer identity and distributed computing platform, an operating system and a programming language. Unfortunately, you needed to figure out an insane system of phoneticized punctuation that got rewritten into a combinator grammar VM code if you wanted to program anything in it. I think there even was a public Github with reams of code in it, but when I tried to read it I realized that my computer was actually a cardboard box with an endless swarm of spiders crawling out of it while all my teeth were falling out, and then I woke up without ever finding out exactly how the thing was supposed to work.
One of Urbit’s problems is that we don’t exactly have a word for what Urbit is. If there is such a word, it somehow means both “operating system” and “network protocol,” while somehow also implying “functional” and “deterministic.”
Not only is there no such word, it’s not even clear there should be one. And if there was, could we even hear it? As Wittgenstein said: if a lion could talk, we would not understand him. But heck, let’s try anyway.
Comment author:David_Gerard
08 November 2013 05:46:42PM
*
1 point
[-]
For an example of fully rampant Typical Mind Fallacy in Urbit, see the security document. About two-thirds of the way down, you can actually see Yarvin transform into Moldbug and start pontificating on how humans communicating on a network should work, and never mind the observable evidence of how they actually have behaved whenever each of the conditions he describes have obtained.
The very first thing people will do with the Urbit system is try to mess with its assumptions, in ways that its creators literally could not foresee (due to Typical Mind Fallacy), though they might have been reasonably expected to (given the real world as data).
Comment author:niceguyanon
26 September 2013 04:31:25PM
*
8 points
[-]
Video playback speed was mentioned on the useful habits repository thread a few weeks ago and I asked how I could do the same. Youtube's playback speed option is not available on all videos. Macs apparently have a plug-in you can download, I don't own a mac so that's not helpful. You could download the video then play it back, but that wastes time. I just learned a solution that works across all OS' with out the need to download the video first.
Comment author:networked
26 September 2013 12:34:51PM
*
8 points
[-]
Less Wrong and its comments are a treasure trove of ethical problems, both theoretical and practical, and possible solutions to them (the largest one to my knowledge; do let me know if you are aware of a larger forum for this topic). However, this knowledge is not easy to navigate, especially to an outsider who might have a practical interest in it. I think this is a problem worth solving and one possible solution I came up with is to create a StackExchange-style service for (utilitarian, rationalist) ethics. Would you consider such a platform for ethical questions to be useful? Would you participate?
Possible benefits:
Making existing problems and their answers easier to navigate through the use of tagging and a stricter question-answer format.
Comment author:Viliam_Bur
25 September 2013 12:23:24PM
*
8 points
[-]
Anyone here familiar enough with General Semantics and willing to write an article about it? Preferably not just a few slogans, but also some examples of how to use it in real life.
I have heard it mentioned a few times, and it sounds to me a bit LessWrongish, but I admit I am too lazy now to read a whole book about it (and I heard that Korzybski is difficult to read, which also does not encourage me).
Comment author:ChristianKl
01 October 2013 12:04:04AM
0 points
[-]
I just started rereading Science and Sanity and maybe the project will develop into a lesswrong post.
When it comes to Korzybski being difficult to read I think it's because the idea he advocates are complex.
As he writes himself:
For those other readers who insist on translating the new terms with new structural implications into their old habitual language, and choose to retain the old terms with old structural implications and old semanatic relations this work will not appear simple.
It's a bit like learning a foreign language in a foreign language. In some sense that seems necessary.
A lot of dumb down elements of General Semantics made it into popular culture but the core seems to be intrinsicly hard.
Comment author:RomeoStevens
26 September 2013 01:12:16AM
*
0 points
[-]
Non-violent communication is the intellectual heir of E-prime which was the heir of semantic concerns in General Semantics. Recent books on the subject are well reviewed. It is a useful tool in communicating across large value rifts.
Comment author:fubarobfusco
27 September 2013 10:43:30PM
0 points
[-]
Does Rosenberg cite Bourland (or Korzybski) anywhere? I thought these were independent inventions that happened upon some tangential ideas about non-judgmental thinking.
Comment author:RomeoStevens
28 September 2013 01:26:15AM
0 points
[-]
I had thought that there was a link in someone Rosenberg worked with developing it but now I can't find anything. The elimination of the "to-be" verb forms does not seem explicit in NVC methodology. I think you are correct and they are independent.
Comment author:ChristianKl
26 September 2013 01:44:38PM
*
2 points
[-]
Non-violent communication is the intellectual heir of E-prime which was the heir of semantic concerns in General Semantics.
I don't think it makes sense to speak of a single framework as the heir of General Semantics. General Semantics influenced quite a lot.
General Semantics itself is quite complex. Nonviolent communication is pretty useless when you want to speak about scientific knowledge. General Semantics notions of thinking about relations and structure are on the other hand are quite useful.
A personal anecdote I'd like to share which relates to the recent polyphasic sleep post ( http://lesswrong.com/lw/ip6/polyphasic_sleep_seed_study_reprise/ ):
My 7 year old son who always tended to sleep long and late seems to have developed segmented sleep by himself in the last two weeks.
He claims to wake e.g. at 3:10 AM gets dressed, butters his school bread - and gets to bed again - in our family bed. It's no joke. He lies dressed in bed and his satchel is packed. And the interesting thing is: He is more alert and less bad tempered than before. He doesn't do afternoon naps though - at least none that I know of.
What can have caused this? Maybe the seed was that our children were always allowed to come into the family bed in the night (but only in the night) which they did often.
Comment author:Viliam_Bur
25 September 2013 10:41:11AM
*
1 point
[-]
I remember reading somewhere (sorry, no link) that waking up at the midnight, and then going to sleep again after an hour or so, was considered normal a few hundred years ago. Now this habit is gone, probably because we make the night shorter using artificial lights.
Yes. I know. See e.g. http://en.wikipedia.org/wiki/Segmented_sleep
I knew that beforehand. That was the reason I wasn't worried when my children woke up at night and crawled into our family bed (some other parents seem to worry.about the quality of their childrens sleep).
But I'm surprised that he actually segmented and that it went this far. I understood that artificial lighting - and we have enough of that - suppresses this segmentation.
Comment author:Viliam_Bur
25 September 2013 11:05:27AM
0 points
[-]
I understood that artificial lighting - and we have enough of that - suppresses this segmentation.
Perhaps it is not the light per se, but the fact that when you stay awake at evening, and wake up on alarm clock in the morning, the body learns to give up the segmented sleep to protect itself from the sleep deprivation. Maybe the time interval for your children between going to sleep and having to wake up is large enough.
Possibly. But he has been a late riser always and he doesn't really go to sleep earler than before. In fact he get earler than before. But maybe his sleep pattern just changes due to normal development.
My older son (9 years) also sometimes gets up in the night to visit the family bed. But I guess he is not awake long. He likes to build things and read or watch movies (from our file server) until quite late in the evening (often 10 PM). We allow that because he has no trouble getting up early.
Comment author:niceguyanon
25 September 2013 04:44:32AM
*
2 points
[-]
Do I have a bias or useful heuristic? If a signal is easy to fake, is it a bias to assume that it is disingenuous or is it an useful heuristic?
I read Robert Hanson's post about why there are so many charities specifically focusing on kids and he basically summed it up as signalling to seem kind, for potential mates, being a major factor. There were some good rebuttals in the comment sections but whether or not signalling is at play is not the point, I'm sure to a certain degree it is, how much? I don't know. The point is that I automatically dismiss the authenticity of a signal if the signal is difficult to authenticate. In this example it is possible for people to both, signal that they care about children for a potential mate, as well as actually really caring about children ( e.g. innate emotional response).
EDIT: Just to be clear, this is a question about signalling and how I strongly associate easy to fake signals with dishonest signalling, not about charities.
Comment author:niceguyanon
25 September 2013 01:36:03PM
1 point
[-]
Every heuristic involves a bias when you use it in some contexts.
Yes, but does it more often yield a satisfactory solution across many contexts if yes, then I'd label it a useful heuristic and if it is often wrong I would label it a bias.
You're not using your words as effectively as you could be. Heuristics are mental shortcuts, bias is a systematic deviation from rationality. A heuristic can't be a bias, and a bias can't be a heuristic. Heuristics can lead to bias. The utility of a certain heuristic might be evaluated based on an evaluation of how much computation using the heuristic saves versus how much bias using the heuristic will incur. Using a bad heuristic might cause an individual to become biased, but the heuristic itself is not a bias.
Comment author:Viliam_Bur
25 September 2013 09:23:38AM
1 point
[-]
I agree with your last sentence. The important thing should be how much good does the charity really do to those children. Are they really making their lives better, or is it merely some nonsense to "show that we care"?
Because there are many charities (at least in my country) focusing on providing children things they don't really need; such as donating boring used books to children in orphanages. Obviously, "giving to children in orphanages" is a touching signal of caring, and most people don't realize that those children already have more books than they can read (and they usually don't wish to read the kind of books other people are throwing away, because honestly no one does). In this case, the real help to children in orphanages would be trying to change the legislation to make their adoption easier (again, this is an issue in my country, in your part of the world the situation may be different), helping them avoid abuse, or providing them human contact and meaningful activities. But most people don't care about the details, not even enough to learn those details.
Comment author:Eugine_Nier
25 September 2013 07:57:45AM
0 points
[-]
This depends on what you mean by "care", i.e., they care about children in the sense that they derive warm fuzzies from doing things that superficially seem to help them. They don't care in the sense that they aren't interested in how much said actions actually help children (or whether they help them at all).
Comment author:Viliam_Bur
26 September 2013 06:55:45AM
*
5 points
[-]
Well, because that's in near mode.
If I do something for myself, and there is no obvious result, I see that there is no obvious result, and i disappoints me. If I do something for other people, there is always an obvious result: I feel better about myself.
Comment author:Viliam_Bur
27 September 2013 08:08:15AM
4 points
[-]
Because other people reward you socially for doing things for other people. If you do something good for person A, it makes sense for a person A to reward you -- they want to reinforce the behavior they benefit from. But it also makes sense for an unrelated person B to reward you, despite not benefiting from this specific action -- they want to reinforce the general algorithm that makes you help other people, because who knows, tomorrow they may benefit from the same algorithm.
The experimental prediction of this hypothesis is that the person B will be more likely to reward you socially for helping person A, if the person B believes they belong to the same reference class as person A (and thus it is more likely that an algorithm benefiting A would also benefit B).
Now who would have a motivation to reward you for helping yourself? One possibility is a person who really loves you; such person would be happy to see you doing things that benefit you. Parents or grandparents may be in that position naturally.
Another possibility is a person who sees you as a loyal member of their tribe, but not a threat. For such person, your success is a success of the tribe is their success. They benefit from having stronger allies; unless those allies becoming strong changes their position within the tribe. So one would help members of their tribe who are significantly weaker... or perhaps even significantly stronger... in either case the tribe becomes stronger and the relative position within the tribe is not changed. The first part is teachers helping their students, or tribe leaders helping their tribe except for their rivals; the second part is average tribe members supporting their leader.
Again, the experimental prediction would be that when you join some "tribe", the people stronger than you will support you at the beginning, but then will be likely to stab you in the back when you reach their level.
Now, how to use this knowledge for your success in the real life. We are influenced by social rewards whether we want it or not. One strategy could be trying to reward myself indirectly -- for example make a commitment that when I make something useful for myself, I will reward myself by exposing myself to a friendly social interaction. Second strategy is to find company of people who love me, by using "do they reward me for helping myself?" as a filter. (Problem is how to tell a difference between these people, and those that reward me for being a weak member of their tribe, and will later backstab me when I become stronger.) Third strategy is to find company of people much stronger than me with similar values. (And not forget to switch to even stronger people when I become strong.) Another strategy could be to join a group that feels far from the victory... a group that is still in the "conquering the world" mode, not in the "sharing the spoils" mode. (Be careful when the group reaches some victories.)
Anecdotal verification: one of my friends said that when he was running out of money, it made sense for him to buy meals for other people. Those people didn't reciprocate, but third parties were more likely to help him.
Comment author:Viliam_Bur
27 September 2013 11:07:42AM
*
2 points
[-]
Then I guess people from CFAR should go to some universities and give lectures about... effective altruism. (With the expected result that the students will be more likely to support CFAR and attend their seminars.) Or I could try this in my country when recruiting for my local LW group.
I guess it also explains why religious groups focus so much on charity. It is difficult to argue against a group that many people associate with "helping others", even if other actions of the group hurt others. The winning strategy is probably making the charity 10% of what you really do, but 90% of what other people associate with you.
EDIT: Doing charity is the traditional PR activity of governments, U.N., various cults and foundations. I feel like reinventing the wheel again. The winning strategies are already known and fully exploited. I just didn't recognize them as viable strategies for everyone including me, because I was successfully conditioned to associate them with someone else.
Comment author:drethelin
26 September 2013 07:44:37PM
0 points
[-]
Because it's considered good to even try to help someone else so you care less about outcomes. EG donating to charity is a good act regardless of whether you check to see if your donation saved a life. On the other hand, doing something for yourself that has no real benefits is viewed as a waste of time.
Comment author:curiousepic
25 September 2013 12:30:14AM
5 points
[-]
It seems to be pretty well decided that (as opposed to directly promoting Less Wrong, or Rationality in general), spreading HPMoR is a generally good idea. What are the best ways to go about this, and has anyone undertaken a serious effort?
I came to the conclusion, after considering creating some flyers to post around our meetup's usual haunts, that online advocacy would be much more efficient and cost effective. Then, after thinking that promotion on large sites with high signal to noise is mostly useless, realized that sharing among smaller communities that you are already a part of (game/specific interest forums, Facebook groups, etc.) might increase likelihood of a clickthrough, due to an even modest amount of social clout and in-group effect (as opposed to creating an account just to spam). And, posting (and bumping) is a very trivial inconvenience - but if you are still held back by the effort of creating a blurb, I'm happy to provide the one I used.
Comment author:Coscott
30 September 2013 12:14:24AM
0 points
[-]
Convince me of this claim that you think is well decided.
I am not convinced that from the viewpoint of a non-rationalist that HPMoR doesn't have many of the same problems as Spock. I can see many people reading the book, feeling that HP is too "evil," and deciding that "rationality" is not for them.
Also, EY said "Authors of unfinished stories cannot defend themselves in the possible worlds where your accusation is unfair." This should swing both ways. If it turns out that HP goes crazy because he was being meta and talking to himself too much, then spreading HPMoR is probably not as good an idea.
Comment author:gwern
26 September 2013 02:56:57PM
3 points
[-]
Of course, you should only do this where the forum has made the foolish choice to allow signatures. (One of the things I appreciate about Reddit/LW compared to forums is how they strongly discourage signatures.)
There's an annoying assumption that no parent would want their child to have a greatly extended lifespan, but I think it's a reasonable overview otherwise, or at least I agree that there's not going to be a major increase in longevity without a breakthrough. Lifestyle changes won't do it.
Comment author:Coscott
24 September 2013 11:57:11PM
*
6 points
[-]
Poll Question: What are communities are you active in other than Less Wrong?
Communities that you think are closely related to Less Wrong are welcome, but I am also wondering what other completely unrelated groups you associate with. How do you think such communities help you? Are there any that you would recommend to an arbitrary Less Wronger?
Comment author:Username
27 September 2013 05:48:56PM
1 point
[-]
Orthogonal to LW, I'm very active in my university's Greek community, serving as VP of a fraternity. It's been excellent social training and I've had a very positive experience.
Comment author:beoShaffer
26 September 2013 04:42:11AM
2 points
[-]
I'm active in Toastmasters and martial arts (mostly the community of my specify school). Overall Toastmasters seems pretty effective at its stated goals of improving public speaking and leadership skills. Its also fun (at least for me). Additionally, both force me to actually interact with other people, which is nice and not something that the rest of my live provides.
Comment author:Coscott
25 September 2013 08:43:00PM
3 points
[-]
The only two communities I am currently active in right now (other than career/family communities) are Less Wrong and Unitarian Universalism.
In the past had a D&D group that I participated very actively in. I think that the people I played D&D with in high school had a very big and positive effect on my development.
I think that I would like to and am likely to develop a local community of people to play strategy board games in the future.
Comment author:blacktrance
25 September 2013 08:09:16PM
2 points
[-]
I'm active in (though not really a member of) the "left-libertarian" community, associated with places like Center for a Stateless Society (though I myself am not an anarchist) and Bleeding Heart Libertarians. I'm also a frequent reader and occasional commenter on EconLog.
Less related, I'm an active poster on GameFAQs and on a message board centered around the Heroes of Might and Magic game series.
Comment author:Coscott
25 September 2013 08:37:20PM
0 points
[-]
I also used to be active on GameFAQs. For about a year in 2004 it was most of my internet activity, specifically the Pikmin boards. That was a long time ago though when I was a high school freshman.
Comment author:LM7805
25 September 2013 05:00:08PM
7 points
[-]
My local hackerspace, and broadly the US and European hacker communities. This is mainly because information security is my primary focus, but I find myself happier interacting with hackers because in general they tend not only to be highly outcome-oriented (i.e., inherently consequentialist), but also pragmatic about it: as the saying goes, there's no arguing with a root shell. (Modulo bikeshedding, but this seems to be more of a failure mode of subgroups that don't strive to avoid that problem.) The hacker community is also where I learned to think of communities in terms of design patterns; it's one of the few groups I've encountered so far that puts effort into that sort of community self-evaluation. Mostly it helps me because it's a place where I feel welcome, where other people see value in the goals I want to achieve and are working toward compatible goals. I'd encourage any instrumental rationalist with an interest in software engineering, and especially security, to visit a hackerspace or attend a hacker conference.
Until recently I was also involved in the "liberation technology" activism community, but ultimately found it toxic and left. I'm still too close to that situation to evaluate it fairly, but a lot of the toxicity had to do with identity politics and status games getting in the way of accomplishing anything of lasting value. (I'm also dissatisfied with the degree to which activism in general fixates on removing existing structures rather than replacing them with better ones, but again, too close to evaluate fairly.)
Comment author:maia
25 September 2013 05:59:30AM
9 points
[-]
Contra dance. Closely correlated with LessWrong; also correlated with nerdy people in general. I would recommend it to most LessWrongers; it's good even for people who are not generally good at dancing, or who have problems interacting socially. (Perhaps even especially for those people; I think of it as a 'gateway dance.')
Other types of dance, like swing dance. Also some correlation with LessWrong, somewhat recommended but this depends more on your tastes. Generally has a higher barrier to entry than contra dancing.
Comment author:drethelin
25 September 2013 06:58:29PM
1 point
[-]
I'm going to second Contra Dance. It's really fun and easy to start while having a decent learning curve such that you don't hit a skill ceiling fast. Plus you meet lots of people and interact with them in a controlled, friendly, cooperative fun fashion.
Comment author:JQuinton
24 September 2013 09:32:53PM
14 points
[-]
Is there a name for this following bias?
So I've debated a lot of religious people in my youth, and a common sort of "inferential drift", if you can call if that, is that they believe that if you don't think something is true or doesn't exist, then this must mean that you don't want said thing to be true or to exist. It's like a sort of meta-motivated reasoning; they are falsely attributing your conclusions due to motivated reasoning. The most obvious examples are reading any sort of Creationist writing that critiques evolution, where they pretty explicitly attribute accepting the theory of evolution to a desire for god to not exist.
I've started to notice it in many other highly charged, mind-killing topics as well. Is this all in my head? Has anyone else experienced this?
Comment author:JQuinton
25 September 2013 05:15:17PM
-1 points
[-]
That does seem close to Bulverism. But what I described seem to be happening at a subconscious bias level, where people are somewhat talking past each other due to a sort of hidden assumption of Bulverism.
No, that is a mere assertion (which may or may not be true). If they claimed that he is wrong because he is engaging in motivated reasoning, then that would be ad hominem.
Comment author:blashimov
26 September 2013 02:56:59PM
0 points
[-]
Wait, what? This might be a little off topic, but if you assert that they lack evidence and are drawing conclusions based on motivated reasoning, that seems highly relevant and not ad hominem. I guess it could be unnecessary, as you might try to focus exactly on their evidence, but it would seem reasonable to look at the evidence they present, and say "this is consistent with motivated reasoning, for example you describe many things that would happen by chance but nothing similar contradictory, so there seems to be some confirmation bias" etc.
Comment author:Moss_Piglet
24 September 2013 10:29:16PM
5 points
[-]
I used to get a lot of people telling me I was an atheist because I either didn't want there to be a god or because I wanted the universe to be logical (granted, I do want that, but they meant it in the pejorative Vulcan-y sense). I eventually shut them up with "who doesn't want to believe they're going to heaven?" but it took me a while to come up with that one.
I don't understand it either, but this is a thing people say a lot.
I'm back in school studying computer science (with a concentration in software engineering), but plan on being a competent programmer by the time I graduate, so I figure I need to learn lots of secondary and tertiary skills in addition to those that are actually part of the coursework. In parallel to my class subjects, I plan on learning HTML/CSS, SQL, Linux, and Git. What else should be on this list?
Preliminaries: Make sure you can touch type, being able to hit 50+ wpm without sweat makes it a lot easier to whip up a quick single-screen test program to check up something. Learn a text editor with good macro capabilities, like Vim or Emacs, so you can do repetitive structural editing of text files without having to do every step by hand. Get into the general habit of thinking that whenever you find yourself doing several repetitive steps by hand, something is wrong and you should look into ways to automate the loop.
Working with large, established code bases, like Vladimir_Nesov suggested, is what you'll probably end up doing a lot as a working programmer. Better get used to it. There are many big open-source projects you can try to contribute to.
Unit tests, test-driven development. You want the computer to test as much of the program as possible. Also look into the major unit testing frameworks for whatever language you're working on.
Build systems, rigging up a complex project to build with a single command line command. Also look into build servers, nightly builds and the works. A real-world software project will want a server that automatically builds the latest version of the software every night and makes noise to the people responsible if it won't build, or if an unit test fails.
Oh, and you'll want to know a proper command line for that. So when learning Linux, try to do your stuff in the command line instead of sticking to the GUI. Figure out where the plaintext configuration files driving whatever programs you use live and how to edit them. Become suspicious about software that doesn't provide plaintext config files. Learn about shell scripting and onliners, and what the big deal in Unix about piping output from one program to the next is.
Git is awesome. After you've figured out how to use it on your own projects, look into how teams use it. Know what people are talking about when they talk about a Git workflow. Maybe check out Gerrit for a collaborative environment for developing with Git. Also check out how bug tracking systems and how those can tie into the version control.
Know some full stack of web development. If you want a web domain running a neat webapp, how would you go about getting the domain, arranging for the hosting, installing the necessary software on the computer, setting up the web framework and generating the pages that do the neat thing? Can you do this by rolling your own minimal web server instead of Apache and your own minimal web framework instead of whatever out of the box solution you'd use? Then learn a bit about the out of the box web server and web framework solutions.
Have a basic idea about the JavaScript ecosystem for frontend web development.
Look into cloud computing. It's new enough not to have made it into many curricula yet. It's probably not going to go away anytime soon. How would you use it, why would you want to use it, when would you not want to use it? Find out why map-reduce is cool.
Learn how the Internet works. Learn why people say that the Internet was made by pros and the web was made by amateurs. Learn how to answer the interview question "What happens between typing an URL in the address field and the web page showing up in the browser" in as much detail as you can.
Look into the low-level stuff. Learn some assembly. Figure out why Forth is cool by working through the JonesForth tutorial. Get an idea how computers work below the OS level. The Elements of Computing Systems describes this for a toy computer. Read up on how people programmed a Commodore 64, it's a lot easier to understand than a modern PC.
Learn about the difference between userland and kernel space in Linux, and how programs written (in assembly) right on top of the kernel work. See how the kernel is put together. See if you can find something interesting to develop in the kernel-side code.
Learn out how to answer the interview question "What happens between pressing a key on the keyboard and a letter showing up on the monitor" in as much detail as you can.
Write a simple ray-tracer and a simple graphics program that does something neat with modern OpenGL and shaders. If you want to get really crazy with this, try writing a demoscene demo with lots of graphical effects and a synthesized techno soundtrack. If you want even crazier, try to make it a 4k intro.
Come up with a toy programming language and write a compiler for it.
Write a toy operating system. Figure out how to make a thing that makes a PC boot off the bare iron, prints "Hello world" on the screen and doesn't do anything beyond that. Then see how far you can get in making the thing do other things.
Comment author:Viliam_Bur
01 October 2013 10:12:35AM
0 points
[-]
not having to pay attention to the keyboard, your fingers should know what do without taking up mindspace
Yes, this is a critical skill. Especially when someone is learning programming, it is so sad to see their thinking interrupted all the time by things like: "when do I find the '&' key on my keyboard?", and when the key is finally found, they already forgot what they wanted to write.
your typing being able to keep up with your thinking
This part is already helped by many development environments, where you just write a few symbols and press Ctrl+space or something, and it completes the phrase. But this helps only with long words, not with symbols.
Comment author:gwern
30 September 2013 09:20:40PM
*
5 points
[-]
It's not the top speed, it's the overhead. It is incredibly irritating to type slowly or make typos when you're working with a REPL or shell and are tweaking and retrying multiple times: you want to be thinking about your code and all the tiny niggling details, and not about your typing or typos.
Comment author:sketerpot
24 September 2013 11:16:30PM
*
6 points
[-]
It's a good start, but I notice a lack of actual programming languages on that list. This is a very common mistake. A typical CS degree will try to make sure that you have at least basic familiarity with one language, usually Java, and will maybe touch a bit on a few others. You will gain some superpowers if you become familiar with all or most of the following:
A decent scripting language, like Python or Ruby. The usual recommendation is Python, since it has good learning materials and an easy learning curve, and it's becoming increasingly useful for scientific computing.
A lisp. Reading Structure and Interpretation of Computer Programs will teach you this, and a dizzying variety of other things. It may also help you achieve enlightenment, which is nice. Seriously, read this book.
Something low-level, usually C.
Something super-low-level: an assembly language. You don't have to be good at writing in it, but you should have basic familiarity with the concepts. Fun fact: if you know C, you can get the compiler to show you the corresponding assembly.
You should take the time to go above-and-beyond in studying data structures, since it's a really vital subject and most CS graduates' intuitive understanding of it is inadequate. Reading through an algorithms textbook in earnest is a good way to do this, and the wikipedia pages are almost all surprisingly good.
When you're learning git, get a GitHub account, and use it for hosting miscellaneous projects. Class projects, side projects, whatever; this will make acquiring git experience easier and more natural.
I'm sure there's more good advice to give, but none of it is coming to mind right now. Good luck!
Sorry if I wasn't clear. I intended the list to include only skills that make you a more valuable programmer that aren't explicitly taught as part of the degree. Two Java courses (one object-oriented) are required as is a Programming Languages class that teaches (at least the basics of) C/C++, Scheme, and Prolog. Also, we must take a Computer Organization course that includes Assembly (although, I'm not sure what kind). Thanks for the advice.
Comment author:LM7805
25 September 2013 05:12:41PM
0 points
[-]
I've TAed a class like the Programming Languages class you described. It was half Haskell, half Prolog. By the end of the semester, most of my students were functionally literate in both languages, but I did not get the impression that the students I later encountered in other classes had internalized the functional or logical/declarative paradigms particularly well -- e.g., I would expect most of them to struggle with Clojure. I'd strongly recommend following up on that class with SICP, as sketerpot suggested, and maybe broadening your experience with Prolog. In a decade of professional software engineering I've only run into a handful of situations where logic programming was the best tool for the job, but knowing how to work in that paradigm made a huge difference, and it's getting more common.
Comment author:Viliam_Bur
25 September 2013 09:06:34AM
*
1 point
[-]
In school you are typically taught making small projects. Make a small algorithm, or a small demonstration that you can display an information in an interactive user interface.
In real life (at least in my experience), the applications are typically big. Not too deep, but very wide. You don't need complex algorithms; you just have dozens of dialogs, hundreds of variables and input boxes, and must create some structure to prevent all this falling apart (especially when the requirements keep changing while you code). Also you have a lot of supporting functionality in a project (for example: database connection, locking, transactions, user authentification, user roles and permissions, printing, backup, export to pdf, import from excel, etc.). Again, unless you have structure, it falls apart. And you must take good care of many things that may go wrong (such as: if the user's web browser crashes, so the user cannot explicitly log out of the system, the edited item should not remain locked forever).
To be efficient at this, you also need to know some tools for managing projects. Some of those tools are Java-specific, so your knowledge of Java should include them; they are parts of the Java ecosystem. You should use javadoc syntax to write comments; JUnit to write unit tests; Maven to create and manage projects, some tools to check your code quality, and perhaps even Jenkins for continuous integration. Also the things you already have on your list (HTML, CSS, SQL, git) will be needed.
To understand creating web applications in Java, you should be able to write your own servlet, and perhaps even write your own JSP tag. Then all the frameworks are essentially libraries built on this, so you will be able to learn them as needed.
As an exercise, you could try to write a LessWrong-like forum in Java (with all its functionality; of course use third-party libraries where possible); with javadoc and unit tests. If you can do that, you are 100% ready for the industry (the next important skill you will need is leading a team of people who don't have all of these skills yet, and then you are ready for the senior position). But that can take a few months of work.
There is another aspect of working on big projects that seems equally important. What you are talking about I'd call "design", the skill of organizing the code (and more generally, the development process) so that it remains intelligible and easy to teach new tricks as the project grows. It's the kind of thing reading SICP and writing big things from scratch would teach.
The other skill is "integration", ability to open up an unfamiliar project that's too big to understand well in a reasonable time, and figure out enough about it to change what you need, in a way that fits well into the existing system. This requires careful observation, acting against your habits, to conform to local customs, and calibration of the sense of how well you understand something, so that you can judge when you've learned just enough to do your thing right, but no less and not much more. Other than on a job, this could be learned by working a bit (not too much on each one, lest you become comfortable) on medium/large open source projects (implementing new features, not just fixing trivial bugs), possibly discarding the results of the first few exercises.
Comment author:Error
24 September 2013 06:22:29PM
2 points
[-]
I am wondering what a PD tournament would look like if the goal was to maximize the score of the group rather than the individual player. For some payoff matrices, always cooperate trivially wins, but what if C/D provides a greater net payoff than C/C, which in turn is higher than D/D? Does that just devolve to the individual game? It feels like it should, but it also feels like giving both players the same goal ought to fundamentally change the game.
I haven't worked out the math; the thought just struck me while reading other posts.
what if C/D provides a greater net payoff than C/C
The Prisoner's Dilemma is technically defined as requiring that this not be the case, precisely so that one doesn't ahve to consider the case (in iterated games) where the players agree to take turns cooperating and defecting. You are considering a related but not identical game. Which is of course totally fine, just saying.
If you allow C/D to have a higher total than CC, then it seems there is a meta-game in coordinating the taking-turns - "cooperating" in the meta-game takes the form of defecting only when it's your turn. Then, the players maximise both their individual scores and the group score by meta-cooperating.
Comment author:Coscott
24 September 2013 06:56:14PM
3 points
[-]
The game you are talking about should not be called PD.
The solution will be for everyone to pick randomly, (weighted based on the difference in C/C and D/D payoff) until they get a C/D outcome, and then just picking the same thing over and over. (This isn't a unique solution, but it seems like a Schelling point to me.)
Comment author:Coscott
24 September 2013 06:19:30PM
6 points
[-]
I was wondering if anyone had any opinions/observations they would be would be willing to share about Unitarian Universalism. My fiancee is an atheist and a Unitarian Universalist, and I have been going to congregation with her for the last 10 months. I enjoy the experience. It is relaxing for me, and a source of interesting discussions. However, I am trying to decide if my morality has a problem with allying myself this community. I am leaning towards no. I feel like they are doing a lot of good by providing a stepping stone out of traditional religion for many people. I am however slightly concerned about what effect this community might have on my future children. I would love to debate this issue with anyone who is willing, and I think that would be very helpful for me.
Comment author:shminux
24 September 2013 10:01:42PM
1 point
[-]
The UU "Seven Principles and Purposes" seem like a piece of virtue ethics. If you don't mind this particular brand of it, then why not.
From Wikipedia:
"We come from One origin, we are headed to One destiny, but we cannot know completely what these are, so we are to focus on making this life better for all of us, and we use reason when we can, to find our way. "
If you discard the ornamental fluff in this "philosophy" and "focus on making this life better for all of us", then it's as good a guideline as any.
Comment author:Coscott
24 September 2013 10:37:43PM
2 points
[-]
As I said in responding to another comment, this is the part of UU that I relate to. However, the problem is that while UUs might be slightly above average rationality, "we can use reason when we can" means that beliefs come from thinking for yourself as opposed to reading e.g. the bible, and the stuff they come up with by thinking for themselves is usually not all that great by my standards. I am worried that I am giving UU too much credit because they happen to use the word "reason," when in reality they mean something very different than what I mean.
Comment author:Viliam_Bur
25 September 2013 08:12:23AM
*
5 points
[-]
the stuff they come up with by thinking for themselves is usually not all that great by my standards
They are just humans, aren't they? I am afraid that at this moment it is impossible to assemble a large group of people who would all think on LW-level. Not including obvious bullshit, or at least not making it a core of group beliefs, is already a pretty decent result for a large group of humans.
Perhaps one day CFAR will make a curricullum that can replicate rationality quickly (at least on suitable individuals) and then we can try to expand rationality to mass level. Until then, having a group without obviously insane people in power is probably the best you can get.
I am worried that I am giving UU too much credit because they happen to use the word "reason,"
You already reflected on this, so just: don't emotionally expect what is not realistic. They are never going to use reason as you define it. But the good news is that they will not punish you for using reason. Which is the best you can expect from a religious group.
Comment author:Viliam_Bur
25 September 2013 10:59:57AM
*
2 points
[-]
You inspired me to google whether there are UU in Slovakia. None found, although there are some in the neighbor countries: Czech, Hungary.
I wonder whether it would be possible to create a local branch here, to draw people, who just want to feel something religious but don't want to belong to a strict organization, away from Catholicism (which in my opinion has huge negative impacts on the country). There seem to be enough such people here, but they are not organized, so they usually stay within the churches of their parents.
The problem is, I am not the right person to start something like this, because I don't feel any religious need; for me the UU would be completely boring and useless. I am not sure if I could pretend interest at least for long enough to collect a group of people, make them interested in the idea, put them into contact with neighbor UUs, and then silently sneak away. ;-)
Also, I suspect the religion is not about ideas, but about organized community. (For example, the only reason you are interested in UU is because your fiancee is. And your fiancee probably has similar reasons, etc.) Starting a new religious community where no support exists, would need a few people willing to sacrifice a lot of time and work -- in other words, true believers. Later, when the community exists, further recruitment should be easier.
Well, at least this is the first social engineering project I feel I could have higher than 1% chance of doing successfully, if I decided to. (Level 3 of Yudkowsky Ambition Scale in a local scope?)
Comment author:Coscott
25 September 2013 06:10:11PM
*
1 point
[-]
Here are some things you should know:
Unitarian Universalism is different from Unitarianism. UU is basically a spin-off of Unitarianism from when they combined with Universalism in 1961 in North America. As a result, there are very few UU churches outside of NA.
Unitarianism is on average more Christian than UU, and there exist some UU congregations that also have a Christian slant. (The one I was talking about is not one of them) I have also heard that some UU churches are considerably more tolerant of everything other than Christianity than they are of Christianity. (Probably because their members were escaping Christianity) The views change from congregation to congregation because they are decided from the bottom up from the local congregants.
The UUA has free resources, such as transcribed sermons you could read, for people who wanted to start a congregation.
I think I gain some stuff from it that is not directly from my fiancee. I don't know if it is enough to continue going on my own. It is a community that roughly follows strategy 1 of the belief signalling trilemma, which I think is nice to be in some of the time. The sermons are usually way too vague, but have produced interesting thoughts when I added details to them on my own and then analyzed my version. There is also (respectful) debating, which I think I find fun regardless of who I am debating with. I like how it enables people to share significant highs or lows in their life, so the community can help them. There are pot-lucks and game nights, and courses on philosophy and religions. There is also singing, which I am not so crazy about, but my fiancee loves.
Comment author:Coscott
24 September 2013 11:05:23PM
1 point
[-]
They are reaching many of the wrong conclusions. I think this might be because their definition of "use reason" is just to think about their beliefs, which is not enough. When I say "use reason," I mean thinking about my beliefs in a specific way. That specific way is something that I think a lot of us have roughly in common on less wrong, and it would take to long to describe all the parts of it now. To point out a specific example, one UU said to me "There are some mysteries we can never get answers to, like what happens when we die," and then later "I am a firm believer in reincarnation, because I have had experiences where I felt my past lives." I never questioned to her that she had those experiences, and argued a bit and was able to get her to change her first statement, because reincarnation experiences were evidence against it, which I thought was an improvement. However, not noticing how contradictory these beliefs were is not something I would call "reason."
Perhaps what is bothering me is a difference in cognitive ability, and UUs version of "reason" is as much as I can expect from the average person. Or, perhaps these are people who are genuinely interested in being rational, and would be very supportive of learning how, but have not yet learned. It could also be that they just want to say that they are using "reason."
Comment author:Coscott
24 September 2013 11:31:12PM
1 point
[-]
Not much. That is a good idea. I was considering hosting a workshop on rationality through the church. If I ever go through with it, that will probably be part of it. My parents' UU church had a class on what QM teaches us about theology and philosophy.
Comment author:TheOtherDave
24 September 2013 08:51:18PM
0 points
[-]
I'm not really invested enough in the question to debate it, but I know plenty of atheists (both with and without children) who are active members of UU churches because they get more of the things they value from a social community there than they do anywhere else, and this seems entirely sensible to me. What effects on your future children are you concerned about?
Comment author:Coscott
24 September 2013 10:27:39PM
1 point
[-]
I am concerned that they will treat supernatural claims as reasonable. I consider myself rational enough to be able to put up with some of the crazy stuff many UU individuals believe (beliefs not shared by the community). I am worried that my children might believe them, and even more worried that might not look at beliefs critically enough.
Comment author:TheOtherDave
25 September 2013 01:01:52AM
2 points
[-]
Yes, they will treat supernatural claims as reasonable, and expect you (and your kids) to treat them that way as well, at least in public, and condemn you (and your kids) for being rude if you (they) don't.
If you live in the United States, the odds are high that your child's school will do the same thing.
My suggestion would be that you teach your children how to operate sensibly in such an environment, rather than try to keep them out of such environments, but of course parenting advice from strangers on the Internet is pretty much worthless.
Comment author:Coscott
25 September 2013 01:31:55AM
1 point
[-]
Yes, they will treat supernatural claims as reasonable, and expect you (and your kids) to treat them that way as well, at least in public, and condemn you (and your kids) for being rude if you (they) don't.
I actually do not think that is true. They will treat supernatural claims as reasonable, but would not condemn me for not treating them as reasonable. They might condemn me for being avoidably rude, but I don't even know about that.
We actually plan on homeschooling, but that is not for the purpose of keeping kids out of an insane environment as much as trying to teach them actually important stuff.
Comment author:Prismattic
25 September 2013 03:11:00AM
1 point
[-]
If your elementary-schooler goes around insistently informing the other little kids that Santa isn't real, you will likely be getting an unhappy phone call from the school, never mind the religious bits that the adults actually believe.
Comment author:ChristianKl
24 September 2013 08:35:53PM
0 points
[-]
However, I am trying to decide if my morality has a problem with allying myself this community.
What's your moral system? If you get value from the community it's probably more moral to focus your efforts on donating more for bed nets than on the effect that you have on the world through being a member of that community.
Comment author:Coscott
24 September 2013 09:17:14PM
1 point
[-]
What's your moral system?
Wouldn't it be nice if I understood that?
I think it is not productive to analyze anything as being moral by comparing it to working for money for bed nets. Most everything fails.
I think I might have made a mistake in saying this was a moral issue. I think it is more of an identity issue. I the the consequences for the world of me being Unitarian are minimal. Most of the effect is on me. I think the more accurate questions I am trying to answer are:
Are Unitarians good under my morals? Do their shared values agree with mine enough that I should identify as being one?
I think the reason this is not a instrumental issue for me, and rather an epistemic issue, is because I believe the fact that I will continue to go to congregation is already decided. It is a fun bonding time which sparks lots of interesting philosophical discussion. If I were not in my current relationship, I would probably bring that question back on the table.
I realize that this does not change the fact that the answer is heavily dependent on my moral system, so I will try to comment on that with things that are specific to UU.
I generally agree with the 7 principles of UU, with far more emphasis on "A free and responsible search for truth and meaning." However, these principles are not particularly controversial, and I think most people would agree with most of them. The defining part of UU, I think, is the strategy of "Let's agree to disagree on the metaethics and metaphysics, and focus on the morals themselves which are what matters." I feel like this could be a good thing to do some of the time. Ignore the things that we don't understand and agree on, and work on making the world better using the values we do understand and agree on. However, I am concerned that perhaps the UU philosophy is not just to ignore the metaethics and metaphysics temporarily so we can work together, but rather to not care about these issues and not be bothered by the fact that we appear confused. This I do not approve of. These are important questions, and you don't know if what you don't know can't hurt you.
Comment author:Coscott
24 September 2013 09:41:07PM
0 points
[-]
They are important because they are confusing. Of all the things that might possibly cause a huge change to my decision making, I think understanding open questions about anthropic reasoning is probably at the top of the list. I potentially lose a lot by not pushing these topics further.
Comment author:ChristianKl
24 September 2013 11:34:36PM
1 point
[-]
Of all the things that might possibly cause a huge change to my decision making, I think understanding open questions about anthropic reasoning is probably at the top of the list.
For most people I don't think that meta ethical considerations have a huge effect on their day to day decision making.
Metaphysics seems interesting. Do you think that you might start believing in paranormal stuff if you spend more effort on investigating metaphysical questions? What other possible changes in your metaphysical position could you imagine that would have a huge effects on your decision making?
I potentially lose a lot by not pushing these topics further.
Going to UU won't stop you from discussing those concepts on LessWrong.
I'm personally part of diverse groups and don't expect any one group to fulfill all my needs.
Comment author:Coscott
24 September 2013 11:49:08PM
0 points
[-]
I do not think that I will start believing in paranormal stuff. I do not know what changes might arise from changes in my metaphysical position. I was not trying to single out these things as particularly important as much as I am just afraid of all things that I don't know.
Going to UU won't stop you from discussing those concepts on LessWrong.
I'm personally part of diverse groups and don't expect any one group to fulfill all my needs.
This is good advice. My current picture of UU is that it has a lot of problems, most of which are not problems for me personally, since I am also a rational person and in LW. I think UU and LW are the only groups which I am actively a part of other than my career. I wonder what other viewpoints I am missing out on.
The sympathetic nervous system activation that helps you tense up to take a punch or put on a burst of speed to outrun an unfriendly dog isn't quite so helpful when you're bracing to defend yourself against an intangible threat, like, say, admitting you need to change your mind.
Once of CFAR's instructors will walk participants through the biology of the fight/flight/freeze response and then run interactive practice on how to deliberately notice and adjust your response under pressure. The class is capped at 12, due to its interactive nature.
Comment author:benkuhn
24 September 2013 06:39:54PM
12 points
[-]
An iteration of this class was one of the high points of the May 2013 CFAR retreat for me. It was extraordinarily helpful in helping me get over various aversions, be less reactive and more agenty about my actions, and generally enjoy life more. For instance, I gained the ability to enjoy, or substantially increased my enjoyment of, several activities I didn't particularly like, including:
improv games
additional types of social dance
conversations with strangers
public speaking
It also helped substantially with CFAR's comfort zone expansion exercises. Highly recommended.
Comment author:benkuhn
25 September 2013 11:07:20PM
4 points
[-]
A bit. Most of the techniques were developed by one of the CFAR instructors, and I can't reproduce his instruction, nor do I want to steal his thunder. The closest thing you can find out more about is mindfulness-based stress reduction. (But the real value of the class is being able to practice with Val and ask him questions, which unfortunately I can't do justice to in a LW comment.)
Comment author:Torello
24 September 2013 01:46:00PM
15 points
[-]
Robin Hanson defines “viewquakes” as "insights which dramatically change my world view."
Are there any particular books that have caused you personally to experience a viewquake?
Or to put the question differently, if you wanted someone to experience a viewquake, can you name any books that you believe have a high probability of provoking a viewquake?
Comment author:pragmatist
26 September 2013 10:23:00PM
*
2 points
[-]
Reading Wittgenstein's Philosophical Investigations prompted the biggest viewquake I've ever experienced, substantially changing my conception of what a properly naturalistic worldview looks like, especially the role of normativity therein. I'm not sure I'd assign it a high probability of provoking a viewquake in others, though, given his aphoristic and often frustratingly opaque style. I think it worked for me because I already had vague misgivings about my prior worldview that I was having trouble nailing down, and the book helped bring these apprehensions into focus.
A more concrete scientific viewquake: reading Jaynes, especially his work on statistical mechanics, completely altered my approach to my Ph.D. dissertation (and also, incidentally, led me to LW).
Comment author:passive_fist
26 September 2013 02:35:44AM
*
1 point
[-]
The biggest world-shattering book for me was the classic, Engines of Creation by K. Eric Drexler. I was just 21 and the book had a large impact on me. Nowadays though, the ideas in the book are pretty mainstream, so I don't think it would have the same effect for a millenial.
While it's overoptimistic and generally a bit all over the place, Kurzweil's The Singularity is Near might still be the most bang for the book single introduction to the "humans are made of atoms" mindset you can throw at someone who is reasonably popular science literate but hasn't had any exposure to serious transhumanism.
It's kinda like how The God Delusion might not be the most deep book on the social psychology of religion, but it's still a really good book to give to the smart teenager who was raised by fundamentalists and wants to be deprogrammed.
Comment author:passive_fist
26 September 2013 08:55:27AM
0 points
[-]
After reading Engines of Creation, The Singularity is Near didn't have nearly as much effect on me. I just thought, "Well, duh" while reading it. I can imagine how it would affect someone with little exposure to transhumanist ideas though. I agree with you that it's a good choice.
I'm not sure if it is possible or has a high chance of success to give someone a book in the hope of provoking a viewquake. Most people would detect being influenced. Compare with trying to give people the bible to convert them doesn't work either even though it also could provoke a viewquake - after all the bible is also much different from other common literature.
To actually provoke a viewquake it must be a missing piece either connecting pieces or buildig on them and thus causing an aha moment. And the trouble is: This depends critically on your prior knowledge thus not every book will work on everyone.
I know of a few former-theists whose atheist tipping point was reading Susan Blackmore's The Meme Machine. I recall being fairly heavily influenced by this myself when I first read it (about twelve years ago, when it was one of only a small handful of popular books on memetics), but suspect I might find it a bit tiresome and erroneous if I were to re-read it.
Comment author:JoshuaZ
25 September 2013 02:06:19PM
3 points
[-]
Primarily how much biology and ecosystems could have largescale impacts on society and culture in ways which stayed around even after the underlying issue was no longer around. One of the examples there is how the prevalence of diseases (yellow fever, malaria especially) had long-term impacts on differences in North American culture in both the South and the North.
Comment author:FiftyTwo
24 September 2013 12:50:28PM
9 points
[-]
I have a half written post about the cultural divisions in the environmentalist movement that I intend to put on a personal blog in the nearish future. (Tl;Dr there "Green" groups who advocate different things in a very emotional/moral way vs. "Scientific" environmentalists)
I've been thinking about comparisons between the structure of that movement and how future movements might tackle other potential existential risks, specifically UFAI. Would people be interested in a post here specifically discussing that?
how future movements might tackle other potential existential risks, specifically UFAI
Is there anything you've learnt that's particular about groups trying to tackle x-risk in particular? If not, you could just make a post describing what you've learnt about groups that challenge big problems. Generality at no extra cost.
Comment author:FiftyTwo
24 September 2013 06:49:45PM
*
0 points
[-]
Political and social movements as a whole are so massive and varied that I don't think I could really give much non-trivial analysis. I'm not sure there's really a separate category of 'big problem' that can be separated out, all movements think their problem is big, and all big problems are composed of smaller problems.
I make the comparison between UFAI and environmentalism because its probably the only major risk that presently is really in public consciousness,* so provides a model of how people will act in response. E.g. the solutions that technical experts favour may not be the ones that the public support even if they agree on the problem.
*A few decades ago nuclear weapons might have also been analogous, but, whether correctly or not, the public perception of their risk has diminished.
Comment author:FiftyTwo
24 September 2013 06:44:05PM
1 point
[-]
I wouldn't say misanthropic, maybe more a matter of scope insensitivity and an overromanticised view of the 'natural' state of the world. But I think they genuinely believe it would make humans better off, whereas truly misanthropic greens wouldn't care.
Comment author:fubarobfusco
24 September 2013 06:05:43PM
4 points
[-]
From what I can tell, it's actually a teeny-tiny number of people, but they get disproportional media coverage for reasons that should be obvious considering the interests of those doing the covering.
Comment author:blacktrance
25 September 2013 08:04:21PM
2 points
[-]
FWIW, while I've not met many misanthropic greens in real life, about half of the greens I've met on the Internet range from mildly to extremely misanthropic.
Comment author:cousin_it
24 September 2013 12:44:55PM
3 points
[-]
Is the problem of measuring rationality related to the problem of measuring programming skill? Both are notoriously hard, but I can't tell if they're hard for the same reason...
Comment author:cousin_it
24 September 2013 11:37:44AM
*
1 point
[-]
Ilya Shkrob's In The Beginning is an attempt to reconcile science and religion. It's the best such attempt that I've seen, better than I thought possible. If you enjoy "guru" writers like Eliezer or Moldbug, you might enjoy this too.
Comment author:cousin_it
24 September 2013 02:37:31PM
*
6 points
[-]
I haven't found one, so I'll try to summarize here:
"Prokaryotic life probably came to Earth from somewhere else. It was successful and made Earth into a finely tuned paradise. (A key point here is the role of life in preserving liquid water, but there are many other points, the author is a scientist and likes to point out improbable coincidences.) Then a tragic accident caused individualistic eukaryotic life to appear, which led to much suffering and death. Evolution is not directionless, its goal is to correct the mistake and invent a non-individualistic way of life for eukaryotes. Multicellularity and human society are intermediate steps to that goal. The ultimate goal is to spread life, but spreading individualistic life would be bad, the mistake has to be corrected first. Humans have a chance to help with that process, but aren't intended to see the outcome."
The details of the text are more interesting than the main idea, though.
Comment author:knb
25 September 2013 06:38:39AM
3 points
[-]
I like this. Like all good religion, it's an idea which feels true and profound but is also clearly preposterous.
It reminds me of some concepts in animes I liked, like the Human Instrumentality Project in Neon Genesis Evangelion and the Ragnarok Connection in Code Geass.
Comment author:fubarobfusco
24 September 2013 06:12:14PM
3 points
[-]
Sounds like an attempt to reconcile, not science and religion in general, but specifically science and the Christian concepts of the Fall and original sin; or possibly some sort of Gnosticism.
(Aleister Crowley made similar remarks about individuality as a disease of life in The Book of Lies, but didn't go so far as to attribute it to eukaryotes.)
Comment author:knb
25 September 2013 06:16:05AM
0 points
[-]
Sounds like an attempt to reconcile, not science and religion in general, but specifically science and the Christian concepts of the Fall and original sin; or possibly some sort of Gnosticism.
Well the relevant story (God banishing Adam and Eve from the Garden of Eden) is in Genesis, so it's in the Torah as well. Gnostics considered the Fall a good thing--it freed humanity from the Demiurge's control.
Comment author:LM7805
24 September 2013 03:01:07PM
7 points
[-]
Hold on, is he trying to imply that prokaryotes aren't competitive? Not only does all single-celled life compete, it competes at a much faster pace than multicellular life does.
Comment author:Kaj_Sotala
25 September 2013 08:12:01AM
*
5 points
[-]
Based on that summary, I'd say that it's interesting because it draws on enough real science to be superficially plausible, while appealing to enough emotional triggers to make people want to believe in it enough that they'll be ready to ignore any inconsistencies.
Superficially plausible: Individuals being selfish and pursuing their own interest above that of others is arguably the main source of suffering among humans, and you can easily generalize the argument to the biosphere as a whole. Superorganisms are indeed quite successful due to their ability to suppress individualism, as are multi-celled creatures in general. Humans do seem to have a number of adaptations that make them more successful by reducing individualistic tendencies, and it seems plausible to claim that even larger superorganisms with more effective such adaptations could become the dominant power on Earth. If one thinks that there is a general trend of more sophisticated superorganisms being more successful and powerful, then the claim that "evolution is not directionless" also starts to sound plausible. The "humans have a chance to help with that process but aren't intended to see the outcome" is also plausible in this context, since a true intelligent superorganism would probably be very different from humanity.
"Evolution leads to more complex/intelligent creatures and humans are on top of the hierarchy" is an existing and widely believed meme that similarly created a narrative that put humans on top of the existing order, and this draws on that older meme in two ways: it feels plausible and appealing for many of the same reasons why the older meme was plausible, and anyone who already believed in the old meme will be more inclined to see this as a natural extension of the old one.
Emotional triggers: It constructs a powerful narrative of progress that places humans at the top of the current order, while also appealing to values related to altruism and sacrificing oneself for a greater whole, and providing a way to believe that things are purposeful and generally evolving towards the better.
The notion of a vast superorganism that will one day surpass and replace humanity also has the features of vastness and incomprehensibility, two features which Keltner and Haidt claim form the heart of prototypical cases of awe:
Vastness refers to anything that is experienced as being much larger than the self, or the self's ordinary level of experience or frame of reference. Vastness is often a matter of simply physical size, but it can also involve social size such as fame, authority, or prestige. Signs of vastness such as loud sounds or shaking ground, and symbolic markers of vast size such as a lavish office can also trigger the sense that one is in the presence of something vast. In most cases vastness and power are highly correlated, so we could have chosen to focus on power, but we have chosen the more perceptually oriented term "vastness" to capture the many aesthetic cases of awe in which power does not seem to be at work.
Accommodation refers to the Piagetian process of adjusting mental structures that cannot assimilate a new experience (Piaget & Inhelder, 1966/1969). The concept of accommodation brings together many insights about awe, such as that it involves confusion (St. Paul) and obscurity (Burke), and that it is heightened in times of crisis, when extant traditions and knowledge structures do not suffice (Weber). We propose that prototypical awe involves a challenge to or negation of mental structures when they fail to make sense of an experience of something vast. Such experiences can be disorienting or even frightening, as in the cases of Arjuna and St. Paul, since they make the self feel small, powerless, and confused. They also often involve feelings of enlightenment, and even rebirth, when mental structures expand to accomodate truths never before known. We stress that awe involves a need for accomodation, which may or may not be satisfied. The success of one's attempts at accomodation may partially explain why awe can be both terrifying (when one fails to understand) and enlightening (when one succeeds).
The more I think of it, the more impressive the whole thing starts to feel like, in the "memeplex that seems very effectively optimized for spreading and gaining loyal supporters" sense.
Comment author:Viliam_Bur
24 September 2013 09:38:22AM
*
8 points
[-]
Just thinking... could it be worth doing a website providing interesting parts of settled science for laypeople?
If we take the solid, replicated findings, and remove the ones that laypeople don't care about (because they have no use for them in everyday life)... how much would be left? Which parts of human knowledge would be covered most?
I imagine a website that would first provide a simple explanation, and then a detailed scientific explanation with references.
Why? Simply to give people idea that this is science that is useful and trustworthy -- not the things that are too abstract to understand or use, and not some new hypotheses that will be disproved tomorrow. Science, as a friendly and trustworthy authority. To get some respect for science.
Comment author:Moss_Piglet
24 September 2013 03:53:44PM
*
3 points
[-]
People used to respect Science, as an abstract mysterious force which Scientists could augur and even use to invoke the odd miracle. In a way, people in the nineteenth and early twentieth centuries saw Scientists in a similar way to how pre-Christian Europe saw priests; you need one on hand when you make a decision, and contradict them at your peril, but ultimately they're advisers rather than leaders.
That attitude is mostly gone now, but it could be useful to bring it back. Ordinary people are not going to provide useful scientific insights or otherwise helpfully (1) participate in the process, so keeping them out of the way and deferential is going to be more valuable then trying to involve them. There seems to be a J curve between 100% scientific literacy and old-school Science-ism, and it seems to me at least that climbing back up to an elitist position is the option most likely to actually work in our lifetimes.
If anything, the more easily lay people can lay their hands on scientific materials the worse the situation is; the Dunning-Kruger effect and a lack of actual scientific training / mental ability means that laypeople are almost certain to misinterpret what they read in ways which disagree with the actual scientific consensus. Just look at the huge backlash against biology and psychometry these days; most of the people I've argued with in person or online have no actual qualifications but feel entitled to opinions on the issues because they stumbled through an article on pub-med and know the word methodology.
Comment author:satt
28 September 2013 02:57:30AM
2 points
[-]
People used to respect Science, as an abstract mysterious force which Scientists could augur and even use to invoke the odd miracle. In a way, people in the nineteenth and early twentieth centuries saw Scientists in a similar way to how pre-Christian Europe saw priests; you need one on hand when you make a decision, and contradict them at your peril, but ultimately they're advisers rather than leaders.
That attitude is mostly gone now,
Is this true? It pattern matches to a generic things-were-better-in-the-old-days complaint and I'm not sure how one would get a systematic idea of how much people trusted science & scientists 100-200 years ago.
(Looking at the US, for instance, I only find results from surveys going back to the late 1950s. Americans' confidence in science seems to have fallen quite a lot between 1958 and 1971-2, probably mostly in the late 1960s, then rebounded somewhat before remaining stable for the last 35-40 years. I note that the loss of trust in science that happened in the 1960s wasn't science-specific, but part of a general loss of confidence experienced by almost all institutions people were polled about.)
but it could be useful to bring it back. Ordinary people are not going to provide useful scientific insights or otherwise helpfully (1) participate in the process, so keeping them out of the way and deferential is going to be more valuable then trying to involve them.
Comments (261)
Much to my surprise, Richard Dawkins and Jon Stewart had a fairly reasonable conversation about existential risk on the Sept. 24, 2013 edition of The Daily Show. Here's how it went down:
STEWART: Here's my proposal... for the discussion tonight. Do you believe that the end of our civilization will be through religious strife or scientific advancement? What do you think in the long run will be more damaging to our prospects as a human race?
In reply, Dawkins says Martin Rees (of CSER) thinks humanity has a 50% chance of surviving the 21st century, and one cause for such worry is that powerful technologies could get into the hands of religious fanatics. Stewart replies:
STEWART: ...[But] isn't there a strong probability that we are not necessarily in control of the unintended consequences of our scientific advancement?... Don't you think it's even more likely that we will create something [for which] the unintended consequence... is worldwide catastrophe?
DAWKINS: That is possible. It's something we have to worry about... Science is the most powerful to do whatever you want to do. If you want to do good, it's the most powerful way to do good. If you want to do evil, it's the most powerful way to do evil.
STEWART: ...You have nuclear energy and you go this way and you can light the world, but you go this [other] way, and you can blow up the world. It seems like we always try [the blow up the world path] first.
DAWKINS: There is a suggestion that one of the reasons that we don't detect extraterrestrial civilizations is that when a civilization reaches the point where it could broadcast radio waves that we could pick up, there's only a brief window before it blows itself up... It takes many billions of years for evolution to reach the point where technology takes off, but once technology takes off, it's then an eye-blink — by the standards of geological time — before...
STEWART: ...It's very easy to look at the dark side of fundamentalism... [but] sometimes I think we have to look at the dark side of achievement... because I believe the final words that man utters on this Earth will be: "It worked!" It'll be an experiment that isn't misused, but will be a rolling catastrophe.
DAWKINS: It's a possibility, and I can't deny it. I'm more optimistic than that.
STEWART: ... [I think] curiosity killed the cat, and the cat never saw it coming... So how do we put the brakes on our ability to achieve, or our curiosity?
DAWKINS: I don't think you can ever really stop the march of science in the sense of saying "You're forbidden to exercise your natural curiosity in science." You can certainly put the brakes on certain applications. You could stop manufacturing certain weapons. You could have... international agreements not to manufacture certain types of weapons...
And then the conversation shifted back to religion. I wish Dawkins had mentioned CSER's existence.
And then later in the (extended, online-only) interview, Stewart seemed unsure as to whether consciousness persisted after one's brain rotted, and also unaware that 10^22 is a lot bigger than a billion. :(
I'm beginning to think that we shouldn't be surprised by reasonably intelligent atheists having reasonable thoughts about x-risk. Both of the two reasonably intelligent, non-LWer atheists I talked to in the past few weeks about LW issues agreed with everything I said on them and said that it all seemed sensible and non-surprising. Most LW users started out as reasonably intelligent atheists. Where did the "zomg everyone is so dumb and only LW can think" meme originate from, exactly? Is there any hard data on this topic?
Jon's what I call normal-smart. He spends most of his time watching TV, mainly US news programs, and they're quite destructive to rational thinking, even if the purpose is for comedic fodder and to discover hypocrisy. He's very tech averse, letting the guests he has on the show come in with information he might use, trusting (quite good) intuition to fit things into reality. As such, I like to use him as an example of what more normal people feel about tech / geek issues.
Every time he has one of these debates, I really want to sit down as moderator so I can translate each side, since they often talk past each other. Alas, it's a very time restricted format, and I've only seen him fact check on the fly once (Google, Wikipedia).
The number thing was at least partly a joke, along the lines of "bigger than 10 doesn't make much sense to me" - scope insensitivity humor. I've done similar before.
I'm seeing a lot of comments in which it is implicitly assumed that most everyone reading lives in a major city where transportation is trivial and there is plenty of memetic diversity. I'm wondering if this assumption is generally accurate and I'm just the odd one out, or if it's actually kinda fallacious.
(I can't seem to figure out poll formatting. Hm.)
A lot of the CFAR/MIRI core lives in Berkeley.
A city of ~200,000 people if you include the outlying rural areas, in which you can go from the several block wide downtown to farmland in 4-5 miles in the proper directions. Fifteen minutes from another city of 60,000 which is very much a state college town. Forty minutes away from a city of nearly 500,000 people.
Granted the city of ~200,000 has a major university and a number of biotech companies.
I think living in a big city is the standard that most people here consider normal. It's like living in the first world. We know that there are people from India who visit but we still see being from the first world as normal.
When you have the choice between living in a place with memetic diversity or not living in such a place the choice seems obvious.
Its somewhat inaccurate in my case (I live in the suburbs of a semi-major city).
I've been working on a series of videos about prison reform. During my reading, I came across an interesting passage from wikipedia:
What struck me was how preferable these punishments (except the hanging, but that was very rare) seem compared to the current system of massive scale long-term imprisonment. I would much rather pay damages and be whipped than serve months or years in jail. Oddly, most people seem to agree with Wikipedia that whipping is more "severe" than imprisonment of several months or years (and of course, many prisoners will be beaten or raped in prison). Yet I think if you gave people being convicted for theft a choice, most of them would choose the physical punishment instead of jail time.
It's not about harshness but about the concept of the important for physical integrity for human dignity.
Isn't freedom important for human dignity? It seems that any kind of punishment infringes on human dignity to some extent. Also, remember that prisoners are often subject to beatings and rape by other prisoners or guards--something which is widely known.
According to the standard moral doctrine it's not as central as bodily integrity. The state is allowed to take away freedom of movement but not bodily integrity or force people to work as slaves.
That's a feature of the particular way a prison is run.
There is a "standard moral doctrine"??
Yes, I consider things like the UN charter of human rights the standard moral doctrine.
Don't look at it from the perp point of view, look at it from an average-middle-class-dude or a suburban-soccer-mom point of view.
If there's a guy who, say, committed a robbery in your neighborhood, physical punishment may or may not deter him from future robberies. You don't know and in the meantime he's still around. But if that guy gets sent to prison, the state guarantees that he will not be around for a fairly long time.
That is the major advantage of prisons over fines and/or physical punishments.
This is totally obvious, I'm not sure why you felt you needed to point that out.
The point of my comment is that it is interesting that prison isn't viewed as cruel, even though it's obviously more harsh than alternatives. Obviously there are other reasons people prefer prison as a punishment for others.
That's only an advantage if the expected cost to society of keeping him in prison is less than the expected cost (broadly construed) to society of him keeping on robbing.
The relevant part: "look at it from an average-middle-class-dude or a suburban-soccer-mom point of view".
They do have political power and they don't do expected-cost-to-society calculations.
I guess I just hadn't interpreted "point of view" close enough to literally.
On the other hand, making people spend long periods of time in a low-trust environment surrounded by criminals seems to be a rather effective way of elevating recidivism when they do get out, so the advantage as implemented in our system is on rather tenuous footing.
And of course, the prison system comes with the major disadvantage that imprisoning people is a highly expensive punishment to implement.
I am not arguing that prisons are the proper way to deal with crime. All I'm saying is that arguments in favor of imprisonment as the preferred method of punishing criminals exist.
well, short of death.
Death is an existential punishment :-/
I'm reminded of the perennial objections to Torture vs Dust Specks to the effect that torture is a sacred anti-value which simply cannot be evaluated on the same axis as non-torture punishments (such as jail time, presumably), regardless of the severities involved..
Dunno about that -- peak-end rule.
There's a post on Overcoming Bias about this here.
The key quote, "Incarceration destroys families and jobs, exactly what people need to have in order to stay away from crime." If we had wanted to create a permanent underclass, replacing corporal punishment with prison would have been an obvious step in the process.
Obviously that's not why people find imprisonment so preferable to torture, though; TheOtherDave's "sacred anti-value" explanation is correct there. It would be interesting to know exactly how a once-common punishment became seen as unambiguously evil, though, in the face of "tough on crime" posturing, lengthening prison sentences, etc.
Because corporal punishment is an ancient display of power; the master holding the whip and the servant being punished for misbehavior. It's obviously effective, and undoubtedly more humane than incarceration, but it's also anathema to the morality of the "free society" where everyone is supposed to be equal and thus no-one can hold the whip.
(Heck, even disciplining a child is considered grounds to put the kid in foster care; if you want corporal punishment v incarceration, that's a hell of a dichotomy. And for every genuinely abused kid CPS saves, how many healthy families get broken up again?)
The idea is childish and unrealistic, but nonetheless popular because it plays on the fear and resentment people feel towards those above them. And in a democracy, popular sentiment is difficult to defeat.
Maybe it's a part of human hypocrisy: we want to punish people, but in a way that doesn't make our mirror neurons feel their pain. We want people to be punished, without thinking about ourselves as the kind of people who want to harm others. We want to make it as impersonal as possible.
So we invent punishments that don't feel like we are doing something horrible, and yet are bad enough that we would want to avoid them. Being locked behind bars for 20 years is horrible, but there is no speficic moment that would make an external observer scream.
It is, incidentally, not obvious to everyone that the desire to create a stable underclass didn't drive our play a significant role in our changing attitudes towards prisons... in fact, it's not even obvious to me, though I agree that they didn't play a significant role in our changing attitudes towards torturing criminals.
Why is "downvoted" so frequently modified by "to oblivion"? Can we please come up with a new modifier here? This is already a dead phrase, a cliche which seems to get typed without any actual thought going into it. Wouldn't downvoting "to invisibility" or "below the threshold" or even just plain "downvoting", no modifier, make a nice change?
I prefer 'to oblivion' over all your suggested alternatives. Why do you think it should change?
Slang vocabulary tends to become more consistent and repetitive over time in my experience. New phrases will appear and then go to fixation until everyone uses them. The only answer is to try to be as creative as possible in your own word choices.
The Relationship Escalator-- an overview of assumptions about relationships, and exceptions to the assumptions. The part that surprised me was the bit about the possibility of dialing back a relationship without ending it.
I ate something I shouldn't have the other day and ended up having this surreal dream where Mencius Moldbug had gotten tired of the state of the software industry and the Internet and had made his personal solution to it all into an actual piece of working software that was some sort of bizarre synthesis of a peer-to-peer identity and distributed computing platform, an operating system and a programming language. Unfortunately, you needed to figure out an insane system of phoneticized punctuation that got rewritten into a combinator grammar VM code if you wanted to program anything in it. I think there even was a public Github with reams of code in it, but when I tried to read it I realized that my computer was actually a cardboard box with an endless swarm of spiders crawling out of it while all my teeth were falling out, and then I woke up without ever finding out exactly how the thing was supposed to work.
Welcome to Urbit
I love the smell of Moldbug in the morning.
For an example of fully rampant Typical Mind Fallacy in Urbit, see the security document. About two-thirds of the way down, you can actually see Yarvin transform into Moldbug and start pontificating on how humans communicating on a network should work, and never mind the observable evidence of how they actually have behaved whenever each of the conditions he describes have obtained.
The very first thing people will do with the Urbit system is try to mess with its assumptions, in ways that its creators literally could not foresee (due to Typical Mind Fallacy), though they might have been reasonably expected to (given the real world as data).
I love those dream posts in the open threads.
Note that <explaining-the-joke>rirelguvat hc gb gur pbzchgre orvat n pneqobneq obk vf yvgrenyyl gehr.</explaining-the-joke>
I think that he actually implemented the spiders.
Video playback speed was mentioned on the useful habits repository thread a few weeks ago and I asked how I could do the same. Youtube's playback speed option is not available on all videos. Macs apparently have a plug-in you can download, I don't own a mac so that's not helpful. You could download the video then play it back, but that wastes time. I just learned a solution that works across all OS' with out the need to download the video first.
copy youtube url, ctrl v on vlc mainscreen
Petrov Day: http://lesswrong.com/lw/jq/926_is_petrov_day/
Does "Don't judge me for X" mean "Don't reduce my status in your mind to account for X"?
I think it means "Don't treat me as a stranger about whom all you knew was x"
I think it means "Don't update your opinion of me on the basis of evidence X".
[LINK] A day in the life of an NPC. http://www.npccomic.com/2011/10/19/beautiful-day/?utm_source=PW%2B&utm_medium=468x60&utm_content=Beauty&utm_campaign=PW%2B468x60%2BBeauty%2B
Less Wrong and its comments are a treasure trove of ethical problems, both theoretical and practical, and possible solutions to them (the largest one to my knowledge; do let me know if you are aware of a larger forum for this topic). However, this knowledge is not easy to navigate, especially to an outsider who might have a practical interest in it. I think this is a problem worth solving and one possible solution I came up with is to create a StackExchange-style service for (utilitarian, rationalist) ethics. Would you consider such a platform for ethical questions to be useful? Would you participate?
Possible benefits:
Making existing problems and their answers easier to navigate through the use of tagging and a stricter question-answer format.
Accumulation of new interesting problems.
The closest I have found is http://philosophy.stackexchange.com/questions/tagged/ethics, which doesn't appear to be very active and it being a part of a more traditional philosophy forum might be a hindrance.
Edit: a semi-relevant example.
Robin Williams is transhumanism friendly.
Anyone here familiar enough with General Semantics and willing to write an article about it? Preferably not just a few slogans, but also some examples of how to use it in real life.
I have heard it mentioned a few times, and it sounds to me a bit LessWrongish, but I admit I am too lazy now to read a whole book about it (and I heard that Korzybski is difficult to read, which also does not encourage me).
I just started rereading Science and Sanity and maybe the project will develop into a lesswrong post.
When it comes to Korzybski being difficult to read I think it's because the idea he advocates are complex.
As he writes himself:
It's a bit like learning a foreign language in a foreign language. In some sense that seems necessary. A lot of dumb down elements of General Semantics made it into popular culture but the core seems to be intrinsicly hard.
Non-violent communication is the intellectual heir of E-prime which was the heir of semantic concerns in General Semantics. Recent books on the subject are well reviewed. It is a useful tool in communicating across large value rifts.
Does Rosenberg cite Bourland (or Korzybski) anywhere? I thought these were independent inventions that happened upon some tangential ideas about non-judgmental thinking.
I had thought that there was a link in someone Rosenberg worked with developing it but now I can't find anything. The elimination of the "to-be" verb forms does not seem explicit in NVC methodology. I think you are correct and they are independent.
I don't think it makes sense to speak of a single framework as the heir of General Semantics. General Semantics influenced quite a lot.
General Semantics itself is quite complex. Nonviolent communication is pretty useless when you want to speak about scientific knowledge. General Semantics notions of thinking about relations and structure are on the other hand are quite useful.
A personal anecdote I'd like to share which relates to the recent polyphasic sleep post ( http://lesswrong.com/lw/ip6/polyphasic_sleep_seed_study_reprise/ ): My 7 year old son who always tended to sleep long and late seems to have developed segmented sleep by himself in the last two weeks. He claims to wake e.g. at 3:10 AM gets dressed, butters his school bread - and gets to bed again - in our family bed. It's no joke. He lies dressed in bed and his satchel is packed. And the interesting thing is: He is more alert and less bad tempered than before. He doesn't do afternoon naps though - at least none that I know of.
What can have caused this? Maybe the seed was that our children were always allowed to come into the family bed in the night (but only in the night) which they did often.
I remember reading somewhere (sorry, no link) that waking up at the midnight, and then going to sleep again after an hour or so, was considered normal a few hundred years ago. Now this habit is gone, probably because we make the night shorter using artificial lights.
Yes. I know. See e.g. http://en.wikipedia.org/wiki/Segmented_sleep I knew that beforehand. That was the reason I wasn't worried when my children woke up at night and crawled into our family bed (some other parents seem to worry.about the quality of their childrens sleep).
But I'm surprised that he actually segmented and that it went this far. I understood that artificial lighting - and we have enough of that - suppresses this segmentation.
Perhaps it is not the light per se, but the fact that when you stay awake at evening, and wake up on alarm clock in the morning, the body learns to give up the segmented sleep to protect itself from the sleep deprivation. Maybe the time interval for your children between going to sleep and having to wake up is large enough.
Possibly. But he has been a late riser always and he doesn't really go to sleep earler than before. In fact he get earler than before. But maybe his sleep pattern just changes due to normal development.
My older son (9 years) also sometimes gets up in the night to visit the family bed. But I guess he is not awake long. He likes to build things and read or watch movies (from our file server) until quite late in the evening (often 10 PM). We allow that because he has no trouble getting up early.
Do I have a bias or useful heuristic? If a signal is easy to fake, is it a bias to assume that it is disingenuous or is it an useful heuristic?
I read Robert Hanson's post about why there are so many charities specifically focusing on kids and he basically summed it up as signalling to seem kind, for potential mates, being a major factor. There were some good rebuttals in the comment sections but whether or not signalling is at play is not the point, I'm sure to a certain degree it is, how much? I don't know. The point is that I automatically dismiss the authenticity of a signal if the signal is difficult to authenticate. In this example it is possible for people to both, signal that they care about children for a potential mate, as well as actually really caring about children ( e.g. innate emotional response).
EDIT: Just to be clear, this is a question about signalling and how I strongly associate easy to fake signals with dishonest signalling, not about charities.
That's like asking whether someone is a freedom fighter or a terrorist.
Every heuristic involves a bias when you use it in some contexts.
Yes, but does it more often yield a satisfactory solution across many contexts if yes, then I'd label it a useful heuristic and if it is often wrong I would label it a bias.
You're not using your words as effectively as you could be. Heuristics are mental shortcuts, bias is a systematic deviation from rationality. A heuristic can't be a bias, and a bias can't be a heuristic. Heuristics can lead to bias. The utility of a certain heuristic might be evaluated based on an evaluation of how much computation using the heuristic saves versus how much bias using the heuristic will incur. Using a bad heuristic might cause an individual to become biased, but the heuristic itself is not a bias.
I agree with your last sentence. The important thing should be how much good does the charity really do to those children. Are they really making their lives better, or is it merely some nonsense to "show that we care"?
Because there are many charities (at least in my country) focusing on providing children things they don't really need; such as donating boring used books to children in orphanages. Obviously, "giving to children in orphanages" is a touching signal of caring, and most people don't realize that those children already have more books than they can read (and they usually don't wish to read the kind of books other people are throwing away, because honestly no one does). In this case, the real help to children in orphanages would be trying to change the legislation to make their adoption easier (again, this is an issue in my country, in your part of the world the situation may be different), helping them avoid abuse, or providing them human contact and meaningful activities. But most people don't care about the details, not even enough to learn those details.
I suspect there's also some sentimentality about books in play.
Yes, throwing a book away is nearly like burning it. Giving it to an orphanage is completely guilt free.
This depends on what you mean by "care", i.e., they care about children in the sense that they derive warm fuzzies from doing things that superficially seem to help them. They don't care in the sense that they aren't interested in how much said actions actually help children (or whether they help them at all).
I think that most people just never question the effectivity of the charities they donate to. It's a charity for xxx, of course it helps xxx!
And yet they question the effectivity of the things they do for themselves.
How comes practitioners of (say) homoeopathy haven't all gone bankrupt then?
Just because you question something, doesn't mean you reach the correct answer.
Well, because that's in near mode.
If I do something for myself, and there is no obvious result, I see that there is no obvious result, and i disappoints me. If I do something for other people, there is always an obvious result: I feel better about myself.
This is more or less the distinction I was going for.
Why isn't this equally true for doing things for oneself?
Because other people reward you socially for doing things for other people. If you do something good for person A, it makes sense for a person A to reward you -- they want to reinforce the behavior they benefit from. But it also makes sense for an unrelated person B to reward you, despite not benefiting from this specific action -- they want to reinforce the general algorithm that makes you help other people, because who knows, tomorrow they may benefit from the same algorithm.
The experimental prediction of this hypothesis is that the person B will be more likely to reward you socially for helping person A, if the person B believes they belong to the same reference class as person A (and thus it is more likely that an algorithm benefiting A would also benefit B).
Now who would have a motivation to reward you for helping yourself? One possibility is a person who really loves you; such person would be happy to see you doing things that benefit you. Parents or grandparents may be in that position naturally.
Another possibility is a person who sees you as a loyal member of their tribe, but not a threat. For such person, your success is a success of the tribe is their success. They benefit from having stronger allies; unless those allies becoming strong changes their position within the tribe. So one would help members of their tribe who are significantly weaker... or perhaps even significantly stronger... in either case the tribe becomes stronger and the relative position within the tribe is not changed. The first part is teachers helping their students, or tribe leaders helping their tribe except for their rivals; the second part is average tribe members supporting their leader.
Again, the experimental prediction would be that when you join some "tribe", the people stronger than you will support you at the beginning, but then will be likely to stab you in the back when you reach their level.
Now, how to use this knowledge for your success in the real life. We are influenced by social rewards whether we want it or not. One strategy could be trying to reward myself indirectly -- for example make a commitment that when I make something useful for myself, I will reward myself by exposing myself to a friendly social interaction. Second strategy is to find company of people who love me, by using "do they reward me for helping myself?" as a filter. (Problem is how to tell a difference between these people, and those that reward me for being a weak member of their tribe, and will later backstab me when I become stronger.) Third strategy is to find company of people much stronger than me with similar values. (And not forget to switch to even stronger people when I become strong.) Another strategy could be to join a group that feels far from the victory... a group that is still in the "conquering the world" mode, not in the "sharing the spoils" mode. (Be careful when the group reaches some victories.)
wow this is an insanely better version of my comment.
Anecdotal verification: one of my friends said that when he was running out of money, it made sense for him to buy meals for other people. Those people didn't reciprocate, but third parties were more likely to help him.
Then I guess people from CFAR should go to some universities and give lectures about... effective altruism. (With the expected result that the students will be more likely to support CFAR and attend their seminars.) Or I could try this in my country when recruiting for my local LW group.
I guess it also explains why religious groups focus so much on charity. It is difficult to argue against a group that many people associate with "helping others", even if other actions of the group hurt others. The winning strategy is probably making the charity 10% of what you really do, but 90% of what other people associate with you.
EDIT: Doing charity is the traditional PR activity of governments, U.N., various cults and foundations. I feel like reinventing the wheel again. The winning strategies are already known and fully exploited. I just didn't recognize them as viable strategies for everyone including me, because I was successfully conditioned to associate them with someone else.
Among other things, charity is a show of strength.
Because it's considered good to even try to help someone else so you care less about outcomes. EG donating to charity is a good act regardless of whether you check to see if your donation saved a life. On the other hand, doing something for yourself that has no real benefits is viewed as a waste of time.
It seems to be pretty well decided that (as opposed to directly promoting Less Wrong, or Rationality in general), spreading HPMoR is a generally good idea. What are the best ways to go about this, and has anyone undertaken a serious effort?
I came to the conclusion, after considering creating some flyers to post around our meetup's usual haunts, that online advocacy would be much more efficient and cost effective. Then, after thinking that promotion on large sites with high signal to noise is mostly useless, realized that sharing among smaller communities that you are already a part of (game/specific interest forums, Facebook groups, etc.) might increase likelihood of a clickthrough, due to an even modest amount of social clout and in-group effect (as opposed to creating an account just to spam). And, posting (and bumping) is a very trivial inconvenience - but if you are still held back by the effort of creating a blurb, I'm happy to provide the one I used.
Convince me of this claim that you think is well decided.
I am not convinced that from the viewpoint of a non-rationalist that HPMoR doesn't have many of the same problems as Spock. I can see many people reading the book, feeling that HP is too "evil," and deciding that "rationality" is not for them.
Also, EY said "Authors of unfinished stories cannot defend themselves in the possible worlds where your accusation is unfair." This should swing both ways. If it turns out that HP goes crazy because he was being meta and talking to himself too much, then spreading HPMoR is probably not as good an idea.
When it comes to typical online forums signatures are a good way to promote things. Take a quote of HPMOR and attach a link to it.
Of course, you should only do this where the forum has made the foolish choice to allow signatures. (One of the things I appreciate about Reddit/LW compared to forums is how they strongly discourage signatures.)
This got me to read it. Quote was about only wanting to rule the world to get more books or something to that effect.
Why are there so few people living past 115?
There's an annoying assumption that no parent would want their child to have a greatly extended lifespan, but I think it's a reasonable overview otherwise, or at least I agree that there's not going to be a major increase in longevity without a breakthrough. Lifestyle changes won't do it.
Poll Question: What are communities are you active in other than Less Wrong?
Communities that you think are closely related to Less Wrong are welcome, but I am also wondering what other completely unrelated groups you associate with. How do you think such communities help you? Are there any that you would recommend to an arbitrary Less Wronger?
Orthogonal to LW, I'm very active in my university's Greek community, serving as VP of a fraternity. It's been excellent social training and I've had a very positive experience.
I'm active in UK competitive debating (mainly real life, but I also run some discussion forums).
[Good question. Its interesting to see the variety of people's responses.]
I'm pretty active in lots of social activist/environmentalist/anarchist groups. I sometimes join protests for recreational reasons.
Could you give examples?
I'm active in Toastmasters and martial arts (mostly the community of my specify school). Overall Toastmasters seems pretty effective at its stated goals of improving public speaking and leadership skills. Its also fun (at least for me). Additionally, both force me to actually interact with other people, which is nice and not something that the rest of my live provides.
The only two communities I am currently active in right now (other than career/family communities) are Less Wrong and Unitarian Universalism.
In the past had a D&D group that I participated very actively in. I think that the people I played D&D with in high school had a very big and positive effect on my development.
I think that I would like to and am likely to develop a local community of people to play strategy board games in the future.
I'm active in (though not really a member of) the "left-libertarian" community, associated with places like Center for a Stateless Society (though I myself am not an anarchist) and Bleeding Heart Libertarians. I'm also a frequent reader and occasional commenter on EconLog.
Less related, I'm an active poster on GameFAQs and on a message board centered around the Heroes of Might and Magic game series.
I also used to be active on GameFAQs. For about a year in 2004 it was most of my internet activity, specifically the Pikmin boards. That was a long time ago though when I was a high school freshman.
My local hackerspace, and broadly the US and European hacker communities. This is mainly because information security is my primary focus, but I find myself happier interacting with hackers because in general they tend not only to be highly outcome-oriented (i.e., inherently consequentialist), but also pragmatic about it: as the saying goes, there's no arguing with a root shell. (Modulo bikeshedding, but this seems to be more of a failure mode of subgroups that don't strive to avoid that problem.) The hacker community is also where I learned to think of communities in terms of design patterns; it's one of the few groups I've encountered so far that puts effort into that sort of community self-evaluation. Mostly it helps me because it's a place where I feel welcome, where other people see value in the goals I want to achieve and are working toward compatible goals. I'd encourage any instrumental rationalist with an interest in software engineering, and especially security, to visit a hackerspace or attend a hacker conference.
Until recently I was also involved in the "liberation technology" activism community, but ultimately found it toxic and left. I'm still too close to that situation to evaluate it fairly, but a lot of the toxicity had to do with identity politics and status games getting in the way of accomplishing anything of lasting value. (I'm also dissatisfied with the degree to which activism in general fixates on removing existing structures rather than replacing them with better ones, but again, too close to evaluate fairly.)
Do you mean online communities or IRL?
Both
Contra dance. Closely correlated with LessWrong; also correlated with nerdy people in general. I would recommend it to most LessWrongers; it's good even for people who are not generally good at dancing, or who have problems interacting socially. (Perhaps even especially for those people; I think of it as a 'gateway dance.')
Other types of dance, like swing dance. Also some correlation with LessWrong, somewhat recommended but this depends more on your tastes. Generally has a higher barrier to entry than contra dancing.
I am actually planning on having a contra dance at my wedding.
I'm going to second Contra Dance. It's really fun and easy to start while having a decent learning curve such that you don't hit a skill ceiling fast. Plus you meet lots of people and interact with them in a controlled, friendly, cooperative fun fashion.
I did that for a while. It was popular at mathcamp so I started, but I haven't done it recently. Maybe I'll start again.
Is there a name for this following bias?
So I've debated a lot of religious people in my youth, and a common sort of "inferential drift", if you can call if that, is that they believe that if you don't think something is true or doesn't exist, then this must mean that you don't want said thing to be true or to exist. It's like a sort of meta-motivated reasoning; they are falsely attributing your conclusions due to motivated reasoning. The most obvious examples are reading any sort of Creationist writing that critiques evolution, where they pretty explicitly attribute accepting the theory of evolution to a desire for god to not exist.
I've started to notice it in many other highly charged, mind-killing topics as well. Is this all in my head? Has anyone else experienced this?
I've heard it called "psychologizing".
This seems pretty close to a Bulverism: http://en.wikipedia.org/wiki/Bulverism
That does seem close to Bulverism. But what I described seem to be happening at a subconscious bias level, where people are somewhat talking past each other due to a sort of hidden assumption of Bulverism.
Then perhaps...
If someone else accuses you of engaging in motivated reasoning that's ad hominem.
No, that is a mere assertion (which may or may not be true). If they claimed that he is wrong because he is engaging in motivated reasoning, then that would be ad hominem.
Wait, what? This might be a little off topic, but if you assert that they lack evidence and are drawing conclusions based on motivated reasoning, that seems highly relevant and not ad hominem. I guess it could be unnecessary, as you might try to focus exactly on their evidence, but it would seem reasonable to look at the evidence they present, and say "this is consistent with motivated reasoning, for example you describe many things that would happen by chance but nothing similar contradictory, so there seems to be some confirmation bias" etc.
I used to get a lot of people telling me I was an atheist because I either didn't want there to be a god or because I wanted the universe to be logical (granted, I do want that, but they meant it in the pejorative Vulcan-y sense). I eventually shut them up with "who doesn't want to believe they're going to heaven?" but it took me a while to come up with that one.
I don't understand it either, but this is a thing people say a lot.
I'm back in school studying computer science (with a concentration in software engineering), but plan on being a competent programmer by the time I graduate, so I figure I need to learn lots of secondary and tertiary skills in addition to those that are actually part of the coursework. In parallel to my class subjects, I plan on learning HTML/CSS, SQL, Linux, and Git. What else should be on this list?
Preliminaries: Make sure you can touch type, being able to hit 50+ wpm without sweat makes it a lot easier to whip up a quick single-screen test program to check up something. Learn a text editor with good macro capabilities, like Vim or Emacs, so you can do repetitive structural editing of text files without having to do every step by hand. Get into the general habit of thinking that whenever you find yourself doing several repetitive steps by hand, something is wrong and you should look into ways to automate the loop.
Working with large, established code bases, like Vladimir_Nesov suggested, is what you'll probably end up doing a lot as a working programmer. Better get used to it. There are many big open-source projects you can try to contribute to.
Unit tests, test-driven development. You want the computer to test as much of the program as possible. Also look into the major unit testing frameworks for whatever language you're working on.
Build systems, rigging up a complex project to build with a single command line command. Also look into build servers, nightly builds and the works. A real-world software project will want a server that automatically builds the latest version of the software every night and makes noise to the people responsible if it won't build, or if an unit test fails.
Oh, and you'll want to know a proper command line for that. So when learning Linux, try to do your stuff in the command line instead of sticking to the GUI. Figure out where the plaintext configuration files driving whatever programs you use live and how to edit them. Become suspicious about software that doesn't provide plaintext config files. Learn about shell scripting and onliners, and what the big deal in Unix about piping output from one program to the next is.
Git is awesome. After you've figured out how to use it on your own projects, look into how teams use it. Know what people are talking about when they talk about a Git workflow. Maybe check out Gerrit for a collaborative environment for developing with Git. Also check out how bug tracking systems and how those can tie into the version control.
For the social side of software development, Peopleware is the classic book. Producing Open Source Software is also good.
Know some full stack of web development. If you want a web domain running a neat webapp, how would you go about getting the domain, arranging for the hosting, installing the necessary software on the computer, setting up the web framework and generating the pages that do the neat thing? Can you do this by rolling your own minimal web server instead of Apache and your own minimal web framework instead of whatever out of the box solution you'd use? Then learn a bit about the out of the box web server and web framework solutions.
Have a basic idea about the JavaScript ecosystem for frontend web development.
Look into cloud computing. It's new enough not to have made it into many curricula yet. It's probably not going to go away anytime soon. How would you use it, why would you want to use it, when would you not want to use it? Find out why map-reduce is cool.
Learn how the Internet works. Learn why people say that the Internet was made by pros and the web was made by amateurs. Learn how to answer the interview question "What happens between typing an URL in the address field and the web page showing up in the browser" in as much detail as you can.
Look into the low-level stuff. Learn some assembly. Figure out why Forth is cool by working through the JonesForth tutorial. Get an idea how computers work below the OS level. The Elements of Computing Systems describes this for a toy computer. Read up on how people programmed a Commodore 64, it's a lot easier to understand than a modern PC.
Learn about the difference between userland and kernel space in Linux, and how programs written (in assembly) right on top of the kernel work. See how the kernel is put together. See if you can find something interesting to develop in the kernel-side code.
Learn out how to answer the interview question "What happens between pressing a key on the keyboard and a letter showing up on the monitor" in as much detail as you can.
Write a simple ray-tracer and a simple graphics program that does something neat with modern OpenGL and shaders. If you want to get really crazy with this, try writing a demoscene demo with lots of graphical effects and a synthesized techno soundtrack. If you want even crazier, try to make it a 4k intro.
Come up with a toy programming language and write a compiler for it.
Write a toy operating system. Figure out how to make a thing that makes a PC boot off the bare iron, prints "Hello world" on the screen and doesn't do anything beyond that. Then see how far you can get in making the thing do other things.
Also this list looks pretty good.
Regarding touch-typing, do you find yourself reaching 'top speed' often while programming?
It's not really about typing large amounts of text quickly, it's basically about
(1) not having to pay attention to the keyboard, your fingers should know what do without taking up mindspace; and
(2) your typing being able to keep up with your thinking -- the less your brain has to stop and wait for fingers to catch up, the better.
Yes, this is a critical skill. Especially when someone is learning programming, it is so sad to see their thinking interrupted all the time by things like: "when do I find the '&' key on my keyboard?", and when the key is finally found, they already forgot what they wanted to write.
This part is already helped by many development environments, where you just write a few symbols and press Ctrl+space or something, and it completes the phrase. But this helps only with long words, not with symbols.
It's not the top speed, it's the overhead. It is incredibly irritating to type slowly or make typos when you're working with a REPL or shell and are tweaking and retrying multiple times: you want to be thinking about your code and all the tiny niggling details, and not about your typing or typos.
For a decent summary, here's a pretty well-written survey paper on cloud computing.. It's three years old now, but not outdated.
It's a good start, but I notice a lack of actual programming languages on that list. This is a very common mistake. A typical CS degree will try to make sure that you have at least basic familiarity with one language, usually Java, and will maybe touch a bit on a few others. You will gain some superpowers if you become familiar with all or most of the following:
A decent scripting language, like Python or Ruby. The usual recommendation is Python, since it has good learning materials and an easy learning curve, and it's becoming increasingly useful for scientific computing.
A lisp. Reading Structure and Interpretation of Computer Programs will teach you this, and a dizzying variety of other things. It may also help you achieve enlightenment, which is nice. Seriously, read this book.
Something low-level, usually C.
Something super-low-level: an assembly language. You don't have to be good at writing in it, but you should have basic familiarity with the concepts. Fun fact: if you know C, you can get the compiler to show you the corresponding assembly.
You should take the time to go above-and-beyond in studying data structures, since it's a really vital subject and most CS graduates' intuitive understanding of it is inadequate. Reading through an algorithms textbook in earnest is a good way to do this, and the wikipedia pages are almost all surprisingly good.
When you're learning git, get a GitHub account, and use it for hosting miscellaneous projects. Class projects, side projects, whatever; this will make acquiring git experience easier and more natural.
I'm sure there's more good advice to give, but none of it is coming to mind right now. Good luck!
Sorry if I wasn't clear. I intended the list to include only skills that make you a more valuable programmer that aren't explicitly taught as part of the degree. Two Java courses (one object-oriented) are required as is a Programming Languages class that teaches (at least the basics of) C/C++, Scheme, and Prolog. Also, we must take a Computer Organization course that includes Assembly (although, I'm not sure what kind). Thanks for the advice.
I've TAed a class like the Programming Languages class you described. It was half Haskell, half Prolog. By the end of the semester, most of my students were functionally literate in both languages, but I did not get the impression that the students I later encountered in other classes had internalized the functional or logical/declarative paradigms particularly well -- e.g., I would expect most of them to struggle with Clojure. I'd strongly recommend following up on that class with SICP, as sketerpot suggested, and maybe broadening your experience with Prolog. In a decade of professional software engineering I've only run into a handful of situations where logic programming was the best tool for the job, but knowing how to work in that paradigm made a huge difference, and it's getting more common.
In school you are typically taught making small projects. Make a small algorithm, or a small demonstration that you can display an information in an interactive user interface.
In real life (at least in my experience), the applications are typically big. Not too deep, but very wide. You don't need complex algorithms; you just have dozens of dialogs, hundreds of variables and input boxes, and must create some structure to prevent all this falling apart (especially when the requirements keep changing while you code). Also you have a lot of supporting functionality in a project (for example: database connection, locking, transactions, user authentification, user roles and permissions, printing, backup, export to pdf, import from excel, etc.). Again, unless you have structure, it falls apart. And you must take good care of many things that may go wrong (such as: if the user's web browser crashes, so the user cannot explicitly log out of the system, the edited item should not remain locked forever).
To be efficient at this, you also need to know some tools for managing projects. Some of those tools are Java-specific, so your knowledge of Java should include them; they are parts of the Java ecosystem. You should use javadoc syntax to write comments; JUnit to write unit tests; Maven to create and manage projects, some tools to check your code quality, and perhaps even Jenkins for continuous integration. Also the things you already have on your list (HTML, CSS, SQL, git) will be needed.
To understand creating web applications in Java, you should be able to write your own servlet, and perhaps even write your own JSP tag. Then all the frameworks are essentially libraries built on this, so you will be able to learn them as needed.
As an exercise, you could try to write a LessWrong-like forum in Java (with all its functionality; of course use third-party libraries where possible); with javadoc and unit tests. If you can do that, you are 100% ready for the industry (the next important skill you will need is leading a team of people who don't have all of these skills yet, and then you are ready for the senior position). But that can take a few months of work.
There is another aspect of working on big projects that seems equally important. What you are talking about I'd call "design", the skill of organizing the code (and more generally, the development process) so that it remains intelligible and easy to teach new tricks as the project grows. It's the kind of thing reading SICP and writing big things from scratch would teach.
The other skill is "integration", ability to open up an unfamiliar project that's too big to understand well in a reasonable time, and figure out enough about it to change what you need, in a way that fits well into the existing system. This requires careful observation, acting against your habits, to conform to local customs, and calibration of the sense of how well you understand something, so that you can judge when you've learned just enough to do your thing right, but no less and not much more. Other than on a job, this could be learned by working a bit (not too much on each one, lest you become comfortable) on medium/large open source projects (implementing new features, not just fixing trivial bugs), possibly discarding the results of the first few exercises.
I am wondering what a PD tournament would look like if the goal was to maximize the score of the group rather than the individual player. For some payoff matrices, always cooperate trivially wins, but what if C/D provides a greater net payoff than C/C, which in turn is higher than D/D? Does that just devolve to the individual game? It feels like it should, but it also feels like giving both players the same goal ought to fundamentally change the game.
I haven't worked out the math; the thought just struck me while reading other posts.
The Prisoner's Dilemma is technically defined as requiring that this not be the case, precisely so that one doesn't ahve to consider the case (in iterated games) where the players agree to take turns cooperating and defecting. You are considering a related but not identical game. Which is of course totally fine, just saying.
If you allow C/D to have a higher total than CC, then it seems there is a meta-game in coordinating the taking-turns - "cooperating" in the meta-game takes the form of defecting only when it's your turn. Then, the players maximise both their individual scores and the group score by meta-cooperating.
The game you are talking about should not be called PD.
The solution will be for everyone to pick randomly, (weighted based on the difference in C/C and D/D payoff) until they get a C/D outcome, and then just picking the same thing over and over. (This isn't a unique solution, but it seems like a Schelling point to me.)
I was wondering if anyone had any opinions/observations they would be would be willing to share about Unitarian Universalism. My fiancee is an atheist and a Unitarian Universalist, and I have been going to congregation with her for the last 10 months. I enjoy the experience. It is relaxing for me, and a source of interesting discussions. However, I am trying to decide if my morality has a problem with allying myself this community. I am leaning towards no. I feel like they are doing a lot of good by providing a stepping stone out of traditional religion for many people. I am however slightly concerned about what effect this community might have on my future children. I would love to debate this issue with anyone who is willing, and I think that would be very helpful for me.
The UU "Seven Principles and Purposes" seem like a piece of virtue ethics. If you don't mind this particular brand of it, then why not.
From Wikipedia:
If you discard the ornamental fluff in this "philosophy" and "focus on making this life better for all of us", then it's as good a guideline as any.
As I said in responding to another comment, this is the part of UU that I relate to. However, the problem is that while UUs might be slightly above average rationality, "we can use reason when we can" means that beliefs come from thinking for yourself as opposed to reading e.g. the bible, and the stuff they come up with by thinking for themselves is usually not all that great by my standards. I am worried that I am giving UU too much credit because they happen to use the word "reason," when in reality they mean something very different than what I mean.
They are just humans, aren't they? I am afraid that at this moment it is impossible to assemble a large group of people who would all think on LW-level. Not including obvious bullshit, or at least not making it a core of group beliefs, is already a pretty decent result for a large group of humans.
Perhaps one day CFAR will make a curricullum that can replicate rationality quickly (at least on suitable individuals) and then we can try to expand rationality to mass level. Until then, having a group without obviously insane people in power is probably the best you can get.
You already reflected on this, so just: don't emotionally expect what is not realistic. They are never going to use reason as you define it. But the good news is that they will not punish you for using reason. Which is the best you can expect from a religious group.
I found this comment very helpful. Thanks.
You inspired me to google whether there are UU in Slovakia. None found, although there are some in the neighbor countries: Czech, Hungary.
I wonder whether it would be possible to create a local branch here, to draw people, who just want to feel something religious but don't want to belong to a strict organization, away from Catholicism (which in my opinion has huge negative impacts on the country). There seem to be enough such people here, but they are not organized, so they usually stay within the churches of their parents.
The problem is, I am not the right person to start something like this, because I don't feel any religious need; for me the UU would be completely boring and useless. I am not sure if I could pretend interest at least for long enough to collect a group of people, make them interested in the idea, put them into contact with neighbor UUs, and then silently sneak away. ;-)
Also, I suspect the religion is not about ideas, but about organized community. (For example, the only reason you are interested in UU is because your fiancee is. And your fiancee probably has similar reasons, etc.) Starting a new religious community where no support exists, would need a few people willing to sacrifice a lot of time and work -- in other words, true believers. Later, when the community exists, further recruitment should be easier.
Well, at least this is the first social engineering project I feel I could have higher than 1% chance of doing successfully, if I decided to. (Level 3 of Yudkowsky Ambition Scale in a local scope?)
Here are some things you should know:
Unitarian Universalism is different from Unitarianism. UU is basically a spin-off of Unitarianism from when they combined with Universalism in 1961 in North America. As a result, there are very few UU churches outside of NA.
Unitarianism is on average more Christian than UU, and there exist some UU congregations that also have a Christian slant. (The one I was talking about is not one of them) I have also heard that some UU churches are considerably more tolerant of everything other than Christianity than they are of Christianity. (Probably because their members were escaping Christianity) The views change from congregation to congregation because they are decided from the bottom up from the local congregants.
The UUA has free resources, such as transcribed sermons you could read, for people who wanted to start a congregation.
I think I gain some stuff from it that is not directly from my fiancee. I don't know if it is enough to continue going on my own. It is a community that roughly follows strategy 1 of the belief signalling trilemma, which I think is nice to be in some of the time. The sermons are usually way too vague, but have produced interesting thoughts when I added details to them on my own and then analyzed my version. There is also (respectful) debating, which I think I find fun regardless of who I am debating with. I like how it enables people to share significant highs or lows in their life, so the community can help them. There are pot-lucks and game nights, and courses on philosophy and religions. There is also singing, which I am not so crazy about, but my fiancee loves.
What do you mean and what do they mean by "reason"? If you are not sure, maybe it's something to ask at the next meeting.
They are reaching many of the wrong conclusions. I think this might be because their definition of "use reason" is just to think about their beliefs, which is not enough. When I say "use reason," I mean thinking about my beliefs in a specific way. That specific way is something that I think a lot of us have roughly in common on less wrong, and it would take to long to describe all the parts of it now. To point out a specific example, one UU said to me "There are some mysteries we can never get answers to, like what happens when we die," and then later "I am a firm believer in reincarnation, because I have had experiences where I felt my past lives." I never questioned to her that she had those experiences, and argued a bit and was able to get her to change her first statement, because reincarnation experiences were evidence against it, which I thought was an improvement. However, not noticing how contradictory these beliefs were is not something I would call "reason."
Perhaps what is bothering me is a difference in cognitive ability, and UUs version of "reason" is as much as I can expect from the average person. Or, perhaps these are people who are genuinely interested in being rational, and would be very supportive of learning how, but have not yet learned. It could also be that they just want to say that they are using "reason."
Do you guys discuss Effective Altruism? It could be one way to inject a bit more reason.
Not much. That is a good idea. I was considering hosting a workshop on rationality through the church. If I ever go through with it, that will probably be part of it. My parents' UU church had a class on what QM teaches us about theology and philosophy.
I'm not really invested enough in the question to debate it, but I know plenty of atheists (both with and without children) who are active members of UU churches because they get more of the things they value from a social community there than they do anywhere else, and this seems entirely sensible to me. What effects on your future children are you concerned about?
I am concerned that they will treat supernatural claims as reasonable. I consider myself rational enough to be able to put up with some of the crazy stuff many UU individuals believe (beliefs not shared by the community). I am worried that my children might believe them, and even more worried that might not look at beliefs critically enough.
Yes, they will treat supernatural claims as reasonable, and expect you (and your kids) to treat them that way as well, at least in public, and condemn you (and your kids) for being rude if you (they) don't.
If you live in the United States, the odds are high that your child's school will do the same thing.
My suggestion would be that you teach your children how to operate sensibly in such an environment, rather than try to keep them out of such environments, but of course parenting advice from strangers on the Internet is pretty much worthless.
I actually do not think that is true. They will treat supernatural claims as reasonable, but would not condemn me for not treating them as reasonable. They might condemn me for being avoidably rude, but I don't even know about that.
We actually plan on homeschooling, but that is not for the purpose of keeping kids out of an insane environment as much as trying to teach them actually important stuff.
I do, however, agree with your advice.
If your elementary-schooler goes around insistently informing the other little kids that Santa isn't real, you will likely be getting an unhappy phone call from the school, never mind the religious bits that the adults actually believe.
Good thing we are homeschooling then!
What's your moral system? If you get value from the community it's probably more moral to focus your efforts on donating more for bed nets than on the effect that you have on the world through being a member of that community.
Wouldn't it be nice if I understood that?
I think it is not productive to analyze anything as being moral by comparing it to working for money for bed nets. Most everything fails.
I think I might have made a mistake in saying this was a moral issue. I think it is more of an identity issue. I the the consequences for the world of me being Unitarian are minimal. Most of the effect is on me. I think the more accurate questions I am trying to answer are:
Are Unitarians good under my morals? Do their shared values agree with mine enough that I should identify as being one?
I think the reason this is not a instrumental issue for me, and rather an epistemic issue, is because I believe the fact that I will continue to go to congregation is already decided. It is a fun bonding time which sparks lots of interesting philosophical discussion. If I were not in my current relationship, I would probably bring that question back on the table.
I realize that this does not change the fact that the answer is heavily dependent on my moral system, so I will try to comment on that with things that are specific to UU.
I generally agree with the 7 principles of UU, with far more emphasis on "A free and responsible search for truth and meaning." However, these principles are not particularly controversial, and I think most people would agree with most of them. The defining part of UU, I think, is the strategy of "Let's agree to disagree on the metaethics and metaphysics, and focus on the morals themselves which are what matters." I feel like this could be a good thing to do some of the time. Ignore the things that we don't understand and agree on, and work on making the world better using the values we do understand and agree on. However, I am concerned that perhaps the UU philosophy is not just to ignore the metaethics and metaphysics temporarily so we can work together, but rather to not care about these issues and not be bothered by the fact that we appear confused. This I do not approve of. These are important questions, and you don't know if what you don't know can't hurt you.
Why are metaphysics important?
Why are metaethics important?
They are important because they are confusing. Of all the things that might possibly cause a huge change to my decision making, I think understanding open questions about anthropic reasoning is probably at the top of the list. I potentially lose a lot by not pushing these topics further.
For most people I don't think that meta ethical considerations have a huge effect on their day to day decision making.
Metaphysics seems interesting. Do you think that you might start believing in paranormal stuff if you spend more effort on investigating metaphysical questions? What other possible changes in your metaphysical position could you imagine that would have a huge effects on your decision making?
Going to UU won't stop you from discussing those concepts on LessWrong.
I'm personally part of diverse groups and don't expect any one group to fulfill all my needs.
I do not think that I will start believing in paranormal stuff. I do not know what changes might arise from changes in my metaphysical position. I was not trying to single out these things as particularly important as much as I am just afraid of all things that I don't know.
This is good advice. My current picture of UU is that it has a lot of problems, most of which are not problems for me personally, since I am also a rational person and in LW. I think UU and LW are the only groups which I am actively a part of other than my career. I wonder what other viewpoints I am missing out on.
CFAR has a class on handling your fight/flight/freeze reaction this Saturday Sept 28th.
The sympathetic nervous system activation that helps you tense up to take a punch or put on a burst of speed to outrun an unfriendly dog isn't quite so helpful when you're bracing to defend yourself against an intangible threat, like, say, admitting you need to change your mind.
Once of CFAR's instructors will walk participants through the biology of the fight/flight/freeze response and then run interactive practice on how to deliberately notice and adjust your response under pressure. The class is capped at 12, due to its interactive nature.
Would you be able to post a summary for people unable to attend? I find the topic very interesting, but habitually reside in a different continent,.
An iteration of this class was one of the high points of the May 2013 CFAR retreat for me. It was extraordinarily helpful in helping me get over various aversions, be less reactive and more agenty about my actions, and generally enjoy life more. For instance, I gained the ability to enjoy, or substantially increased my enjoyment of, several activities I didn't particularly like, including:
It also helped substantially with CFAR's comfort zone expansion exercises. Highly recommended.
For those of us who can't be in Berkeley in < 1 week's notice, can you go into more detail on the methods?
A bit. Most of the techniques were developed by one of the CFAR instructors, and I can't reproduce his instruction, nor do I want to steal his thunder. The closest thing you can find out more about is mindfulness-based stress reduction. (But the real value of the class is being able to practice with Val and ask him questions, which unfortunately I can't do justice to in a LW comment.)
Robin Hanson defines “viewquakes” as "insights which dramatically change my world view."
Are there any particular books that have caused you personally to experience a viewquake?
Or to put the question differently, if you wanted someone to experience a viewquake, can you name any books that you believe have a high probability of provoking a viewquake?
Understanding Power by Noam Chomsky.
Reading Wittgenstein's Philosophical Investigations prompted the biggest viewquake I've ever experienced, substantially changing my conception of what a properly naturalistic worldview looks like, especially the role of normativity therein. I'm not sure I'd assign it a high probability of provoking a viewquake in others, though, given his aphoristic and often frustratingly opaque style. I think it worked for me because I already had vague misgivings about my prior worldview that I was having trouble nailing down, and the book helped bring these apprehensions into focus.
A more concrete scientific viewquake: reading Jaynes, especially his work on statistical mechanics, completely altered my approach to my Ph.D. dissertation (and also, incidentally, led me to LW).
The Sequences.
The biggest world-shattering book for me was the classic, Engines of Creation by K. Eric Drexler. I was just 21 and the book had a large impact on me. Nowadays though, the ideas in the book are pretty mainstream, so I don't think it would have the same effect for a millenial.
While it's overoptimistic and generally a bit all over the place, Kurzweil's The Singularity is Near might still be the most bang for the book single introduction to the "humans are made of atoms" mindset you can throw at someone who is reasonably popular science literate but hasn't had any exposure to serious transhumanism.
It's kinda like how The God Delusion might not be the most deep book on the social psychology of religion, but it's still a really good book to give to the smart teenager who was raised by fundamentalists and wants to be deprogrammed.
After reading Engines of Creation, The Singularity is Near didn't have nearly as much effect on me. I just thought, "Well, duh" while reading it. I can imagine how it would affect someone with little exposure to transhumanist ideas though. I agree with you that it's a good choice.
A microecon textbook given to a reflective person.
Against Intellectual Monopoly converted me from being strongly in favor of modern copyright to strongly against it.
I'm not sure if it is possible or has a high chance of success to give someone a book in the hope of provoking a viewquake. Most people would detect being influenced. Compare with trying to give people the bible to convert them doesn't work either even though it also could provoke a viewquake - after all the bible is also much different from other common literature. To actually provoke a viewquake it must be a missing piece either connecting pieces or buildig on them and thus causing an aha moment. And the trouble is: This depends critically on your prior knowledge thus not every book will work on everyone.
Compare with http://en.wikipedia.org/wiki/Zone_of_proximal_development
If someone will actually get through the density of the text, moldbug has been known to provoke a few viewquakes.
I know of a few former-theists whose atheist tipping point was reading Susan Blackmore's The Meme Machine. I recall being fairly heavily influenced by this myself when I first read it (about twelve years ago, when it was one of only a small handful of popular books on memetics), but suspect I might find it a bit tiresome and erroneous if I were to re-read it.
I tried to read it a few years after reading a bunch of dawkins and found it hard to get through
The Anti-Christ would be my #1 pick, for both versions of the question. Stumbling on Happiness is a good second choice though.
The Feynman Lectures on Computation did this for me by grounding computability theory in physics.
"1493" and "The Better Angels of Our Nature"
What was the viewquake for you in 1943?
Primarily how much biology and ecosystems could have largescale impacts on society and culture in ways which stayed around even after the underlying issue was no longer around. One of the examples there is how the prevalence of diseases (yellow fever, malaria especially) had long-term impacts on differences in North American culture in both the South and the North.
I have a half written post about the cultural divisions in the environmentalist movement that I intend to put on a personal blog in the nearish future. (Tl;Dr there "Green" groups who advocate different things in a very emotional/moral way vs. "Scientific" environmentalists)
I've been thinking about comparisons between the structure of that movement and how future movements might tackle other potential existential risks, specifically UFAI. Would people be interested in a post here specifically discussing that?
If you haven't yet read Neal Stephenson's Zodiac, I recommend it.
As an aside, I find it convenient to think of a significant part of environmentalism as purely religious movement.
Thats a good analogy. By recycling plastic bottles you are displaying your virtue, whatever the extent of the practical consequences.
Is there anything you've learnt that's particular about groups trying to tackle x-risk in particular? If not, you could just make a post describing what you've learnt about groups that challenge big problems. Generality at no extra cost.
Political and social movements as a whole are so massive and varied that I don't think I could really give much non-trivial analysis. I'm not sure there's really a separate category of 'big problem' that can be separated out, all movements think their problem is big, and all big problems are composed of smaller problems.
I make the comparison between UFAI and environmentalism because its probably the only major risk that presently is really in public consciousness,* so provides a model of how people will act in response. E.g. the solutions that technical experts favour may not be the ones that the public support even if they agree on the problem.
*A few decades ago nuclear weapons might have also been analogous, but, whether correctly or not, the public perception of their risk has diminished.
Yes. As I see, a lot of Greens are Misanthropes. Do you cover this aspect?
I wouldn't say misanthropic, maybe more a matter of scope insensitivity and an overromanticised view of the 'natural' state of the world. But I think they genuinely believe it would make humans better off, whereas truly misanthropic greens wouldn't care.
From what I can tell, it's actually a teeny-tiny number of people, but they get disproportional media coverage for reasons that should be obvious considering the interests of those doing the covering.
FWIW, while I've not met many misanthropic greens in real life, about half of the greens I've met on the Internet range from mildly to extremely misanthropic.
Sometimes the whole internet seems to be filled by misanthropic people, so I am not sure how much evidence this is about misanthropy of greens.
Is the problem of measuring rationality related to the problem of measuring programming skill? Both are notoriously hard, but I can't tell if they're hard for the same reason...
I think they're different, though with some overlap.
Rationality applies to a much wider range of subjects, and involves dealing with much more uncertainty.
Ilya Shkrob's In The Beginning is an attempt to reconcile science and religion. It's the best such attempt that I've seen, better than I thought possible. If you enjoy "guru" writers like Eliezer or Moldbug, you might enjoy this too.
Is there a summary available?
I haven't found one, so I'll try to summarize here:
"Prokaryotic life probably came to Earth from somewhere else. It was successful and made Earth into a finely tuned paradise. (A key point here is the role of life in preserving liquid water, but there are many other points, the author is a scientist and likes to point out improbable coincidences.) Then a tragic accident caused individualistic eukaryotic life to appear, which led to much suffering and death. Evolution is not directionless, its goal is to correct the mistake and invent a non-individualistic way of life for eukaryotes. Multicellularity and human society are intermediate steps to that goal. The ultimate goal is to spread life, but spreading individualistic life would be bad, the mistake has to be corrected first. Humans have a chance to help with that process, but aren't intended to see the outcome."
The details of the text are more interesting than the main idea, though.
I like this. Like all good religion, it's an idea which feels true and profound but is also clearly preposterous.
It reminds me of some concepts in animes I liked, like the Human Instrumentality Project in Neon Genesis Evangelion and the Ragnarok Connection in Code Geass.
Sounds like an attempt to reconcile, not science and religion in general, but specifically science and the Christian concepts of the Fall and original sin; or possibly some sort of Gnosticism.
(Aleister Crowley made similar remarks about individuality as a disease of life in The Book of Lies, but didn't go so far as to attribute it to eukaryotes.)
Well the relevant story (God banishing Adam and Eve from the Garden of Eden) is in Genesis, so it's in the Torah as well. Gnostics considered the Fall a good thing--it freed humanity from the Demiurge's control.
Holy crap that's easily the stupidest thing I've read this week.
Downvoted for insult + not giving a reason.
Hold on, is he trying to imply that prokaryotes aren't competitive? Not only does all single-celled life compete, it competes at a much faster pace than multicellular life does.
Yeah, I know. I don't agree with the text, but I think it's interesting anyway.
What makes it interesting?
Based on that summary, I'd say that it's interesting because it draws on enough real science to be superficially plausible, while appealing to enough emotional triggers to make people want to believe in it enough that they'll be ready to ignore any inconsistencies.
Superficially plausible: Individuals being selfish and pursuing their own interest above that of others is arguably the main source of suffering among humans, and you can easily generalize the argument to the biosphere as a whole. Superorganisms are indeed quite successful due to their ability to suppress individualism, as are multi-celled creatures in general. Humans do seem to have a number of adaptations that make them more successful by reducing individualistic tendencies, and it seems plausible to claim that even larger superorganisms with more effective such adaptations could become the dominant power on Earth. If one thinks that there is a general trend of more sophisticated superorganisms being more successful and powerful, then the claim that "evolution is not directionless" also starts to sound plausible. The "humans have a chance to help with that process but aren't intended to see the outcome" is also plausible in this context, since a true intelligent superorganism would probably be very different from humanity.
"Evolution leads to more complex/intelligent creatures and humans are on top of the hierarchy" is an existing and widely believed meme that similarly created a narrative that put humans on top of the existing order, and this draws on that older meme in two ways: it feels plausible and appealing for many of the same reasons why the older meme was plausible, and anyone who already believed in the old meme will be more inclined to see this as a natural extension of the old one.
Emotional triggers: It constructs a powerful narrative of progress that places humans at the top of the current order, while also appealing to values related to altruism and sacrificing oneself for a greater whole, and providing a way to believe that things are purposeful and generally evolving towards the better.
The notion of a vast superorganism that will one day surpass and replace humanity also has the features of vastness and incomprehensibility, two features which Keltner and Haidt claim form the heart of prototypical cases of awe:
The more I think of it, the more impressive the whole thing starts to feel like, in the "memeplex that seems very effectively optimized for spreading and gaining loyal supporters" sense.
I'd add slow-to-moderated paced, low-pitched sounds to the list of vastness indicators.
I'm not sure about music with fast heavy bass rhythm, though that may also be a sort of vastness.
Just thinking... could it be worth doing a website providing interesting parts of settled science for laypeople?
If we take the solid, replicated findings, and remove the ones that laypeople don't care about (because they have no use for them in everyday life)... how much would be left? Which parts of human knowledge would be covered most?
I imagine a website that would first provide a simple explanation, and then a detailed scientific explanation with references.
Why? Simply to give people idea that this is science that is useful and trustworthy -- not the things that are too abstract to understand or use, and not some new hypotheses that will be disproved tomorrow. Science, as a friendly and trustworthy authority. To get some respect for science.
People used to respect Science, as an abstract mysterious force which Scientists could augur and even use to invoke the odd miracle. In a way, people in the nineteenth and early twentieth centuries saw Scientists in a similar way to how pre-Christian Europe saw priests; you need one on hand when you make a decision, and contradict them at your peril, but ultimately they're advisers rather than leaders.
That attitude is mostly gone now, but it could be useful to bring it back. Ordinary people are not going to provide useful scientific insights or otherwise helpfully (1) participate in the process, so keeping them out of the way and deferential is going to be more valuable then trying to involve them. There seems to be a J curve between 100% scientific literacy and old-school Science-ism, and it seems to me at least that climbing back up to an elitist position is the option most likely to actually work in our lifetimes.
Is this true? It pattern matches to a generic things-were-better-in-the-old-days complaint and I'm not sure how one would get a systematic idea of how much people trusted science & scientists 100-200 years ago.
(Looking at the US, for instance, I only find results from surveys going back to the late 1950s. Americans' confidence in science seems to have fallen quite a lot between 1958 and 1971-2, probably mostly in the late 1960s, then rebounded somewhat before remaining stable for the last 35-40 years. I note that the loss of trust in science that happened in the 1960s wasn't science-specific, but part of a general loss of confidence experienced by almost all institutions people were polled about.)
Citizen science seems like evidence against this idea.
I disagree. I strongly disapprove of treating scientists as high priests of mystical higher knowledge inaccessible to mere mortals.