All of bartimaeus's Comments + Replies

I agree, and I'll keep that in mind. The topic is extremely broad, though, so I don't know how much time I'll have to focus on it. I'm actually thinking of having several meetups on this, depending on people's interest.

I don't have time to write a full report, but Less Wrong Montreal had a meetup on the 22nd of January that went well. Here's the handout that we used; the exercise didn't work out too well because we picked an issue that we all mostly agreed on and understood pretty well. A topic where we disagreed more would have been more interesting (afterwards I thought "free will" might have been a good one).

I run the Montreal Less Wrong meetup; for the last few months we've been structuring the content of our meetups, with varying degrees of success.

This was the first meetup posted to meetup.com in an effort to find some new members. There were about 12 of us, most of whom were new and had never heard of Less Wrong before; although this was a bit more than I was expecting, the meetup was still a really good introduction to Less Wrong/rationality and was appreciated by everyone present.

My strategy for the meetup was to show a concre... (read more)

0bartimaeus
I don't have time to write a full report, but Less Wrong Montreal had a meetup on the 22nd of January that went well. Here's the handout that we used; the exercise didn't work out too well because we picked an issue that we all mostly agreed on and understood pretty well. A topic where we disagreed more would have been more interesting (afterwards I thought "free will" might have been a good one).
4Ben_LandauTaylor
That handout is excellent. If anyone is an organizer looking for a topic, you could totally just steal this one.

I like this idea; seeing as I have a meetup report to post, I just started a monthly Meetup Report Thread. Hopefully, people will do what you describe.

That's true; those points ignore the pragmatics of a social situation in which you use the phrase "I don't know" or "There's no evidence for that". But if you put yourself in the shoes of the boss instead of the employee (in the example given in "I don't know"), where even if you have "no information" you still have to make a decision, then it's useful to remember that you probably DO know something that can at least give you an indication of what to do.

The points are also useful when the discussion is with a rationalist.

The post What Bayesianism Taught Me is similar to this one; your post has some elements that that one doesn't have, and that one has a few that yours doesn't. Combining the two, you end up with quite a nice list.

5ChrisHallquist
I want to like that post, because the formatting is so much tidier than the formatting on my post, but I actually disagree with the first two points. I'm in favor of just rolling with the fact that "Bayesian evidence" isn't what we ordinarily mean by "evidence," as useful as the former is. Also, Eliezer's "I don't know" post misses the pragmatics of saying, "I don't know"; we say "I don't know" if we don't have any information the other person is going to care about (the other person usually won't care that there are 10-1000 apples in a tree outside).

I think "seems like a cool idea" covers that; it doesn't say anything about expected results (people could specify).

I don't see how the barriers become irrelevant just because they aren't clearly defined. There might not be a specific point where a mind is sentient or not, but that doesn't mean all living things are equally sentient (Fallacy of Grey).

I think Armstrong 4, rather than making his consideration for all living things uniform, would make himself smarter and try to find an alternate method of determining how much each living creature should be valued in his utility function.

How about a sentient AI whose utility function is orthogonal to yours? You care nothing about anything it cares about and it cares about nothing you care about. Also, would you call such an AI sentient?

1Mestroyer
You said it was sentient, so of course I would call it sentient. I would either value that future, or disvalue it. I'm not sure to what extent I would be glad some creature was happy, or to what extent I'd be mad at it for killing everyone else, though.

Ok, I see what your concern is: with the hype around Soylent, everyone's opinion is skewed (even if they're not among the fanboys).

You decided above that it wasn't worth your time to try your own self-experiments with it. What if someone else were to take the time to do it? I like the concept but agree with the major troubles you listed above, and I have no experience with designing self-experiments. But maybe I'll take the time to try to do it properly: long-term, with regular blood tests, noting what I've been eating for a couple of months before starting, taking data about my fitness levels, etc. Of course, I would need to analyze the risk to myself beforehand.

4gwern
If they actually go through with it and write it up, that's better than the status quo, yes. But if they don't commit to going through with it and may give up, that introduces another selection bias, specifically publication bias (person A does a self-experiment but halfway through runs out of spare effort and abandons it; person B, by chance, gets better results and blogs about it, etc.).

What would you like to see done differently? You mentioned the more thorough self-experimentation he could have done (really should have done), but there's still someone else who could step up to the plate and do some self-testing.

Thorough studies? Those might also be done some time in the future, whether or not they're funded by Rob (I'm not sure about this point; there might not be an incentive to do so once it's being sold).

Sure, Rob jumped the gun and hyped it up. But most of the internet is already a giant circle-jerk. Doesn't stop people from generating real information, right?

4gwern
That's the damnable thing about these sorts of biases: it's not clear to me whether one can compensate for them. If you pay attention to the 'real information', you may wind up learning what we might call anti-information - information that predictably and systematically makes your beliefs worse than your defaults. This is the problem of the clever arguer: how do you, and can you, adjust for the fact that all the reports coming out about Soylent are so deeply error-prone (and now, with the kickstarter, we get a delicious cherry of conflicts of interest on top)? I would much rather have a handful of randomized self-experiments from some obscure blogger interested in a weird recipe he came up with than a forum of Soylent enthusiasts raving to each other that the latest formulation on sale is the greatest ever and telling each other that if you feel bad you should be eating your Soylent twice a day and not three times a day, don't you know about intermittent fasting?, and also I ran 20 seconds faster today, so Soylent must be working for me!

I don't have enough experience to even give an order of magnitude, but maybe I can give an order of magnitude of the order of magnitude:

Right now, the probability of Christianity specifically might be somewhere around 0.0000001% (even that's probably too high). One hour post judgement-day, it might rise to somewhere around 0.001% (an increase of several orders of magnitude).

Now let's say the world continues to burn, I see angels in the sky, get to talk to some of them, see dead relatives (who have information that allows me to verify that they're not my own hallucinatio... (read more)
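To make the scale of that update concrete, here is the arithmetic in odds form (a standard Bayesian identity, with the rough numbers from above plugged in):

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)}$$

At probabilities this small, odds and probabilities are nearly identical, so moving from about $10^{-9}$ to about $10^{-5}$ implies a likelihood ratio of roughly $10^{4}$: the hour of judgement-day experience would have to be about ten thousand times more likely under Christianity than under all rival explanations (hallucination, prank, simulation) combined.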

The continuation of the burning makes the hallucination hypothesis less probable for as long as it continues, especially if, as you point out, it persists beyond what the laws of physics would allow.

What do you expect will happen? Do you think lots of people are going to get very sick by going on a Soylent-only diet immediately, not monitoring their health closely, and ending up with serious nutritional deficiencies? That's one of the more negative scenarios, but I honestly don't know how likely it is. I think people are likely to do at least one of three things:

  • Monitor their health more closely (especially on a soylent-only diet),
  • Only replace a few meals with Soylent (not more than, say, 75%),
  • Return to normal food or see a doctor if a serious
... (read more)
8gwern
I think there will be a range of issues, from a few diehards hitting serious problems to people just having low-grade issues which they don't notice, because: they won't be randomizing blocks; effects similar to the hedonic treadmill will make it hard to compare over time; they'll get initial benefits from the usual placebo/Hawthorne/overjustification effects; and subjective self-rating has many known loopholes where you can think you're getting better even as you're actually getting worse. But regardless of the exact distribution or what the worst cases look like, we won't know, for the reasons I list above. Instead, we'll get another internet circle-jerk about how Soylent is awesome and the critics are wrong.
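As an illustration of the "randomizing blocks" mentioned above, here is a minimal sketch of what a blocked self-experiment schedule could look like (hypothetical Python, not code from anyone in the thread; the period lengths and condition names are made-up parameters):

```python
import random

def block_randomize(n_blocks, conditions=("soylent", "normal"), seed=None):
    """Assign diet conditions in randomized blocks.

    Each block contains every condition exactly once, in random order,
    so slow time trends (season, fitness, hedonic adaptation) are
    spread evenly across conditions instead of confounding them.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = list(conditions)
        rng.shuffle(block)
        schedule.extend(block)
    return schedule

# Example: twelve 2-week periods grouped into six blocks,
# each block containing one Soylent period and one normal-diet period.
print(block_randomize(6, seed=42))
```

Without a schedule like this, "I felt better in month three than in month one" can't distinguish the diet from everything else that changed in the meantime.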

The concept is good, but the methodology could have been significantly better. It has lots of potential, and the real danger is limited to those who will be consuming ONLY Soylent for extended periods. Using it to replace a meal or two a day, while still having one complete meal every day, shouldn't be dangerous (I think).

What confuses me about the negativity is, what's so bad about the current situation? The earliest of adopters will serve as a giant trial, and if there are problems they'll come up there.

Also: people who intend to switch to JUST soylent should ... (read more)

What confuses me about the negativity is, what's so bad about the current situation? The earliest of adopters will serve as a giant trial, and if there are problems they'll come up there.

No, they won't. Or, if they are interpretable as a trial, it'll be as the worst epidemiological survey ever run - no blinding, no followup, response bias out the wazoo, attrition, expectancy and Hawthorne effects already built in etc etc. You name a bias, this ('hand out goodies and hope someone will report problems') will have it. You ever wonder why we have things lik... (read more)

Beware of identifying in general. "We" are all quite different. Few if any of "us" can be considered reasonably rational by the standards of this site.

That's a good point, which I'll watch out for in the future.

With a sizable minority of theists here, why is this even an issue, except maybe for some heavily religious newcomers?

One thing I didn't specify is that this applies to discussions with non-LessWrongers about religion (or about LessWrong). On the site, there's no point in bothering with this identification process, because we're more likely to notice that we're generalizing and ask for an elaboration.

I'm thinking of making a Discussion post about this, but I'm not sure if it has already been mentioned.

We're not atheists - we're rationalists.

I think it's worth distinguishing ourselves from the "atheist" label. On the internet, and in society (what I've seen of it, which is limited), the label includes a certain kind of "militant atheist" who loves to pick fights with the religious and crusade against religion whenever possible. The arguments are, obviously, the same ones being used over and over again, and even people who would ide... (read more)

0ChristianKl
Where's the probability at the moment? How high would it rise after experiencing 1 hour of judgment day, with the world burning around you?
2CAE_Jones
Living in rural America, where atheism is still technically illegal in some places even though no one would dare enforce it, I think distinguishing the label "rational thinkers" from "atheists" is a very good idea. I don't think someone who considers themselves rational and theist would be particularly proud to associate with the label that best fits their particular brand of theism (Roman Catholicism and Mormonism seem to spawn subverters of this expectation, but reducing to the common category of "christian" seems to invoke way more cultural baggage). Or rather, I wouldn't dare call myself a christian or an atheist anywhere anyone could possibly find out about it. Smart people would dismiss me as inferior for the former; 90% of people within a 200mi radius would start hurling crosses at me for the latter. I will probably need allies in both groups, so I'm kinda concerned about this whole labels thing.
2Nornagest
I'm comfortable calling myself an atheist (though I rarely need to), but only because I believe in zero gods and that qualifies me for the label in the eyes of almost everyone. In other words, I treat "atheist" as a feature of my worldview, not as an identity. Sure, these evangelical atheists people are so concerned over might share that feature with me, but we also share opposable thumbs and a well-developed prefrontal cortex, and I'm not too worried about that. This seems like a common enough take on the word that I don't risk misunderstandings unless I'm dealing with people from highly religious subcultures who've never met an atheist in the wild. Inadvertent identity pollution might be an issue if the set of atheists was more narrowly defined, but there's no agenda attached to atheism and precious little in the way of unifying features besides the obvious.

On the other hand, I'm distinctly uncomfortable calling myself a rationalist. Partly because the term has a philosophical meaning which is quite unlike that common here, but mostly because it implies adopting a subcultural identity, and that's playing with fire: ingroup biases are so pervasive, and so easy to accidentally fall into, that you should generally only do that when you have a positive reason to. Even if that weren't the case, LW is such a small group, and in many ways such a strong outlier, that dressing up in a LW-specific identity is going to carry far more baggage in the outside world than a term as broad as "atheist" would.
2Larks
At least on LW and at the meetups I've been to, I haven't seen people claiming atheist identity. I agree with your prescription, but think most people obey it already.
7Shmi
Beware of identifying in general. "We" are all quite different. Few if any of "us" can be considered reasonably rational by the standards of this site. With a sizable minority of theists here, why is this even an issue, except maybe for some heavily religious newcomers?

A real-world adblock would be great; you could also use this type of augmented reality to improve your driving, walk through your city and see it in a completely different era, use it for something like the Oculus Rift... the possibilities are limitless.

Companies will act in their own self-interest, giving people what they want as opposed to what they need. Some of it will be amazingly beneficial, and some of it will be... not in a person's best interest. And it will depend on how people use it.

This is a community of intellectuals who love learning, and who aren't afraid of controversy. So for us, it wouldn't be a disaster. But I think we're a minority, and a lot of people will only see what they specifically want to see and won't learn very much on a regular basis.

2TheOtherDave
Sure, I agree. But that's true today, too. Some people choose to live in echo chambers, etc. Heck, some people are raised in echo chambers without ever choosing to live there. If people not learning very much is a bad thing, then surely the question to be asking is whether more or fewer people will end up not learning very much if we introduce a new factor into the system, right? That is, if giving me more control over what I learn makes me more likely to learn new things, it's good; if it makes me less likely, it's bad. (All else being equal, etc.) What I'm not convinced of is that increasing our control over what we can learn will result in less learning. That seems to depend on underestimating the existing chilling effect of it being difficult to learn what we want to learn.

A post from the sequences that jumps to mind is Interpersonal Entanglement:

When I consider how easily human existence could collapse into sterile simplicity, if just a single major value were eliminated, I get very protective of the complexity of human existence.

If people gain increased control of their reality, they might start simplifying it past the point where situations remain complex enough for their minds to grow and for them to learn new things. People will start interacting more and more with things that are specifically t... (read more)

4Viliam_Bur
I can imagine some good ways to control reality perception. For example, if an addicted person wants to stop smoking, it could be helpful to have a reality filter which removes all smoking-related advertising, and all related products in shops.

Generally, reality-controlling spam filters could be great. Imagine a reality-AdBlock that removes all advertising from your view, anywhere. (It could replace the advertisement with a gray area, so you are aware that there was something, and you can consciously decide to look at it.) Of course that would lead to an arms race with advertisement sellers.

Now here is an evil thing Google could do: if they make you wear Google glasses, they gain access to your physical body and can collect some information - for example, how much you like what you see. Then they can experiment with small changes in your vision to increase your satisfaction. In other words, very slow wireheading, targeting not your brain but your eyes.
1TheOtherDave
Presumably with increased control of my reality, my ability to learn new things increases, since what I know is an aspect of my reality (and rather an important one). The difficulty, if I'm understanding correctly, is not that I won't learn new things, but that I won't learn uncontrolled new things... that I'll be able to choose what I will and won't learn. The growth potential of my mind is limited, then, to what I choose for the growth potential of my mind to be. Is this optimal? Probably not. But I suspect it's an improvement over the situation most people are in right now.

I just realized I generalized too much. In Canada, you need a four-year Bachelor of Education specifically (the same length of schooling as for an engineer, and more than most trades). The average salary seems to be about the same as in the US.

Read the Sequences.

How did you find the site?

Why aren't teachers as respected as other professionals? It's too bad that the field is lower paid and less respected than other professional fields, because the quality of the teachers (probably) suffers as a consequence. There's a vicious cycle: teachers aren't highly respected --> parents and others don't respect their experience --> no one wants to go into teaching and teachers aren't motivated to excel --> teachers aren't highly respected.

It's almost surprising that I had so many excellent teachers through the years. The personal connection b... (read more)

5Petruchio
As a quick answer, I would say people appreciate what they pay for, and do not care about what they can have for free. Professors are respected, as are teachers at private schools and professional tutors. But when people speak of teachers, they usually mean public school teachers, who are essentially free (taxes notwithstanding). The logic of To Spread Science, Keep It Secret extends to education and educators as well: if educators were rare, expensive keepers of knowledge, then they would be coveted. And of course, since the government is the largest employer of teachers, it is able to keep their salaries low, which decreases the prestige and quality of teachers and leads to a vicious cycle downwards.

Really? The BBC thinks they're the second-highest-status profession, just after professor (and before CEO).

They're significantly better paid than you would expect given the qualifications required to be a teacher (none).

6Manfred
Here, have a summary. Until fairly recently, teaching was something you did until you got a real job, and that perception lingers. Add to that some people's resentment of teachers-as-authority or teachers-as-experts. Add to that the suspicious fact that male teachers are more well-respected than female teachers, but the profession is mostly women and is seen by a scary number of people as "women's work."

Upvote for meetups!

1) Job searching. It forces you to really size yourself up and compare yourself to everyone around you.

2) I really like your example. My own: feeling pressured to hang out with certain friends (because they'll feel neglected). Rather than realizing that friends who guilt you into seeing them need to be seen LESS, my brain just makes me feel bad.

I use a very similar formula, and it works pretty well. One thing I also do: spend 5 minutes clearing my mind before jumping into a 25-minute Pomodoro cycle. It helps shut down the feelings triggered by an Ugh field, and I find myself more focused.

Additionally, lazy-me LOVES having an excuse to do nothing for 5 more minutes.

I fully agree with the last paragraph. When it comes to valuing my time, the less free time I have, the more valuable it is, and the more reluctant I am to spend it (the value of an hour of my time isn't constant).

If I'm not very busy at a given time, it might not take much for me to go out of my way to help a friend; but if I'm really busy and they ask, there had better be some kind of incentive.

...upon reflection, that would be why people get paid time and a half for overtime.

What career paths are open to programmers? Do a lot of programmers go into management (head of a programming team), or specialize in something harder to learn? You seem to be saying that a programmer with 20 years of experience wouldn't have that much of an edge over someone with only 2 or 3 years' experience.

As an engineer, I can say that two of the popular paths are going into project management (or similar), or gaining a high degree of technical proficiency in certain domains. Either way, these types of positions really do require the extra experience.

1Viliam_Bur
I am aware of four paths, but maybe I am missing something: 1) Get into management. 2) Become an independent contractor, or start your own company. 3) Stay many years in one company and become an internal specialist on their software. 4) Work for a software company that sells to other software companies (e.g. Oracle), and become a specialist on their software. These paths are ordered by the number of people I know who took them, which is probably not a representative sample.

1) Getting into management brings a somewhat higher salary, but also more overtime and having to deal with bullshit on a daily basis. Which means that your actual salary is almost the same or even lower, but you are supposed to get a big bonus when the project is successfully finished on time. Depending on the company, you may need to talk with your customers daily, telling them a lot of buzzwords and assuring them that the things you actually have little control over will all end well and on time; also, if the customer wants to yell at someone, you are the person. If your company works for government, you will have to read a lot of paperwork, and cooperate with people who would prefer it if everything failed and they were just left alone. It is a great choice if you don't care about the content of your work, or if you don't really have good programming skills but enjoy feeling "important". On the other hand, if you are in IT because you love programming, well, that's exactly what you will not do.

2) This seems to be the best choice. It requires some skills I don't feel sure I have, such as networking and making deals with customers.

3) I could actually enjoy this, but there is a lot of risk. You have to spend many years in the same company, and there are some things that could go wrong; then a decade later you would start from zero again, because the skill is non-transferable. For example, the company could go bankrupt, or someone else could get your place because of e.g. nepotism.

4) I neve

Depending on your current career path, up to a certain age it's entirely possible to switch careers, even to something that makes concrete use of skills acquired in your previous career (so it's not a complete restart). Built-up capital or some other means of remaining self-sustaining for a period of time could allow you to return to school in something completely different.

I'm not speaking from experience though; I can guess that this type of situation would be difficult. But when saving, balancing the return on investment with the accessibility of the funds seems wise.

The ev-psych reason for the "strong leader" pattern is fitness variance in the competition between men. The leader (dominant male) would be able to impregnate a substantial proportion of the women in the tribe, while the least dominant males wouldn't reproduce at all. So males are much more competitive because the prize for winning is very high (potentially hundreds of children), while the cost of losing is very low (for women, the fitness variance is smaller because of the limit on the number of pregnancies in their lifetimes).

So it's a priso... (read more)

2kilobug
Yes, but that explains why people (especially males) want to be strong leaders (alpha males), not why people follow strong leaders. For people to follow strong leaders, they need to have an evolutionary advantage in doing so (hope of being the next leader, the leader granting some privileges to his most faithful followers, or something else, I don't know).

The autopilot problem seems to arise in the transition phase between the two pilots (the human and the machine). If the human alone does the task, he remains sufficiently skilled to handle emergency situations. Once the automation is powerful enough to handle all but the situations that even a fully-trained human wouldn't know how to handle, the deskilling of the human just allows him to focus on more important tasks.

To take the example of self-driving cars: the first iterations might not know how to deal with, say, a differently-configured ... (read more)

0Stuart_Armstrong
And the risky areas are those where the transition period is very long.

I was, but hadn't delved too deeply into it until just now. There actually is a pretty good structure there that I'll look at more closely.

I've been lurking for almost a year; I'm a 25-year-old mechanical engineer living in Montreal.

Like several people I've seen on the welcome thread, I had already figured out the general outline of reductionism before I found LW. A friend had been telling me about the site for a while, but I only really started paying attention when I found it independently while reading up on transhumanism (I was also a transhumanist before finding it here). Reading the sequences did a few things for me:

  • It filled in the gaps in my world-model (and fleshed out my transhumanist
... (read more)
0TheOtherDave
Are you aware of the LessWrong wiki?

Absence of Evidence is directly tied to having a probabilistic model of reality. There might be an inferential gap when people refer you to it, because on its own the argument doesn't seem strong. But it's a direct consequence of Bayesian reasoning, which IS a strong argument.
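To sketch the consequence (this is the standard conservation-of-expected-evidence argument, not something specific to that post):

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)$$

The prior $P(H)$ is a weighted average of the two possible posteriors, so if observing the evidence would raise your probability ($P(H \mid E) > P(H)$), then failing to observe it must lower it ($P(H \mid \neg E) < P(H)$). The update from absence is weak when $H$ only weakly predicts $E$, which may be why the argument feels underwhelming on its own.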

(Just to clarify: I didn't mean to accuse you of ignorance, and I sympathize with having everyone spam you with links to the same material, which must be aggravating.)

-1[anonymous]
It's certainly an important point, but I think that atheists tend to overuse it. I can't begin to criticize Bayesian reasoning, especially not here.

Remember, your post has (at the time of this comment at least) a score of 4. Subjects that are "taboo" on LessWrong are taboo because people tend to discuss them badly. You asked some legitimate questions, and some people provided you with good responses.

If you're willing to consider changing your mind, the next step would be to read the sequences. A lot of what you mention is answered there, such as:

  • Absence of evidence is evidence of absence
  • The Fallacy of Grey (specifically, when you mention that because we don't know the whole truth, we can... (read more)

-1[anonymous]
I've read several of the sequences, and I'm fairly familiar with this community's way of thinking. Everyone is referring me to Absence of Evidence; I think that it's a weak argument in the first place, but it also seems to be the only one a lot of people have.

Was there something in particular you were hoping to learn from them? I don't think the point of the exercise was to get an accurate profile of the female demographic on LessWrong, but to give people who wanted to speak up a chance/incentive to do so. The submitters would probably not have posted these on their own, but they did submit them when prompted.

The anecdotes may be more useful when you consider that someone felt she should say these things. If nothing else, the contradictions among the anecdotes hint that there is no universal element among women that drives them away from LW.

One thing I've noticed that helps a lot when discussing LW-style topics with non-rationalists (or rationalists who have not declared Crocker's Rules) is to reiterate the parts of their message that you agree with. It shows that you're actually listening to what they're saying, and not being confrontational for its own sake. As in:

Non-rationalist: "I believe X, and therefore Y and Z"

Instead of "Z doesn't follow from X because ..." Respond with "I agree with X, and also Y. But Z doesn't follow because..."

Even if, by using the ... (read more)