
Comment author: LawrenceC 23 July 2017 06:20:40AM 6 points [-]

I think the term "Dark Arts" is used by many in the community to refer to generic, truth-agnostic ways of getting people to change their mind. I agree that Scott Adams demonstrates mastery of persuasion techniques, and that this is indeed not necessarily evidence that he is not a "rationalist".

However, the specific claim made by James_Miller is that it is a "model rationalist disagreement". I think that since Adams used the persuasion techniques that Stabilizer mentioned above, it's pretty clear that it isn't a model rationalist disagreement.

Comment author: DanArmak 23 July 2017 11:21:41AM 1 point [-]

I agree, and I didn't mean to imply otherwise.

Comment author: Lumifer 22 July 2017 03:20:10AM 1 point [-]

Basically, for objectivists (with respect to morals), having some other morality is wrong. For relativists it's merely different. The former is a much stronger cause for intervention than the latter.

Also, willingness to insist on your morality is generally a sign of taking it seriously.

Comment author: DanArmak 22 July 2017 03:17:14PM 0 points [-]

The correlation between moral objectivism and interventionism is probably real, but I think it's historically contingent, not a logical consequence of objectivism. Whether I think of my morality as objective (universal) or subjective (a property of myself) is orthogonal to what I actually think is moral.

I'm a moral relativist. My morality is that torture and murder are wrong, and that I am justified, and indeed sometimes enjoined, to use force to stop them. I don't think this is an uncommon stance.

Other people are moral objectivists, but their actual morals may tell them to leave others alone except in self-defense.

Comment author: DanArmak 22 July 2017 02:51:41PM *  7 points [-]

I haven't listened to the debate (I'd read it if it was transcribed), but I want to object to a part of your post on a meta level, namely the part where you say:

To me, he is very far from a model for a rationalist

Being able to effectively convince people, to reliably influence their behavior, is perhaps the biggest general-purpose power a human can have. Don't dismiss an effective arguer as "not rationalist". On the contrary, acknowledge them as a scary rationalist more powerful than you are.

The word "rationalist" means something fairly narrow. We shouldn't make it into an applause light, a near synonym of "people we like and admire and are allied with". Being reliably effective, on the other hand, is a near synonym of being rational(ist).

If Adams employed "dark arts" in his debate, the only thing that necessarily means is that he wasn't engaged in an honest effort to discover the truth. But that's not news - it was a public debate staged in order to convince the audience! So Adams used a time-honored technique of achieving this goal - how very rational of him. At least, it's rational if he succeeded, and I assume you think he did succeed in convincing some of the audience, otherwise you wouldn't bother to post a denunciation.

Similarly, the name "Dark Arts" is misleading. They are (if I may channel Professor Quirrell for a moment) extremely powerful Arts everyone should cultivate if they can, and use where appropriate: not when honestly conversing with a fellow rationalist to discover the truth, but when aiming to convince people who are not themselves trained in rationality, and who (in your estimation) will not come by their beliefs rationally, whether or not they end up believing the truth.

This is a near cousin of politics (in the social sense, not the government sense). Politics is a mind-killer and it's important to keep politics-free spaces for various purposes including the pursuit of truth. But we should not say "rationalists should not engage in politics", any more than "rationalists should never try to convince non-rationalists of anything".

ETA: I'm not claiming Adams is a rationalist or is good at being a rationalist; I'm not familiar enough with him to tell. I'm only claiming that the fact that he is, or tries to be, a good persuader in a debate and uses Dark Arts isn't evidence that he isn't.

Comment author: DanArmak 22 July 2017 02:16:27PM *  1 point [-]

I feel the briefness of history is inseparable from its speed of change. Once the agricultural revolution got started, technology kept progressing and we got where we are today quite quickly - and that's despite several continental-scale collapses of civilization. So it's not very surprising that we are now contemplating various X-risks: to an external observer, humanity is a very brief phenomenon going back, and so it's likely to be brief going forward as well. Understanding this on an intuitive level helps when thinking about the Fermi paradox or the Doomsday Argument.

Comment author: ChristianKl 20 May 2017 06:55:38PM *  0 points [-]

People skills have great value for programmers

Yes, but we didn't disagree on the value of people skills; we disagreed on the value of social interaction outside of work. You mostly convince your coworkers while you are at work, not at social hangouts.

Convincing the rest of the world to adopt programming technique X is more likely done via the internet than through social hangouts.

Comment author: DanArmak 20 May 2017 07:22:48PM *  0 points [-]

I think you're mostly right about that, but not entirely. The two realms are not so clearly separated. There are social hangouts on the Internet. There are social hangouts, of both kinds, where people talk shop. There are programming blogs and forums where social communities emerge. And social capital and professional reputation feed into one another.

Comment author: DanArmak 20 May 2017 04:35:16PM *  2 points [-]

So that's the real role of the expert here

I work in the data science industry - as a programmer, not a data scientist or statistician. From my general understanding of the field, what you're describing is a broadly accepted assumption. But I might be misled by the fact that the company I work for bases its product on this assumption, so I'm not sure whether you're just describing this thing from another angle, whether there's a different point I'm missing, or whether, in fact, many people spend too much effort trying to hand-tune models.

The data scientists I work with build predictive models in two stages. The first is to invent (or choose) "features": natural variables from the original dataset, functions of one or more variables, or supplementary datasets they think are relevant. Here the data scientist applies their understanding of statistics, as well as domain knowledge, to tell the computer which things to look for and which are clearly false positives to be ignored. The second stage is to build the actual models using mostly standard algorithms like Random Forest or XGBoost or whatnot, where the data scientist might tweak hyperparameters but the underlying algorithm is generally given and doesn't allow for as much user choice.

A common toy example is the Titanic dataset: a list of passengers on the Titanic, with variables like age, name, ticket class, etc. The task is to build a model that predicts which passengers survived when the ship sank. A data scientist would mostly work on feature engineering, e.g. introducing a variable that deduces a passenger's sex from their name, and focus less on model tuning, e.g. determining the exact weight the model should give that feature (women and children had much higher rates of survival).
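For concreteness, here's a minimal sketch of that two-stage split on the Titanic data, using pandas and scikit-learn. The column names follow the public dataset; `titanic.csv` as a local file is an assumption for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("titanic.csv")  # assumed local copy of the public dataset

# Stage 1: feature engineering. Deduce sex and age signals from the raw
# columns, e.g. from the honorific embedded in the passenger's name
# ("Braund, Mr. Owen Harris" -> "Mr").
df["Title"] = df["Name"].str.extract(r",\s*([^.]+)\.", expand=False)
df["IsFemale"] = df["Title"].isin(["Mrs", "Miss", "Ms", "Mme", "Mlle"]).astype(int)
df["IsChild"] = (df["Age"].fillna(df["Age"].median()) < 13).astype(int)

features = ["Pclass", "IsFemale", "IsChild"]

# Stage 2: a standard off-the-shelf algorithm. The weight given to each
# feature ("women and children first") is learned, not hand-set.
model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, df[features], df["Survived"], cv=5).mean())
```

Nearly all of the human judgment sits in stage 1; stage 2 is close to boilerplate.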

In a more serious example, a data scientist might work on figuring out which generic datasets are relevant at all. Suppose you're trying to predict where to best open a new Starbucks branch. Should you look at the locations of competing coffee shops? Noise from nearby construction? Public transit stops or parking lots? Nearby tourist attractions or campuses or who knows what else? You can't really afford to look at everything: it would take too long (and maybe cost too much) and risk false positives. A good domain expert is the one who generates the best hypotheses. But to actually test those hypotheses, you use standard algorithms to build predictive models, and if a simple linear model works, that's a good thing - it shows your chosen features were really powerful predictors.
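A sketch of what testing those hypotheses might look like mechanically: fit the same simple model on each candidate feature set and compare cross-validated scores. All the feature names and data below are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Invented data about existing branches; in reality each column would
# come from one of the candidate supplementary datasets.
branches = pd.DataFrame({
    "competing_shops_500m": rng.poisson(3, n),
    "bus_stops_500m": rng.poisson(5, n),
    "tourist_attractions_1km": rng.poisson(2, n),
})
branches["monthly_revenue"] = (
    20.0
    + 4.0 * branches["bus_stops_500m"]
    - 2.0 * branches["competing_shops_500m"]
    + rng.normal(0, 3, n)
)

# Each hypothesis is just a feature set proposed by the domain expert.
candidates = {
    "competitors": ["competing_shops_500m"],
    "transit": ["bus_stops_500m"],
    "tourism": ["tourist_attractions_1km"],
}

# The model is deliberately simple and identical across hypotheses;
# only the feature set varies, so the scores compare the hypotheses.
for name, cols in candidates.items():
    r2 = cross_val_score(LinearRegression(), branches[cols],
                         branches["monthly_revenue"], cv=5).mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```

If the plain linear model scores well on some feature set, that set carries strong signal on its own - the point made above.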

Comment author: ChristianKl 19 May 2017 07:35:38PM 0 points [-]

Even computer programmers who spend the majority of their working hours working alone can benefit a lot from having good connections when it comes to finding good jobs.

Finding jobs isn't the only thing where social connections help. If you have a health issue, it can help a lot to have a friend who knows a good doctor. If the friend has a personal relationship with the doctor, it might mean you get an immediate appointment instead of having to wait weeks.

I personally don't do social events like board game nights that are basically superficial fun, preferring events that provide additional value, but I think it's a mistake to see social events in general as low value.

Comment author: DanArmak 20 May 2017 04:04:01PM *  1 point [-]

Even computer programmers who spend the majority of their working hours working alone can benefit a lot from having good connections when it comes to finding good jobs.

People skills have great value for programmers, and finding jobs is a very small part of it. I write this from personal experience.

Programmers are still people. The amount of great software any one person can write in their lifetime is very limited. Teaching or convincing others (from coworkers to the rest of the world) to agree with you on what makes software great, to write great software themselves, and to use it is the greatest force multiplier any programmer can have, just as in most other fields.

Sometimes there are exceptions; one may invent a new algorithm or write some new software that everyone agrees is great. But most of the time you have to convince people - not just less-technical managers making purchasing decisions, but other programmers who don't think that global mutable state is a problem, really, it worked fine in my grandfather's time and it's good enough for me.

Comment author: juliawise 22 April 2017 01:27:27AM 3 points [-]

Yeah, I remember around 2007 a friend saying her parents weren't sure whether it was right for them to have children circa 1983, because they thought nuclear war was very likely to destroy the world soon. I thought that was so weird and had never heard of anyone having that viewpoint before, and definitely considered myself living in a time when we no longer had to worry about apocalypse.

Comment author: DanArmak 20 May 2017 03:10:56PM 0 points [-]

I don't understand that viewpoint for a different reason. Suppose you believe the world will be destroyed soon. Why is that a reason not to have children? Is it worse for the children to live short but presumably good lives than not to live at all?

Comment author: username2 12 February 2017 05:09:08AM 1 point [-]

What's materially different about a God-based religion and the science-centered rationality cult? Other than our miracles actually being real, that is.

I almost said "verifiably real", but therein lies the crux of the issue. A religion is basically a foundational system of beliefs, and a framework for constructing new beliefs. That includes even how you verify the truthfulness of statements. Blanket-calling religion of all sorts 'stupidity' is oversimplifying the situation, to say the least.

Comment author: DanArmak 12 February 2017 05:52:20PM 1 point [-]

The post doesn't say that all religion is stupidity. It says that one of the things we call stupidity is subconscious conditioning, and that one common case of such conditioning is religion. A subset of religion and a subset of stupidity, intersecting. Do you think that's wrong?

Comment author: turchin 11 February 2017 06:19:57PM 0 points [-]

If our universe is a test simulation, it is a digital experiment to test something, and if it includes AI, it is probably designed to test AI behaviour by putting it in complex moral dilemmas.

So Omega is not interested in the humans in this simulation. It is interested in Beta's behaviour toward the humans.

If there were no human suffering, it would be clear that this is a simulation, and it would not be a pure test. Alpha must hide its existence and only hint at it.

Comment author: DanArmak 11 February 2017 08:37:04PM 1 point [-]

Why do you assume any of this?

If our universe is a test simulation, it is a digital experiment to test something,

That's a tautology. But if you meant "if our universe is a simulation", then why do you think it must be a test simulation in particular? As opposed to a research simulation to see what happens, or a simulation to make qualia because the simulated beings' lives have value to the simulators, or a simulation for entertainment value, or anything else.

if it includes AI, it is probably designed to test AI behaviour by putting it in complex moral dilemmas.

Maybe the desired outcome from the simulators' point of view is to develop a paperclipping AI that isn't swayed by human moral arguments. Maybe the simulation is really about the humans, and AIs are just inevitable byproducts of high-tech humans. There are lots of maybes. Do you have any evidence for this, conditional on being a simulation?
