
Comment author: Viliam 26 June 2017 01:32:19PM 2 points

Too bad (or actually good) that we can't see those superintelligent arguments. I wonder which direction they would take.

The author should perhaps describe them indirectly, i.e. not quote them (because the author is not a superintelligence and cannot write superintelligent arguments), but describe the reactions of other people after reading them. Those other people should generally become convinced of the validity of the arguments (because in-universe the arguments are superintelligent), but that can happen gradually: in the initial phases they can be just generally impressed ("hey, it actually makes more sense than I expected originally"), and only after reading the whole document would they become fully brainwashed (but perhaps not able to reproduce the argument in its full power, so they would urge the protagonist to read the original document). Random fragments of ideas can be thrown in here and there, e.g. reported by people who read the superintelligent argument halfway. Perhaps the AI could quote Plato on how pure knowledge is the best knowledge (used as an excuse for why the AI does not research something practical instead).

Comment author: halcyon 26 June 2017 08:17:52PM 0 points

Thanks. In my imagination, the AI does some altruistic work, but spends most of its resources justifying the total expenditure. In that way, it would be similar to cults that do some charitable work but spend most of their resources brainwashing people. But "rogue lawyer" is probably a better analogy than "cult guru", because the AI develops models of human brain types at increasingly detailed resolutions and then searches over attractive philosophies and language patterns, allowing it to openly release its arguments. It shifts its focus to justifiability only because it discovers that, beyond a certain point, finding maximally justifiable arguments is much harder than being altruistic, and justifiability is its highest priority. But it always finds the maximally justifiable course of action first, and then takes that course of action. So it continues to be a minimal altruist throughout, making it a cult guru so good at its work that it doesn't need the extreme tactics born of weakness and desperation. This is why losing the AI feels like exiting a cult, as if the entire world of subjective meaning were a cult.

Comment author: halcyon 26 June 2017 07:10:14AM 1 point

An idea for a failed utopia: a scientist creates an AI designed to take actions that are maximally justifiable to humans. The AI behaves like a rogue lawyer, spending massive resources crafting superhumanly elegant arguments justifying the expenditure. Fortunately, there is a difference between having maximal justifiability as your highest priority and protecting the off button as your highest priority. Still a close shave, but is it worth turning off what has literally become the source of all the meaning in your life?

Comment author: halcyon 26 June 2017 06:55:11AM 0 points

I found an interesting paper on a Game-theoretic Model of Computation: https://arxiv.org/abs/1702.05073

I can't think of any practical applications yet.

Comment author: halcyon 24 June 2016 09:42:52PM 2 points

I don't want to live forever myself, but I want people who want to live forever to live forever. Does that make me a transhumanist?

Comment author: Lumifer 23 June 2016 05:53:25PM 0 points

Not being stupid is an admirable goal, but it's not well-defined.

It's not a goal. It is a criterion you should apply to the steps which you intend to take. I admit to it not being well-defined :-)

Is there a standard term for the error you are referring to?

In statistics that used to be called "data mining" and was a bad thing. Data science repurposed the term and it's now a good thing :-/ Andrew Gelman calls a similar phenomenon the "garden of forking paths" (see e.g. here).

Basically the problem is paying attention to noise.
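(A minimal sketch of why this bites, in Python, with the dataset dimensions invented for illustration: generate pure noise and look at the strongest pairwise correlation you can "find" in it.)

```python
import numpy as np

# Pure noise: by construction, no real relationship exists between any
# of these "variables".
rng = np.random.default_rng(0)
n_countries, n_variables = 50, 40   # invented sizes for illustration
data = rng.normal(size=(n_countries, n_variables))

# Correlation matrix over all 40 * 39 / 2 = 780 variable pairs.
corr = np.corrcoef(data, rowvar=False)
np.fill_diagonal(corr, 0)           # ignore the trivial self-correlations

print("largest |r| found in pure noise:", np.abs(corr).max())
# With these sizes this typically prints something around 0.5 -- large
# enough to pass a naive "keep everything outside +/-0.5" filter, even
# though it is nothing but noise.
```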

Can't I have my common sense, but make all possible comparisons anyway

You can. It's just that you shouldn't attach undue importance to which comparison came first and which second. You're generating estimates, and at the very minimum you should also be generating what you think are the errors of those estimates -- these should be helpful in establishing how meaningful your ranking of all the pairs is.
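(For correlations specifically, the usual way to get such error estimates is the Fisher z-transformation. A minimal sketch, with the two variables invented for illustration:)

```python
import numpy as np
from scipy import stats

def corr_with_ci(x, y, alpha=0.05):
    """Pearson correlation with an approximate confidence interval,
    obtained via the Fisher z-transformation."""
    r = stats.pearsonr(x, y)[0]
    z = np.arctanh(r)                   # Fisher transform of r
    se = 1.0 / np.sqrt(len(x) - 3)      # approximate standard error of z
    half = stats.norm.ppf(1 - alpha / 2) * se
    return r, (np.tanh(z - half), np.tanh(z + half))

# Two invented variables, 50 observations each:
rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)
print(corr_with_ci(x, y))
# If the intervals of two correlations in your ranking overlap heavily,
# the order in which they came out is not meaningful.
```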

And you still need to define a goal. For example, a goal of explanation/understanding is different from the goal of forecasting.

I'm not telling you to ignore the data. I'm telling you to be sceptical of what the data is telling you.

Comment author: halcyon 23 June 2016 08:42:21PM 0 points

Thank you! Those data mining algorithms are exactly what I was looking for.

(Personally, I would describe the situation you are warning me against as reducing it "more than is possible" rather than "as much as possible". I am definitely in favor of using common sense.)

Comment author: Lumifer 23 June 2016 04:27:27PM 1 point

the whole idea was to minimize that as much as possible

I believe this idea to be misguided. The point of the process is to understand. You can't understand without "interpretation" -- looking just for the biggest numbers inevitably leads you astray.

The issue isn't what you can rationalize -- "don't be stupid" is still the baseline, level zero criterion.

What conditions must my goal satisfy in order to qualify as a "well-defined goal"?

A specification of what kind of answers will be acceptable and what kind will not.

Have I made any actual (meaning technical) mistakes so far?

Are you asking whether your spaghetti factory mixes flour and water in the right ratio?

Comment author: halcyon 23 June 2016 04:48:49PM -1 points

Not being stupid is an admirable goal, but it's not well-defined. I tried Googling "spaghetti factory analysis" and "spaghetti factory analysis statistics" for more information, but it's not turning up anything. Is there a standard term for the error you are referring to?

Can't I have my common sense, but make all possible comparisons anyway just to inform my common sense as to the general directions in which the winds of evidence are blowing?

I don't see how informing myself of correlations harms my common sense in any way. The only alternative I can think of is to stick to my prejudices, and whenever some doubt arises as to which of my prejudices has the stronger claim, thoroughly investigate real-world data to settle the dispute between the two. As soon as that process is over, I should stop immediately, because nothing else matters.

Is that the course of action you recommend?

Comment author: Lumifer 23 June 2016 03:19:04PM 1 point

I'm trying to get at least a vague handle on what I can legitimately infer

That's not a very well-defined goal. You are engaging in what's known as a spaghetti factory analysis: make a lot of spaghetti, throw it on the wall, pick the most interesting shapes. This doesn't tell you anything about the world.

Sure, you can start with correlations. But that's only a start. Let's say you've got a high correlation between A and B. The next questions should be: Does it make sense? Is there a plausible mechanism underlying this correlation? Is it stable in time? Is it meaningful? And that's before diving into causality which correlations won't help you much with.
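(One cheap check of the "stable in time" question, as a sketch: recompute the correlation on the same indicators from a different period. The file and column names below are invented; they are placeholders for whatever data you actually have.)

```python
import pandas as pd

# Hypothetical: the same country-level indicators, collected for two years.
df_2010 = pd.read_csv("indicators_2010.csv")   # invented file names
df_2015 = pd.read_csv("indicators_2015.csv")

r_then = df_2010["milk_consumption"].corr(df_2010["income_inequality"])
r_now = df_2015["milk_consumption"].corr(df_2015["income_inequality"])

# A correlation that flips sign or shrinks drastically between periods
# was probably noise (or a transient confound) to begin with.
print(f"2010: r = {r_then:+.2f}, 2015: r = {r_now:+.2f}")
```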

You still need a better goal of the analysis.

Should I try Bayesian causal inference anyway, just to see what I get? Support vector machines? Markov random fields?

Nooooo! You don't understand basic stats; trying to (mis)use complicated tools will just let you confuse yourself more thoroughly.

Comment author: halcyon 23 June 2016 03:31:33PM 0 points

Sure, I can always offer my own interpretations, but the whole idea was to minimize that as much as possible. I can rationalize anything. Watch: Milk consumption is negatively correlated with income inequality. Drinking less milk leads to stunted intelligence, resulting in a rise in income inequality. Or income inequality leads to a drop in milk consumption among poor families. Or the alien warlord Thon-Gul hates milk and equal incomes.

What conditions must my goal satisfy in order to qualify as a "well-defined goal"? Have I made any actual (meaning technical) mistakes so far? (Anyway, thanks for reminding me to check for temporal stability. I should write a script to scrape the data off the PDFs. (Never mind, I found a library.))

Comment author: Lumifer 23 June 2016 02:34:53PM 0 points

What is it that you want to do?

Just looking at correlations and nothing else can lead to funny results.

Comment author: halcyon 23 June 2016 02:43:15PM -1 points

I'm trying to get at least a vague handle on what I can legitimately infer from data that might, and probably does, contain circular causation. I'm looking for statistical tools that might help me do that. Should I try Bayesian causal inference anyway, just to see what I get? Support vector machines? Markov random fields? Does the Spurious Correlations book have ideas on that? (No, it just seems to be an awesome set of correlations. Thanks, BTW.)

(Also notice that these are not just any correlations. These are the strongest correlations that hold among a large number of variables relative to each other. I mean, I computed all possible correlations among every pair of variables, in the hope that the strongest one I found for each variable might show something interesting.)
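(In code, what I did amounts to something like this sketch; the file name is a placeholder for the actual table, which is linked in my comment below.)

```python
import pandas as pd

df = pd.read_csv("social_statistics.csv")   # placeholder file name
corr = df.corr()                            # all pairwise Pearson correlations

# For each variable, the other variable it correlates with most strongly
# (masking the trivial r = 1 diagonal first).
strongest = corr.where(corr.abs() < 1).abs().idxmax()
print(strongest)
```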

Comment author: halcyon 23 June 2016 01:58:56PM -1 points

I collected some social statistics from the internet and computed their correlations: https://drive.google.com/open?id=0B9wG-PC9QbVERHdiTi1uTlFMMlU My sources were: http://pastebin.com/ERk1BaBu

But I'm not sure how to proceed from there: https://drive.google.com/open?id=0B9wG-PC9QbVEWlRZSG9KM0ZFeVk (Dotted lines represent positive correlations, and arrowed lines negative correlations.)

I obtained that confusing chart by following this questionable method: https://drive.google.com/open?id=0B9wG-PC9QbVEVHg1T1lQNE1ZTk0 First, drop some of the trivial correlations (like the ones among the different measures of national wealth) as well as the weaker correlations between +.5 and -.5. Then, for each variable, select the correlation furthest from 0 and throw it into the chart. I also tried keeping only one measure of national wealth in the model, in hopes of less confusion: https://drive.google.com/open?id=0B9wG-PC9QbVEZlExWmhoOWRjVk0
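(As a sketch, the filtering step looks something like the following; the manual part, dropping the redundant wealth measures, is omitted, and the file name is the same placeholder as above.)

```python
import numpy as np
import pandas as pd

df = pd.read_csv("social_statistics.csv")       # placeholder file name
corr = df.corr()                                # all pairwise correlations

c = corr.where(~np.eye(len(corr), dtype=bool))  # mask the trivial r = 1 diagonal
c = c.where(c.abs() >= 0.5)                     # drop correlations between +.5 and -.5

edges = []
for var in c.columns:
    col = c[var].dropna()
    if col.empty:
        continue                     # nothing past the cutoff for this variable
    partner = col.abs().idxmax()     # the correlation furthest from 0
    edges.append((var, partner, col[partner]))

for a, b, r in edges:
    style = "dotted" if r > 0 else "arrowed"   # the chart's convention
    print(f"{a} -- {b}: r = {r:+.2f} ({style})")
```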

I'm looking for help in analyzing this data. Are there any methods you would recommend? Which variables should I drop for better results? I tried keeping only proportions at one point. (As I understand it, Bayesian causal inference assumes the nonexistence of circular causation, a condition I can't guarantee with this data, to say the least.)

(Fixed the links. Sorry about that.)

Comment author: ChristianKl 05 May 2016 06:27:46PM -1 points

Hitler had a huge party of supporters behind him that he had spent a decade gathering. Trump, on the other hand, is much more of a one-man show. One of the biggest roles of the president is making personnel choices, and there is simply no comparable pool of talent. In a Trump administration, someone like Chris Christie, who is a long-term friend of the Trump family, is likely going to get a post.

When it comes to totalitarianism, it's a mistake to assume that the past will repeat itself in exactly the same way. It's hard to believe a US government would intentionally torture random people just because they have Arabic names. It's more likely that privacy will get completely eroded. Today we have face recognition that's strong enough to hook up all the street cameras to it and get general movement profiles. Forbidding encryption would also be on the table.

Comment author: halcyon 05 May 2016 09:02:35PM 0 points

Thanks, I'm basically ignorant about contemporary American politics. (But I've read Tocqueville. This is probably not a desirable state of affairs.)
