
Open thread, August 21 - August 27, 2017

1 Post author: Thomas 21 August 2017 06:13AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and …

Comments (70)

Comment author: cousin_it 23 August 2017 01:34:22PM *  7 points [-]

Here's an old puzzle:

Alice: How can we formalize the idea of "surprise"?

Bob: I think surprise is seeing an event of low probability.

Alice: This morning I saw a car whose license plate said 3817, and that didn't surprise me at all!

Bob: Huh.

For everyone still wondering about that, here's the correct answer! The numerical measure of surprise is information gain (Kullback-Leibler divergence) from your prior to your posterior over models after updating on the data. That gives the intuitive answer to the above puzzle, as long as none of your models assigned high probability to 3817 in advance. It also works for the opposite case, if you expected an ordered string but got a random one, or ordered in a different way.

This is actually well known, I just wanted to put it on LW.
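A minimal sketch of the calculation in Python (the two models, the "memorable plates" list, and the priors are all made up for illustration, not taken from the comment):

    import math

    # Two toy models of how 4-digit plates are generated:
    #   "random":  every 4-digit string is equally likely
    #   "special": the plate comes from a short list of memorable strings
    special = {"7777", "1234", "0000", "4242"}

    def likelihood(model, plate):
        if model == "random":
            return 1 / 10000
        return 1 / len(special) if plate in special else 0.0

    prior = {"random": 0.999, "special": 0.001}

    def surprise(plate):
        # Bayesian update over models, then KL divergence from prior to posterior (in bits)
        unnorm = {m: prior[m] * likelihood(m, plate) for m in prior}
        z = sum(unnorm.values())
        post = {m: p / z for m, p in unnorm.items()}
        kl = sum(p * math.log2(p / prior[m]) for m, p in post.items() if p > 0)
        return post, kl

    print(surprise("3817"))  # posterior ~(1.0, 0.0), KL ~ 0.001 bits: no surprise
    print(surprise("7777"))  # posterior ~(0.29, 0.71), KL ~ 6 bits: surprising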

Comment author: arundelo 23 August 2017 05:04:49PM 1 point [-]

Just to make sure I understand prior and posterior over models, is the following about right?

  • Alice starts with a prior of 0.999 that non-vanity plates are generated basically randomly (according to some rule of "N letters followed by M digits" or whatever, and with rules e.g. preventing swear words).
  • Alice sees "3817" (having seen many other 4-digit plates previously).
  • Alice's posterior probability over models is still about 0.999 on the same model.
Comment author: cousin_it 23 August 2017 09:34:08PM 1 point [-]

Yeah.

Comment author: Dagon 24 August 2017 09:25:05PM 0 points [-]

Wait. If you're talking about surprise because you have said "update your model based on how surprised you are", you can't turn around and say "surprise is defined by how much you should update your model". "update your model based on how much you should update your model" isn't very helpful.

Comment author: arundelo 25 August 2017 06:31:36AM 0 points [-]

The intuitive sense of what surprise is corresponds well to the rules for updating your probability distribution over models, which we can therefore take as a formal definition of surprise.

Comment author: IlyaShpitser 23 August 2017 04:17:24PM 0 points [-]

How is a Frequentist surprised?

Comment author: cousin_it 23 August 2017 04:33:38PM *  0 points [-]

I'm missing a lot of knowledge to answer that. Can you?

Comment author: IlyaShpitser 23 August 2017 05:41:44PM *  0 points [-]

Presumably, F folks talk about how "surprised" an element of a statistical model is, relative to observed data (maximum likelihood as minimizing surprise in KL sense). That's about all I can think of.
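One way to see the "maximum likelihood as minimizing KL surprise" point: for an empirical distribution over the data, the KL divergence from it to a model differs from the average negative log likelihood only by a constant, so the parameter that maximizes likelihood also minimizes the divergence. A toy check (all numbers illustrative):

    import numpy as np

    data = np.array([0, 0, 1, 0, 1, 1, 1, 1])   # Bernoulli observations
    emp = np.array([(data == 0).mean(), (data == 1).mean()])

    def kl(p, q):
        return sum(pi * np.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    thetas = np.linspace(0.01, 0.99, 99)
    loglik = [np.log(np.where(data == 1, t, 1 - t)).sum() for t in thetas]
    kls = [kl(emp, np.array([1 - t, t])) for t in thetas]
    # Both pick the same theta (the MLE, 5/8 up to grid resolution)
    print(thetas[np.argmax(loglik)], thetas[np.argmin(kls)])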

Comment author: gwern 25 August 2017 02:40:29AM 4 points [-]

Grognor has reportedly died: https://twitter.com/MakerOfDecision/status/898625422270889984

Sad. He didn't like me, but I mostly liked him.

Comment author: Vaniver 27 August 2017 10:41:05PM *  1 point [-]

Boo death.

I maintain the list: Less Wrong on Twitter. No, really, I actually maintain it. I request that you don't turn it into a wiki article unless I die.

Comment author: halcyon 24 August 2017 03:33:27PM 4 points [-]

A better explanation of the Monty Hall problem:

A game show host always plays the following game: First he shows you 3 doors and informs you there is a prize behind one of them. After allowing you to select one of the doors, he throws open one of the other doors, showing you that it's empty. He then offers you a deal: Stick to your original guess, or switch to the remaining door?

What is the most important piece of information in this problem statement? I claim that the bit that ought to shock you is that the host plays this game all the time, and the door he throws open ALWAYS turns out to be empty. Think about it: If the host randomly throws open a door, then in every third show, the door he opens would have the prize behind it. That would ruin the game!

The host knows which door has the prize, and in order not to lose the interest of the spectators, he deliberately opens an empty door every time. What this means is that the door you chose was selected randomly, but the door that the host DIDN'T choose is selected on the basis of a predictable algorithm. Namely, having the prize behind it.

This is the real reason why you would do better if you switched your guess to the remaining door.

What do you think? Is that clearer than the usual explanations?

Comment author: Oscar_Cunningham 24 August 2017 05:09:46PM 2 points [-]

Yeah, I think it's better. It highlights the flow of knowledge: where the prize is -> host's knowledge -> which door he opens -> player's knowledge.

I'd maybe change the phrase "predictable algorithm", since the host's actions aren't predictable to the player. Maybe

but the door that the host DIDN'T choose is selected on the basis of a predictable algorithm. Namely, having the prize behind it.

could be replaced by

but the door that the host DIDN'T choose is selected on the basis of the host's knowledge of where the prize is. His choice can therefore give you information about where the prize might be: namely, it's more likely to be the door he avoided.

or something similar?

Comment author: halcyon 24 August 2017 05:46:54PM *  1 point [-]

Thanks. You're right, that part should be expanded. How about:

At this point, you have two choices: Either 1. one randomly selected door, or 2. one door among two doors, chosen by the host on the basis of the other not having the prize.

You would have better luck with option 2 because choosing that door is as good as opening two randomly selected doors. That is twice as good as opening one randomly selected door as in option 1.
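A quick Monte Carlo check of that claim (illustrative code, not from the comment):

    import random

    def play(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            prize = random.randrange(3)
            pick = random.randrange(3)
            # The host opens an empty door that is neither the player's pick nor the prize
            # (when the pick is the prize, he just opens the lower-numbered empty door;
            # this doesn't affect the win rates)
            opened = next(d for d in range(3) if d != pick and d != prize)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == prize)
        return wins / trials

    print(play(switch=False))  # ~1/3: one randomly selected door
    print(play(switch=True))   # ~2/3: the door selected via the host's knowledge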

Comment author: Oscar_Cunningham 25 August 2017 08:58:36AM 0 points [-]

Yeah, I like that.

Comment author: MaryCh 24 August 2017 06:49:45AM 1 point [-]
Comment author: g_pepper 24 August 2017 02:19:45PM 1 point [-]

Well, I guess I won't be complaining about my neighbor's lawn flamingos any more after reading that!

Comment author: MaryCh 24 August 2017 03:47:00PM 0 points [-]

Huh. We have lawn storks here. Or, rather, roof storks. Don't know what they are made from, but possibly metal, from the look of those necks.

Comment author: ChristianKl 24 August 2017 12:32:20PM 0 points [-]

Given that the linked article isn't in English, what is it about?

Comment author: gjm 24 August 2017 01:08:26PM 1 point [-]

A house near Minsk, just like MaryCh's link text says. Here, have Google Translate: https://translate.google.co.uk/translate?hl=en&sl=ru&tl=en&u=https%3A%2F%2Frealty.tut.by%2Fnews%2Fofftop-realty%2F557027.html

Comment author: Thomas 21 August 2017 06:16:29AM 1 point [-]
Comment author: Oscar_Cunningham 21 August 2017 07:33:29AM 0 points [-]

I suspect the most difficult bit of the problem is defining what we mean by "the length of Antarctica's shore". Crinkles below a certain size are irrelevant because water can't flow over them. So we mean the length of the shore as measured by a ruler whose length is the capillary length of water in air, which is 2.7 mm. Of course no one has ever measured this, but perhaps we can estimate it by using coarser measurements and fitting a curve to them.

Comment author: Thomas 21 August 2017 07:40:46AM 0 points [-]

Yes, this is the trickiest part. According to some French jokes, Slovenia has 42 kilometers of coast. I agree. This is still not the funny part of those jokes; this is the factual part.

Several thousand kilometers, maybe 10 thousand kilometers of Antarctica's coast by the same methodology.

Comment author: Oscar_Cunningham 21 August 2017 11:03:35AM 2 points [-]

According to this amazing paper, Antarctica has a coastline of 39849 km when measured at the 100 m scale, and 43449 km when measured at the 25 m scale. They say its fractal dimension is 1.096448. Fitting a curve of the form L = M*r^(1-1.096448) to those two points, I get L = 107349 km for r = 2.7 mm. This methodology is perhaps nonoptimal, but I think it's the best we've got.

So for the purposes of this question I'll take the perimeter of Antarctica to be 100 000 km. Wikipedia says the total area of the ocean is 360 000 000 km^2. So to rise 6 m needs a volume of 2.16 * 10^15 m^3. A century is 3.16 * 10^9 s, so we need 6.84 * 10^5 m^3s^-1. The Amazon averages 2.09 * 10^5 m^3s^-1, so we need about three of them. If the coast of the Antarctic is 10^8 m then we need 6.84 litres flowing over each meter every second.
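A sketch of the arithmetic above (exactly how the fixed-dimension curve gets fitted to the two data points is my assumption; the rest just restates the numbers in the comment):

    import math

    # Coastline data from the paper: (ruler length in m, coastline in km)
    points = [(100, 39849), (25, 43449)]
    D = 1.096448                       # fractal dimension reported by the paper

    # Fit L = M * r^(1-D) with D fixed, averaging log M over the two points
    logM = sum(math.log(L) - (1 - D) * math.log(r) for r, L in points) / len(points)
    L_capillary = math.exp(logM + (1 - D) * math.log(0.0027))
    print(L_capillary)                 # ~107,000 km at r = 2.7 mm

    # Flow needed for a 6 m rise in a century
    ocean_area = 360e6 * 1e6           # m^2
    flow = 6 * ocean_area / 3.16e9     # ~6.8e5 m^3/s
    print(flow / 2.09e5)               # ~3.3 Amazons
    print(1000 * flow / 1e8)           # ~6.8 L/s per metre of a 100,000 km coast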

Comment author: Thomas 21 August 2017 04:52:27PM *  0 points [-]

I'll take the perimeter of Antarctica to be 100 000 km.

The equator is 40 000 km long. Antarctica can't be 2.5 times longer. The Polar circle is what - about 8000 km long.

The beaches of Antarctica must be shorter than that.

EDIT: Or at most twice as long.

Comment author: Oscar_Cunningham 21 August 2017 05:38:07PM 0 points [-]

You're wrong here. See the coastline paradox. Lines can be as long as they want, just by being extremely crinkly. There's no law that says a shorter curve cannot enclose a longer one.

Comment author: Thomas 21 August 2017 06:08:18PM 0 points [-]

I am right here. Those small bays are not important in this case, when we want to calculate the amount of water pouring out to sea. The mouth of the river Amazon is 200 km wide. Not as wide as the sum of all underwater bays and peninsulas.

Comment author: Oscar_Cunningham 21 August 2017 08:15:17PM 0 points [-]

Okay. So when I was calculating how many Amazons were needed the perimeter didn't matter, and the answer was just 3. But when you asked how many litres would be pouring over each meter of perimeter I did the calculation based on the idea that an equal amount of water was passing over each bit of the perimeter.

Otherwise the answer is of course that the water forms together into rivers so that most of the perimeter has no water passing over it but the mouths of the rivers have a great deal of water passing over them.

Comment author: Thomas 21 August 2017 12:38:32PM 0 points [-]

Three Amazons are the right answer. AFAIK, the biggest river there is approximately as large as the biggest river on the island of Crete. Which may be beautiful, but is quite lousy in cubic meters per second.

Where and how some people see three Amazons on Antarctica is a mystery to me. The amount of ice falling directly into the sea is quite pathetic as well.

But mostly, I love how the arithmetic is reigning supreme above all the sciences.

Comment author: Unnamed 22 August 2017 12:58:02AM 0 points [-]

Wikipedia is another nice source of info. It claims that, during the past 20,000 years, the fastest increase in sea level was around 5 meters per century.

(The page on sea level rise mentions 3 meltwater pulses; clicking through it looks like Meltwater Pulse 1A is the one that researchers are the most confident about.)

Comment author: Thomas 22 August 2017 07:18:11AM 0 points [-]

This increase has some geological traces in the state of Washington. That was the North American glacier melting, for the most part. We don't see much of that kind of flooding on Greenland or Antarctica recently. This is a real thing.

I am certain that if your arithmetic isn't sound, then your science is most likely bogus, no matter how fancy it looks.

Comment author: g_pepper 21 August 2017 03:32:46PM 0 points [-]

But mostly, I love how the arithmetic is reigning supreme above all the sciences.

This was a good puzzle, but I don't see how it follows from the puzzle that arithmetic is "reigning supreme" above all the sciences. For one thing, I thought that most scientific estimates of sea level rise over the next 100 years were a lot lower than 6 meters. Do you have any links to projections of 6 meters?

Comment author: Thomas 21 August 2017 03:44:03PM 0 points [-]

Sure, Al Gore's An Inconvenient Truth. He hasn't returned his Nobel prize, so this still stands.

Comment author: g_pepper 21 August 2017 04:10:21PM 0 points [-]

OK, noted, and thanks. I haven't actually read An Inconvenient Truth.

But I think most current scientific estimates are lower, so "reigning supreme above all the sciences" still seems a bit hyperbolic.

Comment author: Thomas 21 August 2017 04:17:34PM 0 points [-]

Okay, well. Next time I'll ask how fast the world ocean is losing water. But that's for next time. We had to eliminate this fast-rising possibility first.

Comment author: Oscar_Cunningham 21 August 2017 03:54:10PM 0 points [-]

Everyone knows Peace prizes don't count.

Comment author: Thomas 21 August 2017 04:21:26PM 0 points [-]

Everyone knows Academy Awards do count. He has an Oscar, too.

Comment author: Manfred 21 August 2017 02:05:24PM 0 points [-]

Where and how some people see three Amazons on Antarctica is a mystery to me. The amount of ice falling directly into the sea is quite pathetic as well.

The Amazon begins distributed across Brazil, as the occasional drops of rain. Then it comes together because of the shape and material of the landscape, and flows into streams, which join into rivers, which feed one big river. If global warming is causing Antarctica to lose mass, do you expect the same thing to happen in Antarctica, with meltwater beginning distributed across the surface, and then collecting into rivers and streams?

Comment author: Thomas 21 August 2017 02:10:35PM 0 points [-]

Yes. How else could it be?

Comment author: Manfred 21 August 2017 02:41:32PM 0 points [-]

How about glacial flow? Ice doesn't move fast, but it does move. It can postpone melting until it's in contact with seawater. What do you think the ratio of mass moved by rivers vs. glaciers is in Antarctica?

Comment author: Thomas 22 August 2017 08:24:17PM 0 points [-]

How about glacial flow?

A solid-state river, promptly melting in the icy, ice-covered ocean, is even less plausible than a large watery river. Don't you think so?

Comment author: Thomas 21 August 2017 03:24:32PM *  0 points [-]

Using an optimized flux gate, ice discharge from Antarctica is 1932 ± 38 Gigatons per year (Gt yr^-1) in 2015, an increase of 35 ± 15 Gt yr^-1 from the time of the radar mapping.

That's about 0.4 Amazon.

Precipitation alone compensates for most of this. Almost 3 Amazons are still missing for the 6 meter sea rise in a century.

Besides ...

Icebergs generally range from 1 to 75 metres (3.3 to 246.1 ft) above sea level and weigh 100,000 to 200,000 metric tons

10 million icebergs per year? Per a few summer months? Highly unrealistic.
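A rough check of these figures (the per-iceberg size is a representative value from the quoted range, the Amazon discharge is the figure used upthread, and treating all discharge as water-equivalent is my simplification):

    discharge_kg_per_yr = 1932e12          # 1932 Gt of ice per year, as quoted
    water_m3_per_yr = discharge_kg_per_yr / 1000
    per_second = water_m3_per_yr / 3.16e7  # ~6.1e4 m^3/s
    print(per_second / 2.09e5)             # ~0.3 Amazon (closer to 0.4 with smaller Amazon figures)

    iceberg_m3 = 150_000 * 1000 / 1000     # ~1.5e5 m^3 for a 150,000-tonne iceberg
    print(water_m3_per_yr / iceberg_m3)    # ~1.3e7, on the order of 10 million icebergs per year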

Comment author: Manfred 21 August 2017 04:28:22PM 0 points [-]

Neat!

Glaciers don't have to form icebergs in order to melt. They can just melt where they meet the sea.

Almost 3 Amazons still missing for the 6 meters sea rise in a century

You know, now that you mention it, 6 meters sure is a lot. Where did you get that number from? See p. 1181 for IPCC projections.

Comment author: Oscar_Cunningham 21 August 2017 03:45:24PM 0 points [-]

Presumably there is some temperature that would cause that much sea level rise in that much time. In which case that water would leave Antarctica in one way or another.

Comment author: MaryCh 27 August 2017 04:10:35PM *  0 points [-]

What, to you, is the difference between a hardcore popular science book and a work of serious science commentary (publicistics)? It seems to me that the difference must be great. I miss the former kind, and I can't be alone in this, but it's the latter kind that gets published, weakly supported by the distributors, and occasionally sold.

By 'gets published' I mean here in Ukraine, although it might be true for other countries.

Comment author: halcyon 25 August 2017 02:29:13PM 0 points [-]

In the Less Wrong Sequences, Eliezer Yudkowsky argues against epiphenomenalism on the following basis: he says that under epiphenomenalism, the experience of seeing the color red fails to be a causal factor in the behavior that is consistent with our having seen the color red. However, it occurs to me that there could be an alternative explanation for that outcome. It could be that the human cognitive architecture is set up in such a way that light in the wavelength range we are culturally trained to recognize as red causes both the experience of seeing the color and the actions consistent with seeing it. Given the research showing that we decide to act before becoming conscious of our decision, such a setup would not surprise me if it were true.

Comment author: torekp 26 August 2017 10:21:54AM 0 points [-]

The point is literally semantic. "Experience" refers to (to put it crudely) the things that generally cause us to say "experience", because almost all words derive their reference from the things that cause their utterances (inscriptions, etc.). "Horse" means horse because horses typically occasion the use of "horse". If there were a language in which cows typically occasioned the word "horse", in that language "horse" would mean cow.

Comment author: halcyon 26 August 2017 11:59:01PM *  0 points [-]

I don't think epiphenomenalists are using words like "experience" in accordance with your definition. I'm no expert on epiphenomenalism, but they seem to be using subjective experience to refer to perception. Perception is distinct from external causes because we directly perceive only secondary qualities like colors and flavors rather than primary qualities like wavelengths and chemical compositions.

EY's point is that we behave as if we have seen the color red. So we have: 1. physical qualities, 2. perceived qualities, and 3. actions that accord with perception. To steelman epiphenomenalism, instead of 1 -> 2 -> 3, are other causal diagrams not possible, such as 1 -> 2 and 1 -> 3, mediated by the human cognitive architecture? (Or maybe even 1 -> 3 -> 2 in some cases, where we perceive something on the basis of having acted in certain ways.)

However, the main problem with your explanation is that even if we account for the representation of secondary qualities in the brain, that still doesn't explain how any kind of direct perception of anything at all is possible. This seems kind of important to the transhumanist project, since it would decide whether uploaded humans perceive anything or whether they are nothing but the output of numerical calculations. Perhaps this question is meaningless, but that's not demonstrated simply by pointing out that, one way or another, our actions sometimes accord with perception, right?

Comment author: torekp 27 August 2017 04:58:54PM 0 points [-]

We not only stop at red lights, we make statements like S1: "subjectively, red is closer to violet than it is to green." We have cognitive access both to "objective" phenomena like the family of wavelengths coming from the traffic light, and also to "subjective" phenomena of certain low-level sensory processing outputs. The epiphenomenalist has a theory on the latter. Your steelman is well taken, given this clarification.

By the way, the fact that there is a large equivalence class of wavelength combinations that will be perceived the same way does not make redness inherently subjective. There is an objective difference between a beam of light containing a photon mix that belongs to that class, and one that doesn't. The "primary-secondary quality" distinction, as usually conceived, is misleading at best. See the Ugly Duckling theorem.

Back to "subjective" qualities: when I say subjective-red is more similar to violet than to green, to what does "subjective-red" refer? On the usual theories of how words in general refer -- see above on "horses" and cows -- it must refer to the things that cause people to say S2: "subjectively this looks red when I wear these glasses" and the like.

Suppose the epiphenomenalist is a physicalist. He believes that subjective-red is brain activity A. But, by definition of epiphenomenalism, it's not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B. But now by our theory of reference, subjective-red is B, rather than A. If the epiphenomenalist is a dualist, a similar problem applies.

Comment author: halcyon 29 August 2017 08:13:11PM 0 points [-]

I don't see how you can achieve a reductionist ontology without positing a hierarchy of qualities. In order to propose a scientific reduction, we need at least two classes, one of which is reducible to the other. Perhaps "physical" and "perceived" qualities would be more specific than "primary" and "secondary" qualities.

Regarding your question, if the "1->2 and 1->3" theory is accurate, then I suppose when we say that "red is more like violet than green", certain wavelength ranges R are causing the human cognitive architecture to undertake some brain activity B that drives both the perception of color similarity A as well as behavior which accords with perception, C.

So it follows that "But, by definition of epiphenomenalism, it's not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B." is true, but "But now by our theory of reference, subjective-red is B, rather than A." is false. The problem comes from an inaccurate theory of reference which conflates the subset of brain activities that are a color perception A with the entirety of brain activities, which includes preconscious processes B that cause A as well as the behavior C of expressing sentences S1 and S2.

Regarding S2, I think there is an equivocation between different definitions of the word "subjective". This becomes clear when you consider that the light rays entering your eyes are objectively red. We should expect any correctly functioning human biological apparatus to report the object as appearing red in that situation. If subjective experiences are perceptions resulting from your internal mechanisms alone, then the item in question is objectively red. If the meaning of "subjective experience" is extended to include all misreportings of external states of affairs, then the item in question is subjectively red. This dilemma can be resolved by introducing more terms to disambiguate among the various possible meanings of the words we are using.

So in the end, it still comes down to a mereological fallacy, but not the ones that non-physicalists would prefer we end up with. Does that make sense?

This is an interesting example, actually. Do we have data on how universal perceptions of color similarities, etc. are? We find entire civilizations using some strange analogies in the historical record. For example, in the last century, the Chinese felt they were more akin to Russia than the West because the Russians were a land empire, whereas Westerners came via the sea like the barbaric Japanese who had started the Imjin war. Westerners had employed similar strong arm tactics to the Japanese, forcing China to buy opium and so on. Personally, I find it strange to base an entire theory of cultural kinship on the question of whether one comes by land or sea, but maybe that's just me.

Comment author: torekp 30 August 2017 10:58:49PM 0 points [-]

The core problem remains that, if some event A plays no causal role in any verbal behavior, it is impossible to see how any word or phrase could refer to A. (You've called A "color perception A", but I aim to dispute that.)

Suppose we come across the Greenforest people, who live near newly discovered species including the greater geckos. Greenforesters use the word "gumie" always and only when they are very near greater geckos. Since greater geckos are extremely well camouflaged, they can only be seen at short range. Also, all greater geckos are infested with microscopic gyrating gnats. Gyrating gnats make intense ultrasound energy, so whenever anyone is close to a greater gecko, their environment and even their brain is filled with ultrasound. When one's brain is filled with this ultrasound, the oxygen consumption by brain cells rises. Greenforesters are hunter-gatherers lacking either microscopes or ultrasound detectors.

To what does "gumie" refer: geckos, ultrasound, or neural oxygen consumption? It's a no-brainer. Greenforesters can't talk about ultrasound or neural oxygen: those things play no causal role in their talk. Even though ultrasound and neural oxygen are both inside the speakers, and in that sense affect them, since neither one affects their talk, that's not what the talk is about.

Mapping this causal structure to the epiphenomenalist story above: geckos are like photon-wavelengths R, ultrasound in brain is like brain activity B, oxygen consumption is like "color perception" A, and utterances of "gumie" are like utterances S1 and S2. Only now I hope you can see why I put scare quotes around "color perception". Because color perception is something we can talk about.

Comment author: halcyon 31 August 2017 01:26:25PM *  0 points [-]

I'm not sure that analogy can be extended to our cognitive processes, since we know for a fact that: 1. We talk about many things, such as free will, whose existence is controversial at best, and 2. Most of the processes causally leading to verbal expression are preconscious. There is no physical cause preventing us from talking about perceptions that our verbal mechanisms don't have direct causal access to, for reasons similar to the reasons that we talk about free will.

Why must A cause C for C to be able to accurately refer to A? Correlation through indirect causation could be good enough for everyday purposes. I mean, you may think the coincidence is too perfect that we usually happen to experience whatever it is we talk about, but is it true that we can always talk about whatever we experience? (This is an informal argument at best, but I'm hoping it will contradict one of your preconceptions.)

Comment author: torekp 02 September 2017 06:59:28PM 0 points [-]

I don't say that we can talk about every experience, only that if we do talk about it, then the basic words/concepts we use are about things that influence our talk. Also, the causal chain can be as indirect as you like: A causes B causes C ... causes T, where T is the talk; the talk can still be about A. It just can't be about Z, where Z is something which never appears in any chain leading to T.

I just now added the caveat "basic" because you have a good point about free will. (I assume you mean contracausal "free will". I think calling that "free will" is a misnomer, but that's off topic.) Using the basic concepts "cause", "me", "action", and "thing" and combining these with logical connectives, someone can say "I caused my action and nothing caused me to cause my action" and they can label this complex concept "free will". And that may have no referent, so such "free will" never causes anything. But the basic words that were used to define that term, do have referents, and do cause the basic words to be spoken. Similarly with "unicorn", which is shorthand for (roughly) a "single horned horse-like animal".

An eliminativist could hold that mental terms like "qualia" are referentless complex concepts, but an epiphenomenalist can't.

Comment author: whpearson 21 August 2017 07:50:01PM 0 points [-]

Is there any appetite for trying to create a collective fox view of the future?

Model the world under various assumptions (energy consumption predictions + economic growth + limits to the Earth's energy dissipation + intelligence growth, etc.) and try to wrangle it into models that are combined together and updated collectively?

Comment author: Tenoke 21 August 2017 12:03:14PM 0 points [-]

After reading yet another article which mentions the phrase 'killer robots' 5 times and has a photo of the Terminator (and RoboCop for a bonus), I've drafted a short email asking the author to stop using this vivid but highly misleading metaphor.

I'm going to start sending this same email to other journalists that do the same from now on. I am not sure how big the impact will be, but after the email is already drafted sending it to new people is pretty low effort and there's the potential that some journalists will think twice before referencing Terminator in AI Safety discussions, potentially improving the quality of the discourse a little.

The effect of this might be slightly larger if more people do this.

Comment author: WalterL 22 August 2017 06:21:28PM 4 points [-]

I've always liked the phrase "The problem isn't Terminator, it is King Midas. It isn't that AI will suddenly 'decide' to kill us, it is that we will tell it to without realizing it." I forget where I saw that first, but it usually gets the conversation going in the right direction.

Comment author: turchin 25 August 2017 11:03:54AM *  0 points [-]

The same is true for the Terminator plot, where Skynet got a command to self-preserve by all means, and concluded that killing humans would prevent it from being turned off.

Comment author: WalterL 27 August 2017 10:12:19PM 0 points [-]

I don't remember Skynet getting a command to self preserve by any means. I thought the idea was that it 'became self aware', and reasoned that it had better odds of surviving if it massacred everyone.

Comment author: turchin 27 August 2017 11:13:36PM 0 points [-]

It could be a way to turn the conversation from the Terminator topic to the value alignment topic without directly confronting the person.

Comment author: ChristianKl 21 August 2017 01:09:20PM 4 points [-]

The fact that you engage with the article and share it might suggest to the author that he did everything right. The idea that your email will discourage the author from writing similar articles might be mistaken.

Secondly, calling autonomous weapons killer robots isn't far off the mark. The policy question of whether or not to allow autonomous weapons is distinct from AGI.

Comment author: moridinamael 21 August 2017 04:52:46PM *  0 points [-]

The type of engagement that the writer of the article wants is the kind that leads to sharing. If Tenoke is specifically stating their intent not to share the content, it's not a viral kind of engagement. There is a big difference between seeing a quote-with-retweet captioned "This is terrible!" and receiving a private email telling them to stop.

Comment author: Tenoke 21 August 2017 01:55:32PM *  0 points [-]

The fact that you engage with the article and share it might suggest to the author that he did everything right.

True, but this is one of the less bad articles that have Terminator references (as it makes a bit more sense in this specific context) so I mind less that I am sharing it. It's mostly significant insofar as being one I saw today that prompted me to make a template email.

The idea that your email will discourage the author from writing similar articles might be mistaken.

I can see it having no influence on some journalist, but again

I am not sure how big the impact will be, but after the email is already drafted sending it to new people is pretty low effort and there's the potential that some journalists will think twice..


Secondly, calling autonomous weapons killer robots isn't far off the mark.

It's still fairly misleading, although a lot less than in AGI discussions.

The policy question of whether or not to allow autonomous weapons is distinct from AGI.

I am not explicitly talking about AGI either.

Comment author: ChristianKl 21 August 2017 03:03:24PM 2 points [-]

I can see it having no influence on some journalist, but again

My point wasn't that it creates no impact but that you show the journalist by emailing him that his article is engaging. This could encourage him to write more articles like this.